hexsha (stringlengths 40–40) | size (int64 6–14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6–260) | max_stars_repo_name (stringlengths 6–119) | max_stars_repo_head_hexsha (stringlengths 40–41) | max_stars_repo_licenses (sequence) | max_stars_count (int64 1–191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24–24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24–24 ⌀) | max_issues_repo_path (stringlengths 6–260) | max_issues_repo_name (stringlengths 6–119) | max_issues_repo_head_hexsha (stringlengths 40–41) | max_issues_repo_licenses (sequence) | max_issues_count (int64 1–67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24–24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24–24 ⌀) | max_forks_repo_path (stringlengths 6–260) | max_forks_repo_name (stringlengths 6–119) | max_forks_repo_head_hexsha (stringlengths 40–41) | max_forks_repo_licenses (sequence) | max_forks_count (int64 1–105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24–24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24–24 ⌀) | avg_line_length (float64 2–1.04M) | max_line_length (int64 2–11.2M) | alphanum_fraction (float64 0–1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ecf52ada3cf8fa6bda8a4103a774e3e88ab03f44 | 9,780 | ipynb | Jupyter Notebook | webcast_materials/01-Large-File-Read.ipynb | ctdorris/excel-to-python-course | a637a4a22189f017754d7f4b80a00e55d157bd6e | [
"MIT"
] | 79 | 2020-08-29T00:12:08.000Z | 2022-03-12T12:18:00.000Z | webcast_materials/01-Large-File-Read.ipynb | keiths3/excel-to-python-course | 54d39bb9e18d065da58fda55cbcce17d26b5d9ad | [
"MIT"
] | 3 | 2020-12-31T21:33:06.000Z | 2021-08-30T21:16:45.000Z | webcast_materials/01-Large-File-Read.ipynb | keiths3/excel-to-python-course | 54d39bb9e18d065da58fda55cbcce17d26b5d9ad | [
"MIT"
] | 36 | 2020-08-29T03:56:29.000Z | 2022-03-12T12:18:01.000Z | 35.434783 | 593 | 0.438037 | [
[
[
"### Gather data\n\nExcel has ~1M row limit which is easy to exceed today. Pandas can read these files and process them efficiently.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom pathlib import Path",
"_____no_output_____"
],
[
"# CSV file with 1.1M rows (96MB)\n# Can be read directly from the internet\ndf = pd.read_csv('https://talk-python-course-videos.nyc3.digitaloceanspaces.com/large-files/2019_customer_transactions.csv')\n\n# Read from a local file\n# File is not stored in github\n#csv_file = Path.cwd() / 'data' / 'raw' / '2019_customer_transactions.csv'\n#df = pd.read_csv(csv_file)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1100000 entries, 0 to 1099999\nData columns (total 10 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 cust_num 1100000 non-null object \n 1 sku 1100000 non-null object \n 2 category 1100000 non-null object \n 3 qty 1100000 non-null int64 \n 4 list_price 1100000 non-null float64\n 5 discount_rate 1100000 non-null float64\n 6 invoice_price 1100000 non-null float64\n 7 invoice_num 1100000 non-null int64 \n 8 invoice_date_time 1100000 non-null object \n 9 invoice_total 1100000 non-null float64\ndtypes: float64(4), int64(2), object(4)\nmemory usage: 83.9+ MB\n"
],
[
"# Total the invoice amounts and count how many invoices there were\nagg_func = {'invoice_total': ['sum'], 'invoice_num': ['nunique']}\ndf.groupby(['category']).agg(agg_func).style.format('{0:,.0f}')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
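The notebook in the row above loads a 1.1M-row CSV into pandas in a single call. As a minimal standalone sketch (not part of the dataset row above, and assuming a hypothetical local copy of the same CSV), the chunked-read pattern below keeps memory bounded for files too large to load at once while producing the same per-category totals:

```python
import pandas as pd

# Hypothetical local copy of the transactions file used in the notebook above.
csv_file = "2019_customer_transactions.csv"

# Read the CSV in bounded-size chunks, aggregate each chunk as it streams in,
# then combine the per-chunk results into the final per-category totals.
partial_totals = []
for chunk in pd.read_csv(csv_file, chunksize=250_000):
    partial_totals.append(chunk.groupby("category")["invoice_total"].sum())

category_totals = pd.concat(partial_totals).groupby(level=0).sum()
print(category_totals)
```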
ecf53937fd5bc6827250ce9b93f33762d9d18e8d | 6,656 | ipynb | Jupyter Notebook | codeSnippets/2_averageChunkOfData.ipynb | praiteri/TeachingNotebook | 75ee8baf8ef81154dffcac556d4739bf73eba712 | [
"MIT"
] | null | null | null | codeSnippets/2_averageChunkOfData.ipynb | praiteri/TeachingNotebook | 75ee8baf8ef81154dffcac556d4739bf73eba712 | [
"MIT"
] | null | null | null | codeSnippets/2_averageChunkOfData.ipynb | praiteri/TeachingNotebook | 75ee8baf8ef81154dffcac556d4739bf73eba712 | [
"MIT"
] | 1 | 2022-02-23T11:36:12.000Z | 2022-02-23T11:36:12.000Z | 27.618257 | 368 | 0.584886 | [
[
[
"# Click \"Edit App\" to see the code\n# Averaging a subset of data\n\nIn this notebook we'll demonstrate how to compute the average of a chunk of data from a large dataset.\nWe can start from loading the Python packages",
"_____no_output_____"
],
[
"# The Jupyter Notebook\nFirst of all we import the Python packages",
"_____no_output_____"
]
],
[
[
"# python packages\nimport pandas as pd # DataFrames and reading CSV files\nimport numpy as np # Numerical libraries\nimport matplotlib.pyplot as plt # Plotting library\nfrom lmfit import Model # Least squares fitting library",
"_____no_output_____"
]
],
[
[
"We then read a data file into a DataFrame, and rename the columns",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv(\"../miscData/random1.csv\")\ndata.columns = (\"X\",\"Y\")\nprint(data)",
"_____no_output_____"
]
],
[
[
"The most common scenario is to compute the average of a chunk of data, discarding the initial and/or final part of the data set. We can therefore define two variables; the index of the first point to be included in the average and the total number of points to be averaged. Alternatively one could set the index of the last point to be included in the average\nRemember that Python starts counting from zero",
"_____no_output_____"
]
],
[
[
"# Total number of points\ntotalNumberOfValues = len(data[\"Y\"]) \n# First element to be included in the average\nfirstValue = 0\n# Number of elements to be included in the average\nnumberOfValuesToAverage = 3 \n# Last element to be included in the average\nlastValue = firstValue + numberOfValuesToAverage - 1\nprint(\"Total number of points in the DataFrame :\",totalNumberOfValues)\nprint(\"First element to be included in the average :\",firstValue)\nprint(\"Last element to be included in the average :\",lastValue)\nprint(\"Number of values to be included in the average :\",\n numberOfValuesToAverage)",
"_____no_output_____"
]
],
[
[
"Let's print the values in the second column that corresponds to the interval we have chosen.\nWe can also to check they are what we expect.",
"_____no_output_____"
]
],
[
[
"values = data.iloc[firstValue:lastValue+1][\"Y\"].values\nprint(values)",
"_____no_output_____"
]
],
[
[
"* Note how in the cell above we used a different syntax for selecting the elements of the data frame, **iloc[:][\"Y\"]**. That is equivalent to the following code.\n* Also note how we used **.values** to convert the DataFrame to an array",
"_____no_output_____"
]
],
[
[
"v0 = data[\"Y\"].values\nv1 = v0[firstValue:lastValue+1]\nprint(v1)",
"_____no_output_____"
]
],
[
[
"We can now compute the average of the numbers in the array using the **mean** function in NumPy.",
"_____no_output_____"
]
],
[
[
"average = np.mean(values)\nprint(\"Average :\",average)",
"_____no_output_____"
]
],
[
[
"For some types of statistical analysis, like bootstrapping, we might be interested in randomly selecting a subset of data, to reduce the human bias in the analysis. In order to do this we can use the **ramdom.choice()** function in NumPy to create an array of random numbers taken between 0 and the size of out sample (_numberOfValues_).\nThis array will contain the indices of the elements that we'll pick from our global array.",
"_____no_output_____"
]
],
[
[
"numberOfValues = 20 \nrandomIndices = np.random.choice(totalNumberOfValues, \n replace=False, \n size=numberOfValues)\nprint(randomIndices)",
"_____no_output_____"
]
],
[
[
"We can then use that array of number to create a array with the data that we are going to average.",
"_____no_output_____"
]
],
[
[
"randomValues = data.iloc[randomIndices][\"Y\"].values\nprint(randomValues)",
"_____no_output_____"
]
],
[
[
"We can then compute the average of _randomValues_",
"_____no_output_____"
]
],
[
[
"averageOfRandomValues = np.mean(randomValues)\nprint(\"Number of randomly selected values :\",numberOfValues)\nprint(\"Average of the randomly selected values :\",averageOfRandomValues)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
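As a self-contained sketch of the two averaging patterns walked through in the notebook above (contiguous chunk vs. random subset), using synthetic data rather than the course's `random1.csv`:

```python
import numpy as np

rng = np.random.default_rng(42)                  # seeded generator for reproducibility
y = rng.normal(loc=10.0, scale=2.0, size=1_000)  # stand-in for the "Y" column

# Average of a contiguous chunk, skipping the first part of the series.
first_value, number_of_values = 100, 50
chunk_mean = np.mean(y[first_value:first_value + number_of_values])

# Average of a random subset drawn without replacement (bootstrap-style selection).
random_indices = rng.choice(len(y), size=20, replace=False)
subset_mean = np.mean(y[random_indices])

print(f"chunk mean:  {chunk_mean:.3f}")
print(f"random mean: {subset_mean:.3f}")
```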
ecf549903b7f88743cd34a0aa27b185ad12d5489 | 81,037 | ipynb | Jupyter Notebook | docs/bathymetry/ExploringBagFiles.ipynb | mdunphy/tools | 2f77a936a4d649d5e28189e2d230490fd7ef81a5 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2020-06-23T16:09:00.000Z | 2022-01-11T17:37:37.000Z | docs/bathymetry/ExploringBagFiles.ipynb | mdunphy/tools | 2f77a936a4d649d5e28189e2d230490fd7ef81a5 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2020-11-19T17:51:25.000Z | 2021-04-07T21:36:07.000Z | docs/bathymetry/ExploringBagFiles.ipynb | mdunphy/tools | 2f77a936a4d649d5e28189e2d230490fd7ef81a5 | [
"ECL-2.0",
"Apache-2.0"
] | 4 | 2020-05-15T03:34:47.000Z | 2021-11-24T22:39:16.000Z | 89.346196 | 50,238 | 0.79597 | [
[
[
"# Exploring `.bag` Bathymetry Data Files\n\nAn exploration of data and metadata in Bathymetric Attributed Grid (BAG) files.",
"_____no_output_____"
],
[
"References:\n\n* BAG website: https://marinemetadata.org/references/bag\n* Format Specification Document: http://www.opennavsurf.org/papers/ons_fsd.pdf\n* A slightly dated, Python 2 based video lesson on accessing BAG files: https://www.youtube.com/watch?v=dEtC6bRcjvc",
"_____no_output_____"
],
[
"Working environment for this notebook:\n\n* Python 3\n* `conda` packages:\n\n * `h5py` - Python interface to HDF5 format used by BAG\n * `lxml` - XML parser and manipulation library to access BAG metadata\n * `numpy` - for n-dimensional arrays\n * `matplotlib` - for plotting\n * `notebook` - Jupyter notebook\n \n\"Keep Calm and Conda Install\"",
"_____no_output_____"
],
[
"If you are looking at this in the Salish Sea Tools docs at\nhttp://salishsea-meopar-tools.readthedocs.io/en/latest/bathymetry/ExploringBagFiles.html,\nyou can find the source notebook that generated the page in the Salish Sea project\n[tools repo](https://github.com/SalishSeaCast/tools)\nat `tools/bathymetry/ExploringBagFiles.ipynb`\nor download the notebook by itself\n(instead of cloning the [tools repo](https://github.com/SalishSeaCast/tools) to get it)\nfrom\nhttp://nbviewer.jupyter.org/github/SalishSeaCast/tools/blob/master/bathymetry/ExploringBagFiles.ipynb.",
"_____no_output_____"
]
],
[
[
"from io import BytesIO\n\nimport h5py\nfrom lxml import etree\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## BAG Dataset\n\nLoad the BAG dataset and explore some of its basic attributes:",
"_____no_output_____"
]
],
[
[
"bag = h5py.File('/ocean/sallen/allen/research/MEOPAR/chs_bathy/092B.bag')",
"_____no_output_____"
],
[
"print(type(bag))\nprint(bag.name)\nprint(bag.filename)",
"<class 'h5py._hl.files.File'>\n/\n/ocean/sallen/allen/research/MEOPAR/chs_bathy/092B.bag\n"
],
[
"for item in bag.items():\n print(item)\n \nfor value in bag.values():\n print(value)",
"('BAG_root', <HDF5 group \"/BAG_root\" (4 members)>)\n<HDF5 group \"/BAG_root\" (4 members)>\n"
],
[
"list(bag['BAG_root'].items())",
"_____no_output_____"
]
],
[
[
"The list above contains the 4 elements that the BAG specification tells us\nshould be in the file:\n\n* `elevation` is the depths as negative 32-bit floats, with `1.0e6` as the \"no data\" value (land, typically)\n* `metadata` is the BAG metadata, a blob of XML\n* `tracking_list` is adjustments to the `elevation` data values made by a hydrographer\n* `uncertainty` is the vertical uncertainty in the `elevation` data values\n\nNote that under Python 3 the `h5py` library maked heavy use of `memoryview` objects\nwhich are iterators.\nThe transformation to a `list` object above,\nor the use of a `for` loop above that collects the items from the `memoryview`.\n\nOne odd thing to note is that the metadata is stored as a collection of 1-character strings\nwhich turn out to be single bytes in Python 3.\nWe're going to have to do something about that...\n\nPeeling away the HDF5 group layer:",
"_____no_output_____"
]
],
[
[
"root = bag['BAG_root']\nprint(root.name)\nprint(root.parent)\nlist(root.items())",
"/BAG_root\n<HDF5 group \"/\" (1 members)>\n"
]
],
[
[
"## The `elevation` Element\n\nPulling the `elevation` dataset out of the BAG,\nand the depths data out of the dataset:",
"_____no_output_____"
]
],
[
[
"elev_node = root['elevation']\nprint(type(elev_node))",
"<class 'h5py._hl.dataset.Dataset'>\n"
],
[
"elev = elev_node.value\nprint(type(elev))",
"<class 'numpy.ndarray'>\n"
],
[
"print(elev.min(), elev.max())",
"-341.917 1e+06\n"
]
],
[
[
"As noted above `1e+06` indicates no data at a point,\ntypically meaning land.\nLet's replace those with NumPy `NaN`s so that we can work with the data more easily:",
"_____no_output_____"
]
],
[
[
"elev[elev > 9e5] = np.NAN\nprint(np.nanmin(elev), np.nanmax(elev))",
"-341.917 4.2\n"
],
[
"fig, ax = plt.subplots(1, 1)\nax.imshow(elev)\nax.invert_yaxis()",
"_____no_output_____"
]
],
[
[
"## The `metadata` Element\n\nPulling the `metadata` element out of the BAG,\nand getting it into a form that we can work with:",
"_____no_output_____"
]
],
[
[
"metadata_node = root['metadata']\nprint(type(metadata_node))\nprint(metadata_node)",
"<class 'h5py._hl.dataset.Dataset'>\n<HDF5 dataset \"metadata\": shape (9730,), type \"|S1\">\n"
]
],
[
[
"As noted above,\nthe metadata is a collection of single characters in the form of bytes.\nWe need to collect those bytes into a buffer and parse them to get an XML tree object\nthat we can work with in code:",
"_____no_output_____"
]
],
[
[
"buffer = BytesIO(metadata_node.value)\ntree = etree.parse(buffer)\nroot = tree.getroot()",
"_____no_output_____"
]
],
[
[
"Now we can get a somewhat readable rendering of the metadata in all its verbose XML glory:",
"_____no_output_____"
]
],
[
[
"print(etree.tostring(root, pretty_print=True).decode('ascii'))",
"<gmi:MI_Metadata xmlns:gmi=\"http://www.isotc211.org/2005/gmi\" xmlns:gmd=\"http://www.isotc211.org/2005/gmd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:gml=\"http://www.opengis.net/gml/3.2\" xmlns:gco=\"http://www.isotc211.org/2005/gco\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:bag=\"http://www.opennavsurf.org/schema/bag\">\n <gmd:fileIdentifier>\n <gco:CharacterString>2db1df98-90f2-4e20-a91e-6089111e2f5d</gco:CharacterString>\n </gmd:fileIdentifier>\n <gmd:language>\n <gmd:LanguageCode codeList=\"http://www.loc.gov/standards/iso639-2/\" codeListValue=\"eng\">eng</gmd:LanguageCode>\n </gmd:language>\n <gmd:characterSet>\n <gmd:MD_CharacterSetCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode\" codeListValue=\"utf8\">utf8</gmd:MD_CharacterSetCode>\n </gmd:characterSet>\n <gmd:hierarchyLevel>\n <gmd:MD_ScopeCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ScopeCode\" codeListValue=\"dataset\">dataset</gmd:MD_ScopeCode>\n </gmd:hierarchyLevel>\n <gmd:contact>\n <gmd:CI_ResponsibleParty>\n <gmd:individualName>\n <gco:CharacterString>dillt</gco:CharacterString>\n </gmd:individualName>\n <gmd:organisationName>\n <gco:CharacterString>CHS</gco:CharacterString>\n </gmd:organisationName>\n <gmd:positionName>\n <gco:CharacterString> MDH</gco:CharacterString>\n </gmd:positionName>\n <gmd:role>\n <gmd:CI_RoleCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode\" codeListValue=\"pointOfContact\">pointOfContact</gmd:CI_RoleCode>\n </gmd:role>\n </gmd:CI_ResponsibleParty>\n </gmd:contact>\n <gmd:dateStamp>\n <gco:Date>2014-02-07</gco:Date>\n </gmd:dateStamp>\n <gmd:metadataStandardName>\n <gco:CharacterString>ISO 19115</gco:CharacterString>\n </gmd:metadataStandardName>\n <gmd:metadataStandardVersion>\n <gco:CharacterString>2003/Cor.1:2006</gco:CharacterString>\n </gmd:metadataStandardVersion>\n <gmd:spatialRepresentationInfo>\n <gmd:MD_Georectified>\n <gmd:numberOfDimensions>\n <gco:Integer>2</gco:Integer>\n </gmd:numberOfDimensions>\n <gmd:axisDimensionProperties>\n <gmd:MD_Dimension>\n <gmd:dimensionName>\n <gmd:MD_DimensionNameTypeCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_DimensionNameTypeCode\" codeListValue=\"row\">row</gmd:MD_DimensionNameTypeCode>\n </gmd:dimensionName>\n <gmd:dimensionSize>\n <gco:Integer>337</gco:Integer>\n </gmd:dimensionSize>\n <gmd:resolution>\n <gco:Measure uom=\"Metres\">500</gco:Measure>\n </gmd:resolution>\n </gmd:MD_Dimension>\n </gmd:axisDimensionProperties>\n <gmd:axisDimensionProperties>\n <gmd:MD_Dimension>\n <gmd:dimensionName>\n <gmd:MD_DimensionNameTypeCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_DimensionNameTypeCode\" codeListValue=\"column\">column</gmd:MD_DimensionNameTypeCode>\n </gmd:dimensionName>\n <gmd:dimensionSize>\n <gco:Integer>448</gco:Integer>\n </gmd:dimensionSize>\n <gmd:resolution>\n <gco:Measure uom=\"Metres\">500</gco:Measure>\n </gmd:resolution>\n </gmd:MD_Dimension>\n </gmd:axisDimensionProperties>\n <gmd:cellGeometry>\n <gmd:MD_CellGeometryCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CellGeometryCode\" codeListValue=\"point\">point</gmd:MD_CellGeometryCode>\n </gmd:cellGeometry>\n <gmd:transformationParameterAvailability>\n <gco:Boolean>1</gco:Boolean>\n </gmd:transformationParameterAvailability>\n <gmd:checkPointAvailability>\n <gco:Boolean>0</gco:Boolean>\n 
</gmd:checkPointAvailability>\n <gmd:cornerPoints>\n <gml:Point gml:id=\"id1\">\n <gml:coordinates decimal=\".\" cs=\",\" ts=\" \">-13804000.000000000000,6075000.000000000000 -13580500.000000000000,6243000.000000000000</gml:coordinates>\n </gml:Point>\n </gmd:cornerPoints>\n <gmd:pointInPixel>\n <gmd:MD_PixelOrientationCode>center</gmd:MD_PixelOrientationCode>\n </gmd:pointInPixel>\n </gmd:MD_Georectified>\n </gmd:spatialRepresentationInfo>\n <gmd:referenceSystemInfo>\n <gmd:MD_ReferenceSystem>\n <gmd:referenceSystemIdentifier>\n <gmd:RS_Identifier>\n <gmd:code>\n <gco:CharacterString>PROJCS[\"WRLDMERC\",\n GEOGCS[\"unnamed\",\n DATUM[\"WGS_1984\",\n SPHEROID[\"WGS_1984\",6378137,298.2572201434276],\n TOWGS84[0,0,0,0,0,0,0]],\n PRIMEM[\"Greenwich\",0],\n UNIT[\"degree\",0.0174532925199433],\n EXTENSION[\"Scaler\",\"0,0,0,0.01,0.01,0.0001\"],\n EXTENSION[\"Source\",\"CARIS\"]],\n PROJECTION[\"Mercator_1SP\"],\n PARAMETER[\"central_meridian\",0],\n PARAMETER[\"scale_factor\",1],\n PARAMETER[\"false_easting\",0],\n PARAMETER[\"false_northing\",0],\n UNIT[\"Meter\",1]]</gco:CharacterString>\n </gmd:code>\n <gmd:codeSpace>\n <gco:CharacterString>WKT</gco:CharacterString>\n </gmd:codeSpace>\n </gmd:RS_Identifier>\n </gmd:referenceSystemIdentifier>\n </gmd:MD_ReferenceSystem>\n </gmd:referenceSystemInfo>\n <gmd:referenceSystemInfo>\n <gmd:MD_ReferenceSystem>\n <gmd:referenceSystemIdentifier>\n <gmd:RS_Identifier>\n <gmd:code>\n <gco:CharacterString>VERT_CS[\"Unknown\", VERT_DATUM[Unknown, 2000]]</gco:CharacterString>\n </gmd:code>\n <gmd:codeSpace>\n <gco:CharacterString>WKT</gco:CharacterString>\n </gmd:codeSpace>\n </gmd:RS_Identifier>\n </gmd:referenceSystemIdentifier>\n </gmd:MD_ReferenceSystem>\n </gmd:referenceSystemInfo>\n <gmd:identificationInfo>\n <bag:BAG_DataIdentification>\n <gmd:citation>\n <gmd:CI_Citation>\n <gmd:title>\n <gco:CharacterString>BDB_92B_500m_WorldMerc_2014-02-05_extract_final.csar</gco:CharacterString>\n </gmd:title>\n <gmd:date>\n <gmd:CI_Date>\n <gmd:date>\n <gco:Date>2014-02-07</gco:Date>\n </gmd:date>\n <gmd:dateType>\n <gmd:CI_DateTypeCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_DateTypeCode\" codeListValue=\"creation\">creation</gmd:CI_DateTypeCode>\n </gmd:dateType>\n </gmd:CI_Date>\n </gmd:date>\n <gmd:citedResponsibleParty>\n <gmd:CI_ResponsibleParty>\n <gmd:individualName>\n <gco:CharacterString>dillt</gco:CharacterString>\n </gmd:individualName>\n <gmd:organisationName>\n <gco:CharacterString>CHS</gco:CharacterString>\n </gmd:organisationName>\n <gmd:positionName>\n <gco:CharacterString> MDH</gco:CharacterString>\n </gmd:positionName>\n <gmd:role>\n <gmd:CI_RoleCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode\" codeListValue=\"originator\">originator</gmd:CI_RoleCode>\n </gmd:role>\n </gmd:CI_ResponsibleParty>\n </gmd:citedResponsibleParty>\n </gmd:CI_Citation>\n </gmd:citation>\n <gmd:abstract>\n <gco:CharacterString>unknown</gco:CharacterString>\n </gmd:abstract>\n <gmd:status>\n <gmd:MD_ProgressCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ProgressCode\" codeListValue=\"onGoing\">onGoing</gmd:MD_ProgressCode>\n </gmd:status>\n <gmd:spatialRepresentationType>\n <gmd:MD_SpatialRepresentationTypeCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_SpatialRepresentationTypeCode\" codeListValue=\"grid\">grid</gmd:MD_SpatialRepresentationTypeCode>\n </gmd:spatialRepresentationType>\n <gmd:language>\n 
<gmd:LanguageCode codeList=\"http://www.loc.gov/standards/iso639-2/\" codeListValue=\"eng\">eng</gmd:LanguageCode>\n </gmd:language>\n <gmd:characterSet>\n <gmd:MD_CharacterSetCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode\" codeListValue=\"utf8\">utf8</gmd:MD_CharacterSetCode>\n </gmd:characterSet>\n <gmd:topicCategory>\n <gmd:MD_TopicCategoryCode>elevation</gmd:MD_TopicCategoryCode>\n </gmd:topicCategory>\n <gmd:extent>\n <gmd:EX_Extent>\n <gmd:geographicElement>\n <gmd:EX_GeographicBoundingBox>\n <gmd:westBoundLongitude>\n <gco:Decimal>-124.003</gco:Decimal>\n </gmd:westBoundLongitude>\n <gmd:eastBoundLongitude>\n <gco:Decimal>-121.996</gco:Decimal>\n </gmd:eastBoundLongitude>\n <gmd:southBoundLatitude>\n <gco:Decimal>47.9995</gco:Decimal>\n </gmd:southBoundLatitude>\n <gmd:northBoundLatitude>\n <gco:Decimal>49.0024</gco:Decimal>\n </gmd:northBoundLatitude>\n </gmd:EX_GeographicBoundingBox>\n </gmd:geographicElement>\n </gmd:EX_Extent>\n </gmd:extent>\n <bag:verticalUncertaintyType>\n <bag:BAG_VertUncertCode codeList=\"http://www.opennavsurf.org/schema/bag/bagCodelists.xml#BAG_VertUncertCode\" codeListValue=\"unknown\">unknown</bag:BAG_VertUncertCode>\n </bag:verticalUncertaintyType>\n </bag:BAG_DataIdentification>\n </gmd:identificationInfo>\n <gmd:dataQualityInfo>\n <gmd:DQ_DataQuality>\n <gmd:scope>\n <gmd:DQ_Scope>\n <gmd:level>\n <gmd:MD_ScopeCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ScopeCode\" codeListValue=\"dataset\">dataset</gmd:MD_ScopeCode>\n </gmd:level>\n </gmd:DQ_Scope>\n </gmd:scope>\n <gmd:lineage>\n <gmd:LI_Lineage>\n <gmd:processStep>\n <bag:BAG_ProcessStep>\n <gmd:description>\n <gco:CharacterString/>\n </gmd:description>\n <gmd:dateTime>\n <gco:DateTime/>\n </gmd:dateTime>\n <bag:trackingId>\n <gco:CharacterString/>\n </bag:trackingId>\n </bag:BAG_ProcessStep>\n </gmd:processStep>\n <gmd:processStep>\n <bag:BAG_ProcessStep>\n <gmd:description>\n <gco:CharacterString>Designated soundings applied by automated procedure.</gco:CharacterString>\n </gmd:description>\n <gmd:dateTime>\n <gco:DateTime/>\n </gmd:dateTime>\n <bag:trackingId>\n <gco:CharacterString>0</gco:CharacterString>\n </bag:trackingId>\n </bag:BAG_ProcessStep>\n </gmd:processStep>\n </gmd:LI_Lineage>\n </gmd:lineage>\n </gmd:DQ_DataQuality>\n </gmd:dataQualityInfo>\n <gmd:metadataConstraints>\n <gmd:MD_LegalConstraints>\n <gmd:useConstraints>\n <gmd:MD_RestrictionCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_RestrictionCode\" codeListValue=\"otherRestrictions\">otherRestrictions</gmd:MD_RestrictionCode>\n </gmd:useConstraints>\n <gmd:otherConstraints>\n <gco:CharacterString>Not for navigation, not to be redistributed</gco:CharacterString>\n </gmd:otherConstraints>\n </gmd:MD_LegalConstraints>\n </gmd:metadataConstraints>\n <gmd:metadataConstraints>\n <gmd:MD_SecurityConstraints>\n <gmd:classification>\n <gmd:MD_ClassificationCode codeList=\"http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ClassificationCode\" codeListValue=\"unclassified\">unclassified</gmd:MD_ClassificationCode>\n </gmd:classification>\n <gmd:userNote>\n <gco:CharacterString>Contact Pete Wills for inquiries: [email protected], 250-363-6384</gco:CharacterString>\n </gmd:userNote>\n </gmd:MD_SecurityConstraints>\n </gmd:metadataConstraints>\n</gmi:MI_Metadata>\n\n"
]
],
[
[
"To get information out of the tree we need to deal with the\nnamespaces that are used for the various tags:",
"_____no_output_____"
]
],
[
[
"root.nsmap",
"_____no_output_____"
]
],
[
[
"Building the tags that we need to get to the resolution,\nand then walking the tree to get the resolution and its units:",
"_____no_output_____"
]
],
[
[
"sri = etree.QName(root.nsmap['gmd'], 'spatialRepresentationInfo').text\nadp = etree.QName(root.nsmap['gmd'], 'axisDimensionProperties').text\ndim = etree.QName(root.nsmap['gmd'], 'MD_Dimension').text\nres = etree.QName(root.nsmap['gmd'], 'resolution').text\nres_meas = etree.QName(root.nsmap['gco'], 'Measure').text",
"_____no_output_____"
],
[
"resolution = (\n root\n .find('.//{}'.format(sri))\n .find('.//{}'.format(adp))\n .find('.//{}'.format(dim))\n .find('.//{}'.format(res))\n .find('.//{}'.format(res_meas))\n)\nprint(resolution.text, resolution.get('uom'))",
"500 Metres\n"
]
],
[
[
"There might be a more elegant way of doing the sequence of `find`s above\nif one were to dig more deeply into XPATH syntax.\n\nSimilarily for the data region boundaries:",
"_____no_output_____"
]
],
[
[
"id_info = etree.QName(root.nsmap['gmd'], 'identificationInfo').text\nbag_data_id = etree.QName(root.nsmap['bag'], 'BAG_DataIdentification').text\nextent = etree.QName(root.nsmap['gmd'], 'extent').text\nex_extent = etree.QName(root.nsmap['gmd'], 'EX_Extent').text\ngeo_el = etree.QName(root.nsmap['gmd'], 'geographicElement').text\ngeo_bb = etree.QName(root.nsmap['gmd'], 'EX_GeographicBoundingBox').text\n\nwest_bound_lon = etree.QName(root.nsmap['gmd'], 'westBoundLongitude').text\neast_bound_lon = etree.QName(root.nsmap['gmd'], 'eastBoundLongitude').text\nnorth_bound_lat = etree.QName(root.nsmap['gmd'], 'northBoundLatitude').text\nsouth_bound_lat = etree.QName(root.nsmap['gmd'], 'southBoundLatitude').text\n\ndecimal = etree.QName(root.nsmap['gco'], 'Decimal').text",
"_____no_output_____"
],
[
"bbox = (\n root\n .find('.//{}'.format(id_info))\n .find('.//{}'.format(bag_data_id))\n .find('.//{}'.format(extent))\n .find('.//{}'.format(ex_extent))\n .find('.//{}'.format(geo_el))\n .find('.//{}'.format(geo_bb))\n)\nwest_lon = (\n bbox\n .find('.//{}'.format(west_bound_lon))\n .find('.//{}'.format(decimal))\n)\nprint('west:', west_lon.text)\n\neast_lon = (\n bbox\n .find('.//{}'.format(east_bound_lon))\n .find('.//{}'.format(decimal))\n)\nprint('east:', east_lon.text)\n\nnorth_lat = (\n bbox\n .find('.//{}'.format(north_bound_lat))\n .find('.//{}'.format(decimal))\n)\nprint('north:', north_lat.text)\n\nsouth_lat = (\n bbox\n .find('.//{}'.format(south_bound_lat))\n .find('.//{}'.format(decimal))\n)\nprint('south:', south_lat.text)",
"west: -124.003\neast: -121.996\nnorth: 49.0024\nsouth: 47.9995\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
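The notebook above remarks that its chain of `.find()` calls could likely be replaced with a single XPath expression. A minimal sketch of that idea, using a toy XML snippet with the same `gmd`/`gco` namespace URIs (not the actual BAG metadata):

```python
from lxml import etree

# Toy metadata fragment using the same namespace URIs as the BAG file.
xml = b"""<root xmlns:gmd="http://www.isotc211.org/2005/gmd"
               xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:resolution><gco:Measure uom="Metres">500</gco:Measure></gmd:resolution>
</root>"""

root = etree.fromstring(xml)
ns = {"gmd": "http://www.isotc211.org/2005/gmd",
      "gco": "http://www.isotc211.org/2005/gco"}

# One namespace-aware XPath query replaces the chained .find() calls.
measure = root.xpath(".//gmd:resolution/gco:Measure", namespaces=ns)[0]
print(measure.text, measure.get("uom"))  # 500 Metres
```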
ecf54c9e72f456edf16c4b404ffa05d1f516ee98 | 2,804 | ipynb | Jupyter Notebook | 100days/day 07 - binary addition FSA.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 13 | 2021-03-11T00:25:22.000Z | 2022-03-19T00:19:23.000Z | 100days/day 07 - binary addition FSA.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 160 | 2021-04-26T19:04:15.000Z | 2022-03-26T20:18:37.000Z | 100days/day 07 - binary addition FSA.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 12 | 2021-04-26T19:43:01.000Z | 2022-01-31T08:36:29.000Z | 19.886525 | 81 | 0.462197 | [
[
[
"from itertools import zip_longest",
"_____no_output_____"
]
],
[
[
"## algorithm",
"_____no_output_____"
]
],
[
[
"# states\np0c0 = 0, {}\np1c0 = 1, {}\np0c1 = 0, {}\np1c1 = 1, {}\n\n# transitions between states\np0c0[1].update({(0, 0): p0c0, (1, 0): p1c0, (0, 1): p1c0, (1, 1): p0c1})\np1c0[1].update({(0, 0): p0c0, (1, 0): p1c0, (0, 1): p1c0, (1, 1): p0c1})\np0c1[1].update({(0, 0): p1c0, (1, 0): p0c1, (0, 1): p0c1, (1, 1): p1c1})\np1c1[1].update({(0, 0): p1c0, (1, 0): p0c1, (0, 1): p0c1, (1, 1): p1c1})\n\ndef add(x, y):\n x = map(int, reversed(x))\n y = map(int, reversed(y))\n z = []\n\n # simulate automaton\n value, transition = p0c0\n for r, s in zip_longest(x, y, fillvalue=0):\n value, transition = transition[r, s]\n z.append(value)\n\n # handle carry\n z.append(transition[0, 0][0])\n \n return ''.join(map(str, reversed(z)))",
"_____no_output_____"
]
],
[
[
"## run",
"_____no_output_____"
]
],
[
[
"add('1100100100100', '100100011000')",
"_____no_output_____"
],
[
"bin(0b1100100100100 + 0b100100011000)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
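For comparison with the finite-state-automaton approach in the notebook above, here is a minimal sketch of the same binary addition written with an explicit carry variable (illustrative only, not part of the original notebook; it mirrors the FSA by appending the final carry bit):

```python
from itertools import zip_longest

def add_binary(x: str, y: str) -> str:
    """Add two binary strings using an explicit carry bit."""
    bits_x = map(int, reversed(x))
    bits_y = map(int, reversed(y))
    out, carry = [], 0
    for a, b in zip_longest(bits_x, bits_y, fillvalue=0):
        total = a + b + carry
        out.append(total % 2)   # sum bit for this position
        carry = total // 2      # carry into the next position
    out.append(carry)           # final carry, mirroring the FSA version
    return ''.join(map(str, reversed(out)))

print(add_binary('1100100100100', '100100011000'))
print(bin(0b1100100100100 + 0b100100011000))
```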
ecf55f6acfb174fd17a89e897ed27e65d8994a92 | 24,441 | ipynb | Jupyter Notebook | 04 - Run Experiments.ipynb | HarshKothari21/mslearn-dp100 | 5edb988bf8af81018afa87b0c42cbff7682d684c | [
"MIT"
] | 1 | 2021-03-11T12:45:11.000Z | 2021-03-11T12:45:11.000Z | 04 - Run Experiments.ipynb | gosiaborzecka/mslearn-dp100 | f239fd89deb74b8808e79f452dab1b737a3c3070 | [
"MIT"
] | 2 | 2021-02-22T11:34:30.000Z | 2021-02-22T11:34:58.000Z | 04 - Run Experiments.ipynb | gosiaborzecka/mslearn-dp100 | f239fd89deb74b8808e79f452dab1b737a3c3070 | [
"MIT"
] | 6 | 2021-02-09T11:07:16.000Z | 2021-07-08T08:46:58.000Z | 37.144377 | 699 | 0.62792 | [
[
[
"# Run Experiments\n\nYou can use the Azure Machine Learning SDK to run code experiments that log metrics and generate outputs. This is at the core of most machine learning operations in Azure Machine Learning.\n\n## Connect to your workspace\n\nAll experiments and associated resources are managed within your Azure Machine Learning workspace. In most cases, you should store the workspace configuration in a JSON configuration file. This makes it easier to reconnect without needing to remember details like your Azure subscription ID. You can download the JSON configuration file from the blade for your workspace in the Azure portal, but if you're using a Compute Instance within your wokspace, the configuration file has already been downloaded to the root folder.\n\nThe code below uses the configuration file to connect to your workspace.\n\n> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.",
"_____no_output_____"
]
],
[
[
"import azureml.core\nfrom azureml.core import Workspace\n\n# Load the workspace from the saved config file\nws = Workspace.from_config()\nprint('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))",
"_____no_output_____"
]
],
[
[
"## Run an experiment\n\nOne of the most fundamentals tasks that data scientists need to perform is to create and run experiments that process and analyze data. In this exercise, you'll learn how to use an Azure ML *experiment* to run Python code and record values extracted from data. In this case, you'll use a simple dataset that contains details of patients that have been tested for diabetes. You'll run an experiment to explore the data, extracting statistics, visualizations, and data samples. Most of the code you'll use is fairly generic Python, such as you might run in any data exploration process. However, with the addition of a few lines, the code uses an Azure ML *experiment* to log details of the run.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline \n\n# Create an Azure ML experiment in your workspace\nexperiment = Experiment(workspace=ws, name=\"mslearn-diabetes\")\n\n# Start logging data from the experiment, obtaining a reference to the experiment run\nrun = experiment.start_logging()\nprint(\"Starting experiment:\", experiment.name)\n\n# load the data from a local file\ndata = pd.read_csv('data/diabetes.csv')\n\n# Count the rows and log the result\nrow_count = (len(data))\nrun.log('observations', row_count)\nprint('Analyzing {} rows of data'.format(row_count))\n\n# Plot and log the count of diabetic vs non-diabetic patients\ndiabetic_counts = data['Diabetic'].value_counts()\nfig = plt.figure(figsize=(6,6))\nax = fig.gca() \ndiabetic_counts.plot.bar(ax = ax) \nax.set_title('Patients with Diabetes') \nax.set_xlabel('Diagnosis') \nax.set_ylabel('Patients')\nplt.show()\nrun.log_image(name='label distribution', plot=fig)\n\n# log distinct pregnancy counts\npregnancies = data.Pregnancies.unique()\nrun.log_list('pregnancy categories', pregnancies)\n\n# Log summary statistics for numeric columns\nmed_columns = ['PlasmaGlucose', 'DiastolicBloodPressure', 'TricepsThickness', 'SerumInsulin', 'BMI']\nsummary_stats = data[med_columns].describe().to_dict()\nfor col in summary_stats:\n keys = list(summary_stats[col].keys())\n values = list(summary_stats[col].values())\n for index in range(len(keys)):\n run.log_row(col, stat=keys[index], value = values[index])\n \n# Save a sample of the data and upload it to the experiment output\ndata.sample(100).to_csv('sample.csv', index=False, header=True)\nrun.upload_file(name='outputs/sample.csv', path_or_stream='./sample.csv')\n\n# Complete the run\nrun.complete()",
"_____no_output_____"
]
],
[
[
"## View run details\n\nIn Jupyter Notebooks, you can use the **RunDetails** widget to see a visualization of the run details.",
"_____no_output_____"
]
],
[
[
"from azureml.widgets import RunDetails\n\nRunDetails(run).show()",
"_____no_output_____"
]
],
[
[
"### View more details in Azure Machine Learning studio\n\nNote that the **RunDetails** widget includes a link to **view run details** in Azure Machine Learning studio. Click this to open a new browser tab with the run details (you can also just open [Azure Machine Learning studio](https://ml.azure.com) and find the run on the **Experiments** page). When viewing the run in Azure Machine Learning studio, note the following:\n\n- The **Details** tab contains the general properties of the experiment run.\n- The **Metrics** tab enables you to select logged metrics and view them as tables or charts.\n- The **Images** tab enables you to select and view any images or plots that were logged in the experiment (in this case, the *Label Distribution* plot)\n- The **Child Runs** tab lists any child runs (in this experiment there are none).\n- The **Outputs + Logs** tab shows the output or log files generated by the experiment.\n- The **Snapshot** tab contains all files in the folder where the experiment code was run (in this case, everything in the same folder as this notebook).\n- The **Explanations** tab is used to show model explanations generated by the experiment (in this case, there are none).\n- The **Fairness** tab is used to visualize predictive performance disparities that help you evaluate the fairness of machine learning models (in this case, there are none).",
"_____no_output_____"
],
[
"### Retrieve experiment details using the SDK\n\nThe **run** variable in the code you ran previously is an instance of a **Run** object, which is a reference to an individual run of an experiment in Azure Machine Learning. You can use this reference to get information about the run and its outputs:",
"_____no_output_____"
]
],
[
[
"import json\n\n# Get logged metrics\nprint(\"Metrics:\")\nmetrics = run.get_metrics()\nfor metric_name in metrics:\n print(metric_name, \":\", metrics[metric_name])\n\n# Get output files\nprint(\"\\nFiles:\")\nfiles = run.get_file_names()\nfor file in files:\n print(file)",
"_____no_output_____"
]
],
[
[
"You can download the files produced by the experiment, either individually by using the **download_file** method, or by using the **download_files** method to retrieve multiple files. The following code downloads all of the files in the run's **output** folder:",
"_____no_output_____"
]
],
[
[
"import os\n\ndownload_folder = 'downloaded-files'\n\n# Download files in the \"outputs\" folder\nrun.download_files(prefix='outputs', output_directory=download_folder)\n\n# Verify the files have been downloaded\nfor root, directories, filenames in os.walk(download_folder): \n for filename in filenames: \n print (os.path.join(root,filename))",
"_____no_output_____"
]
],
[
[
"If you need to troubleshoot the experiment run, you can use the **get_details** method to retrieve basic details about the run, or you can use the **get_details_with_logs** method to retrieve the run details as well as the contents of log files generated during the run:",
"_____no_output_____"
]
],
[
[
"run.get_details_with_logs()",
"_____no_output_____"
]
],
[
[
"Note that the details include information about the compute target on which the experiment was run, the date and time when it started and ended. Additionally, because the notebook containing the experiment code (this one) is in a cloned Git repository, details about the repo, branch, and status are recorded in the run history.\n\nIn this case, note that the **logFiles** entry in the details indicates that no log files were generated. That's typical for an inline experiment like the one you ran, but things get more interesting when you run a script as an experiment; which is what we'll look at next.",
"_____no_output_____"
],
[
"## Run an experiment script\n\nIn the previous example, you ran an experiment inline in this notebook. A more flexible solution is to create a separate script for the experiment, and store it in a folder along with any other files it needs, and then use Azure ML to run the experiment based on the script in the folder.\n\nFirst, let's create a folder for the experiment files, and copy the data into it:",
"_____no_output_____"
]
],
[
[
"import os, shutil\n\n# Create a folder for the experiment files\nfolder_name = 'diabetes-experiment-files'\nexperiment_folder = './' + folder_name\nos.makedirs(folder_name, exist_ok=True)\n\n# Copy the data file into the experiment folder\nshutil.copy('data/diabetes.csv', os.path.join(folder_name, \"diabetes.csv\"))",
"_____no_output_____"
]
],
[
[
"Now we'll create a Python script containing the code for our experiment, and save it in the experiment folder.\n\n> **Note**: running the following cell just *creates* the script file - it doesn't run it!",
"_____no_output_____"
]
],
[
[
"%%writefile $folder_name/diabetes_experiment.py\nfrom azureml.core import Run\nimport pandas as pd\nimport os\n\n# Get the experiment run context\nrun = Run.get_context()\n\n# load the diabetes dataset\ndata = pd.read_csv('diabetes.csv')\n\n# Count the rows and log the result\nrow_count = (len(data))\nrun.log('observations', row_count)\nprint('Analyzing {} rows of data'.format(row_count))\n\n# Count and log the label counts\ndiabetic_counts = data['Diabetic'].value_counts()\nprint(diabetic_counts)\nfor k, v in diabetic_counts.items():\n run.log('Label:' + str(k), v)\n \n# Save a sample of the data in the outputs folder (which gets uploaded automatically)\nos.makedirs('outputs', exist_ok=True)\ndata.sample(100).to_csv(\"outputs/sample.csv\", index=False, header=True)\n\n# Complete the run\nrun.complete()",
"_____no_output_____"
]
],
[
[
"This code is a simplified version of the inline code used before. However, note the following:\n- It uses the `Run.get_context()` method to retrieve the experiment run context when the script is run.\n- It loads the diabetes data from the folder where the script is located.\n- It creates a folder named **outputs** and writes the sample file to it - this folder is automatically uploaded to the experiment run",
"_____no_output_____"
],
[
"Now you're almost ready to run the experiment. To run the script, you must create a **ScriptRunConfig** that identifies the Python script file to be run in the experiment, and then run an experiment based on it.\n\n> **Note**: The ScriptRunConfig also determines the compute target and Python environment. If you don't specify these, a default environment is created automatically on the local compute where the code is being run (in this case, where this notebook is being run).\n\nThe following cell configures and submits the script-based experiment.",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nfrom azureml.core import Experiment, ScriptRunConfig\nfrom azureml.widgets import RunDetails\n\n\n# Create a script config\nscript_config = ScriptRunConfig(source_directory=experiment_folder, \n script='diabetes_experiment.py') \n\n# submit the experiment\nexperiment = Experiment(workspace=ws, name='mslearn-diabetes')\nrun = experiment.submit(config=script_config)\nRunDetails(run).show()\nrun.wait_for_completion()",
"_____no_output_____"
]
],
[
[
"As before, you can use the widget or the link to the experiment in [Azure Machine Learning studio](https://ml.azure.com) to view the outputs generated by the experiment, and you can also write code to retrieve the metrics and files it generated:",
"_____no_output_____"
]
],
[
[
"# Get logged metrics\nmetrics = run.get_metrics()\nfor key in metrics.keys():\n print(key, metrics.get(key))\nprint('\\n')\nfor file in run.get_file_names():\n print(file)",
"_____no_output_____"
]
],
[
[
"Note that this time, the run generated some log files. You can view these in the widget, or you can use the **get_details_with_logs** method like we did before, only this time the output will include the log data.",
"_____no_output_____"
]
],
[
[
"run.get_details_with_logs()",
"_____no_output_____"
]
],
[
[
"Although you can view the log details in the output above, it's usually easier to download the log files and view them in a text editor.",
"_____no_output_____"
]
],
[
[
"import os\n\nlog_folder = 'downloaded-logs'\n\n# Download all files\nrun.get_all_logs(destination=log_folder)\n\n# Verify the files have been downloaded\nfor root, directories, filenames in os.walk(log_folder): \n for filename in filenames: \n print (os.path.join(root,filename))",
"_____no_output_____"
]
],
[
[
"## View experiment run history\n\nNow that you've run the same experiment multiple times, you can view the history in [Azure Machine Learning studio](https://ml.azure.com) and explore each logged run. Or you can retrieve an experiment by name from the workspace and iterate through its runs using the SDK:",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment, Run\n\ndiabetes_experiment = ws.experiments['mslearn-diabetes']\nfor logged_run in diabetes_experiment.get_runs():\n print('Run ID:', logged_run.id)\n metrics = logged_run.get_metrics()\n for key in metrics.keys():\n print('-', key, metrics.get(key))",
"_____no_output_____"
]
],
[
[
"## Use MLflow\n\nMLflow is an open source platform for managing machine learning processes. It's commonly (but not exclusively) used in Databricks environments to coordinate experiments and track metrics. In Azure Machine Learning experiments, you can use MLflow to track metrics as an alternative to the native log functionality.\n\nTo take advantage of this capability, you'll need the **mlflow** and **azureml-mlflow** packages, so let's ensure they are installed.",
"_____no_output_____"
]
],
[
[
"!pip show mlflow azureml-mlflow",
"_____no_output_____"
]
],
[
[
"### Use MLflow with an inline experiment\n\nTo use MLflow to track metrics for an inline experiment, you must set the MLflow *tracking URI* to the workspace where the experiment is being run. This enables you to use **mlflow** tracking methods to log data to the experiment run.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment\nimport pandas as pd\nimport mlflow\n\n# Set the MLflow tracking URI to the workspace\nmlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())\n\n# Create an Azure ML experiment in your workspace\nexperiment = Experiment(workspace=ws, name='mslearn-diabetes-mlflow')\nmlflow.set_experiment(experiment.name)\n\n# start the MLflow experiment\nwith mlflow.start_run():\n \n print(\"Starting experiment:\", experiment.name)\n \n # Load data\n data = pd.read_csv('data/diabetes.csv')\n\n # Count the rows and log the result\n row_count = (len(data))\n mlflow.log_metric('observations', row_count)\n print(\"Run complete\")",
"_____no_output_____"
]
],
[
[
"Now let's look at the metrics logged during the run",
"_____no_output_____"
]
],
[
[
"# Get the latest run of the experiment\nrun = list(experiment.get_runs())[0]\n\n# Get logged metrics\nprint(\"\\nMetrics:\")\nmetrics = run.get_metrics()\nfor key in metrics.keys():\n print(key, metrics.get(key))\n \n# Get a link to the experiment in Azure ML studio \nexperiment_url = experiment.get_portal_url()\nprint('See details at', experiment_url)",
"_____no_output_____"
]
],
[
[
"After running the code above, you can use the link that is displayed to view the experiment in Azure Machine Learning studio. Then select the latest run of the experiment and view its **Metrics** tab to see the logged metric.\n\n### Use MLflow in an experiment script\n\nYou can also use MLflow to track metrics in an experiment script.\n\nRun the following two cells to create a folder and a script for an experiment that uses MLflow.",
"_____no_output_____"
]
],
[
[
"import os, shutil\n\n# Create a folder for the experiment files\nfolder_name = 'mlflow-experiment-files'\nexperiment_folder = './' + folder_name\nos.makedirs(folder_name, exist_ok=True)\n\n# Copy the data file into the experiment folder\nshutil.copy('data/diabetes.csv', os.path.join(folder_name, \"diabetes.csv\"))",
"_____no_output_____"
],
[
"%%writefile $folder_name/mlflow_diabetes.py\nfrom azureml.core import Run\nimport pandas as pd\nimport mlflow\n\n\n# start the MLflow experiment\nwith mlflow.start_run():\n \n # Load data\n data = pd.read_csv('diabetes.csv')\n\n # Count the rows and log the result\n row_count = (len(data))\n print('observations:', row_count)\n mlflow.log_metric('observations', row_count)",
"_____no_output_____"
]
],
[
[
"When you use MLflow tracking in an Azure ML experiment script, the MLflow tracking URI is set automatically when you start the experiment run. However, the environment in which the script is to be run must include the required **mlflow** packages.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment, ScriptRunConfig, Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\nfrom azureml.widgets import RunDetails\n\n\n# Create a Python environment for the experiment\nmlflow_env = Environment(\"mlflow-env\")\n\n# Ensure the required packages are installed\npackages = CondaDependencies.create(conda_packages=['pandas','pip'],\n pip_packages=['mlflow','azureml-mlflow'])\nmlflow_env.python.conda_dependencies = packages\n\n# Create a script config\nscript_mlflow = ScriptRunConfig(source_directory=experiment_folder,\n script='mlflow_diabetes.py',\n environment=mlflow_env) \n\n# submit the experiment\nexperiment = Experiment(workspace=ws, name='mslearn-diabetes-mlflow')\nrun = experiment.submit(config=script_mlflow)\nRunDetails(run).show()\nrun.wait_for_completion()",
"_____no_output_____"
]
],
[
[
"As usual, you can get the logged metrics from the experiment run when it's finished.",
"_____no_output_____"
]
],
[
[
"# Get logged metrics\nmetrics = run.get_metrics()\nfor key in metrics.keys():\n print(key, metrics.get(key))",
"_____no_output_____"
]
],
[
[
"> **More Information**: To find out more about running experiments, see [this topic](https://docs.microsoft.com/azure/machine-learning/how-to-manage-runs) in the Azure ML documentation. For details of how to log metrics in a run, see [this topic](https://docs.microsoft.com/azure/machine-learning/how-to-track-experiments). For more information about integrating Azure ML experiments with MLflow, see [this topic](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
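The notebook above uses MLflow tracking against an Azure ML workspace. As a minimal standalone sketch of what those logging calls do on their own (local `./mlruns` tracking store, no Azure workspace; the experiment, parameter, and metric names here are made up):

```python
import mlflow

# With no tracking URI set, MLflow writes run data to a local ./mlruns folder.
mlflow.set_experiment("local-demo")

with mlflow.start_run():
    mlflow.log_param("source", "diabetes.csv")   # hypothetical input file name
    mlflow.log_metric("observations", 10_000)    # hypothetical row count
```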
ecf5709a9eff9f4887d2ca7bcdee812e30c84e5e | 86,002 | ipynb | Jupyter Notebook | 1-Training/AzureServiceClassifier_Training.ipynb | phillipf/shared-inbox-classification | f35424def5a104ac32328ebc4b9d0fb680080c1e | [
"MIT"
] | 3 | 2020-09-27T12:26:07.000Z | 2021-08-30T09:32:08.000Z | 1-Training/AzureServiceClassifier_Training.ipynb | phillipf/shared-inbox-classification | f35424def5a104ac32328ebc4b9d0fb680080c1e | [
"MIT"
] | null | null | null | 1-Training/AzureServiceClassifier_Training.ipynb | phillipf/shared-inbox-classification | f35424def5a104ac32328ebc4b9d0fb680080c1e | [
"MIT"
] | null | null | null | 59.393646 | 3,922 | 0.65731 | [
[
[
"Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.",
"_____no_output_____"
],
[
"# Part 1: Training Tensorflow 2.0 Model on Azure Machine Learning Service\n\n## Overview of the part 1\nThis notebook is Part 1 (Preparing Data and Model Training) of a four part workshop that demonstrates an end-to-end workflow using Tensorflow 2.0 on Azure Machine Learning service. The different components of the workshop are as follows:\n\n- Part 1: [Preparing Data and Model Training](https://github.com/microsoft/bert-stack-overflow/blob/master/1-Training/AzureServiceClassifier_Training.ipynb)\n- Part 2: [Inferencing and Deploying a Model](https://github.com/microsoft/bert-stack-overflow/blob/master/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb)\n- Part 3: [Setting Up a Pipeline Using MLOps](https://github.com/microsoft/bert-stack-overflow/tree/master/3-ML-Ops)\n- Part 4: [Explaining Your Model Interpretability](https://github.com/microsoft/bert-stack-overflow/blob/master/4-Interpretibility/IBMEmployeeAttritionClassifier_Interpretability.ipynb)\n\n**This notebook will cover the following topics:**\n\n- Stackoverflow question tagging problem\n- Introduction to Transformer and BERT deep learning models\n- Introduction to Azure Machine Learning service\n- Preparing raw data for training using Apache Spark\n- Registering cleaned up training data as a Dataset\n- Debugging the model in Tensorflow 2.0 Eager Mode\n- Training the model on GPU cluster\n- Monitoring training progress with built-in Tensorboard dashboard \n- Automated search of best hyper-parameters of the model\n- Registering the trained model for future deployment",
"_____no_output_____"
],
[
"## Prerequisites\nThis notebook is designed to be run in Azure ML Notebook VM. See [readme](https://github.com/microsoft/bert-stack-overflow/blob/master/README.md) file for instructions on how to create Notebook VM and open this notebook in it.",
"_____no_output_____"
],
[
"### Check Azure Machine Learning Python SDK version\n\nThis tutorial requires version 1.0.69 or higher. Let's check the version of the SDK:",
"_____no_output_____"
]
],
[
[
"import azureml.core\n\nprint(\"Azure Machine Learning Python SDK version:\", azureml.core.VERSION)",
"_____no_output_____"
]
],
[
[
"## Stackoverflow Question Tagging Problem \nIn this workshop we will use powerful language understanding model to automatically route Stackoverflow questions to the appropriate support team on the example of Azure services.\n\nOne of the key tasks to ensuring long term success of any Azure service is actively responding to related posts in online forums such as Stackoverflow. In order to keep track of these posts, Microsoft relies on the associated tags to direct questions to the appropriate support team. While Stackoverflow has different tags for each Azure service (azure-web-app-service, azure-virtual-machine-service, etc), people often use the generic **azure** tag. This makes it hard for specific teams to track down issues related to their product and as a result, many questions get left unanswered. \n\n**In order to solve this problem, we will build a model to classify posts on Stackoverflow with the appropriate Azure service tag.**\n\nWe will be using a BERT (Bidirectional Encoder Representations from Transformers) model which was published by researchers at Google AI Reasearch. Unlike prior language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of natural language processing (NLP) tasks without substantial architecture modifications.\n\n## Why use BERT model?\n[Introduction of BERT model](https://arxiv.org/pdf/1810.04805.pdf) changed the world of NLP. Many NLP problems that before relied on specialized models to achive state of the art performance are now solved with BERT better and with more generic approach.\n\nIf we look at the leaderboards on such popular NLP problems as GLUE and SQUAD, most of the top models are based on BERT:\n* [GLUE Benchmark Leaderboard](https://gluebenchmark.com/leaderboard/)\n* [SQuAD Benchmark Leaderboard](https://rajpurkar.github.io/SQuAD-explorer/)\n\nRecently, Allen Institue for AI announced new language understanding system called Aristo [https://allenai.org/aristo/](https://allenai.org/aristo/). The system has been developed for 20 years, but it's performance was stuck at 60% on 8th grade science test. The result jumped to 90% once researchers adopted BERT as core language understanding component. With BERT Aristo now solves the test with A grade. ",
"_____no_output_____"
],
[
"## Quick Overview of How BERT model works\n\nThe foundation of BERT model is Transformer model, which was introduced in [Attention Is All You Need paper](https://arxiv.org/abs/1706.03762). Before that event the dominant way of processing language was Recurrent Neural Networks (RNNs). Let's start our overview with RNNs.\n\n## RNNs\n\nRNNs were powerful way of processing language due to their ability to memorize its previous state and perform sophisticated inference based on that.\n\n<img src=\"https://miro.medium.com/max/400/1*L38xfe59H5tAgvuIjKoWPg.png\" alt=\"Drawing\" style=\"width: 100px;\"/>\n\n_Taken from [1](https://towardsdatascience.com/transformers-141e32e69591)_\n\nApplied to language translation task, the processing dynamics looked like this.\n\n\n_Taken from [2](https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/)_\n \nBut RNNs suffered from 2 disadvantes:\n1. Sequential computation put a limit on parallelization, which limited effectiveness of larger models.\n2. Long term relationships between words were harder to detect.",
"_____no_output_____"
],
[
"## Transformers\n\nTransformers were designed to address these two limitations of RNNs.\n\n<img src=\"https://miro.medium.com/max/2436/1*V2435M1u0tiSOz4nRBfl4g.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\n_Taken from [3](http://jalammar.github.io/illustrated-transformer/)_\n\nIn each Encoder layer Transformer performs Self-Attention operation which detects relationships between all word embeddings in one matrix multiplication operation. \n\n<img src=\"https://miro.medium.com/max/2176/1*fL8arkEFVKA3_A7VBgapKA.gif\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\n_Taken from [4](https://towardsdatascience.com/deconstructing-bert-part-2-visualizing-the-inner-workings-of-attention-60a16d86b5c1)_\n",
"_____no_output_____"
],
[
"## BERT Model\n\nBERT is a very large network with multiple layers of Transformers (12 for BERT-base, and 24 for BERT-large). The model is first pre-trained on large corpus of text data (WikiPedia + books) using un-superwised training (predicting masked words in a sentence). During pre-training the model absorbs significant level of language understanding.\n\n<img src=\"http://jalammar.github.io/images/bert-output-vector.png\" alt=\"Drawing\" style=\"width: 700px;\"/>\n\n_Taken from [5](http://jalammar.github.io/illustrated-bert/)_\n\nPre-trained network then can easily be fine-tuned to solve specific language task, like answering questions, or categorizing spam emails.\n\n<img src=\"http://jalammar.github.io/images/bert-classifier.png\" alt=\"Drawing\" style=\"width: 700px;\"/>\n\n_Taken from [5](http://jalammar.github.io/illustrated-bert/)_\n\nThe end-to-end training process of the stackoverflow question tagging model looks like this:\n\n\n",
"_____no_output_____"
],
[
"## What is Azure Machine Learning Service?\nAzure Machine Learning service is a cloud service that you can use to develop and deploy machine learning models. Using Azure Machine Learning service, you can track your models as you build, train, deploy, and manage them, all at the broad scale that the cloud provides.\n\n\n\n#### How can we use it for training machine learning models?\nTraining machine learning models, particularly deep neural networks, is often a time- and compute-intensive task. Once you've finished writing your training script and running on a small subset of data on your local machine, you will likely want to scale up your workload.\n\nTo facilitate training, the Azure Machine Learning Python SDK provides a high-level abstraction, the estimator class, which allows users to easily train their models in the Azure ecosystem. You can create and use an Estimator object to submit any training code you want to run on remote compute, whether it's a single-node run or distributed training across a GPU cluster.",
"_____no_output_____"
],
[
"## Connect To Workspace\n\nThe [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class)?view=azure-ml-py) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace holds all your experiments, compute targets, models, datastores, etc.\n\nYou can [open ml.azure.com](https://ml.azure.com) to access your workspace resources through a graphical user interface of **Azure Machine Learning studio**.\n\n\n\n**You will be asked to login in the next step. Use your Microsoft AAD credentials.**",
"_____no_output_____"
]
],
[
[
"from azureml.core import Workspace\nworkspace = Workspace.get(\"BERT\", subscription_id=\"cdf3e529-94ee-4f54-a219-4720963fee3b\")\n#workspace = Workspace.from_config()\nprint('Workspace name: ' + workspace.name, \n 'Azure region: ' + workspace.location, \n 'Subscription id: ' + workspace.subscription_id, \n 'Resource group: ' + workspace.resource_group, sep = '\\n')",
"Workspace name: BERT\nAzure region: australiaeast\nSubscription id: cdf3e529-94ee-4f54-a219-4720963fee3b\nResource group: DEV-CRM\n"
]
],
[
[
"## Create Compute Target\n\nA [compute target](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.computetarget?view=azure-ml-py) is a designated compute resource/environment where you run your training script or host your service deployment. This location may be your local machine or a cloud-based compute resource. Compute targets can be reused across the workspace for different runs and experiments. \n\nFor this tutorial, we will create an auto-scaling [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.amlcompute?view=azure-ml-py) cluster, which is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. To create the cluster, we need to specify the following parameters:\n\n- `vm_size`: The is the type of GPUs that we want to use in our cluster. For this tutorial, we will use **Standard_NC12s_v3 (NVIDIA V100) GPU Machines** .\n- `idle_seconds_before_scaledown`: This is the number of seconds before a node will scale down in our auto-scaling cluster. We will set this to **6000** seconds. \n- `min_nodes`: This is the minimum numbers of nodes that the cluster will have. To avoid paying for compute while they are not being used, we will set this to **0** nodes.\n- `max_modes`: This is the maximum number of nodes that the cluster will scale up to. Will will set this to **2** nodes.\n\n**When jobs are submitted to the cluster it takes approximately 5 minutes to allocate new nodes** ",
"_____no_output_____"
]
],
[
[
"from azureml.core.compute import AmlCompute, ComputeTarget\n\ncluster_name = 'v100cluster'\ncompute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', \n idle_seconds_before_scaledown=6000,\n min_nodes=0, \n max_nodes=2)\n\ncompute_target = ComputeTarget.create(workspace, cluster_name, compute_config)\ncompute_target.wait_for_completion(show_output=True)",
"Creating\nSucceeded\nAmlCompute wait for completion finished\nMinimum number of nodes requested have been provisioned\n"
]
],
[
[
"To ensure our compute target was created successfully, we can check it's status.",
"_____no_output_____"
]
],
[
[
"compute_target.get_status().serialize()",
"_____no_output_____"
]
],
[
[
"#### If the compute target has already been created, then you (and other users in your workspace) can directly run this cell.",
"_____no_output_____"
]
],
[
[
"compute_target = workspace.compute_targets['v100cluster']",
"_____no_output_____"
]
],
[
[
"## Prepare Data Using Apache Spark\n\nTo train our model, we used the Stackoverflow data dump from [Stack exchange archive](https://archive.org/download/stackexchange). Since the Stackoverflow _posts_ dataset is 12GB, we prepared the data using [Apache Spark](https://spark.apache.org/) framework on a scalable Spark compute cluster in [Azure Databricks](https://azure.microsoft.com/en-us/services/databricks/). \n\nFor the purpose of this tutorial, we have processed the data ahead of time and uploaded it to an [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. The full data processing notebook can be found in the _spark_ folder.\n\n* **ACTION**: Open and explore [data preparation notebook](spark/stackoverflow-data-prep.ipynb).\n",
"_____no_output_____"
],
[
"## Register Datastore",
"_____no_output_____"
],
[
"A [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) is used to store connection information to a central data storage. This allows you to access your storage without having to hard code this (potentially confidential) information into your scripts. \n\nIn this tutorial, the data was been previously prepped and uploaded into a central [Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. We will register this container into our workspace as a datastore using a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview). ",
"_____no_output_____"
]
],
[
[
"from azureml.core import Datastore, Dataset\nfrom azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient\n# datastore_name = 'tfworld'\n# container_name = 'azureml-blobstore-7c6bdd88-21fa-453a-9c80-16998f02935f'\n# account_name = 'bert3308922571'\n# sas_token = '?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2020-06-01T14:18:31Z&st=2019-11-05T07:18:31Z&spr=https&sig=Z4JmM0V%2FQzoFNlWS3a3vJxoGAx58iCz2HAWtmeLDbGE%3D'\n\n# datastore = Datastore.register_azure_blob_container(workspace=workspace, \n# datastore_name=datastore_name, \n# container_name=container_name,\n# account_name=account_name)\n\nblob_service_client = BlobServiceClient.from_connection_string(\"DefaultEndpointsProtocol=https;AccountName=bert3308922571;AccountKey=T2mfL1at6AMaL0Qstx/JjuWKrz8c/r8PayBdSRFPan3aIAeMJpuAx3biqojbKexx7aKBOAtSnpEtu8H+lj4FRw==;EndpointSuffix=core.windows.net\")\n\nwith open(upload_file_path, \"rb\") as data:\n blob_client.upload_blob(data)\n",
"_____no_output_____"
]
],
[
[
"#### If the datastore has already been registered, then you (and other users in your workspace) can directly run this cell.",
"_____no_output_____"
]
],
[
[
"datastore = workspace.datastores['tfworld']",
"_____no_output_____"
]
],
[
[
"#### What if my data wasn't already hosted remotely?\nAll workspaces also come with a blob container which is registered as a default datastore. This allows you to easily upload your own data to a remote storage location. You can access this datastore and upload files as follows:\n```\ndatastore = workspace.get_default_datastore()\nds.upload(src_dir='<LOCAL-PATH>', target_path='<REMOTE-PATH>')\n```\n",
"_____no_output_____"
],
[
"## Register Dataset\n\nAzure Machine Learning service supports first class notion of a Dataset. A [Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py) is a resource for exploring, transforming and managing data in Azure Machine Learning. The following Dataset types are supported:\n\n* [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format created by parsing the provided file or list of files.\n\n* [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in datastores or from public URLs.\n\nFirst, we will use visual tools in Azure ML studio to register and explore our dataset as Tabular Dataset.\n\n* **ACTION**: Follow [create-dataset](images/create-dataset.ipynb) guide to create Tabular Dataset from our training data.",
"_____no_output_____"
],
[
"#### Use created dataset in code",
"_____no_output_____"
]
],
[
[
"from azureml.core import Dataset\n\n# Get a dataset by name\ntabular_ds = Dataset.get_by_name(workspace=workspace, name='Stackoverflow dataset')\n\n# Load a TabularDataset into pandas DataFrame\ndf = tabular_ds.to_pandas_dataframe()\n\ndf.head(10)",
"_____no_output_____"
]
],
[
[
"## Register Dataset using SDK\n\nIn addition to UI we can register datasets using SDK. In this workshop we will register second type of Datasets using code - File Dataset. File Dataset allows specific folder in our datastore that contains our data files to be registered as a Dataset.\n\nThere is a folder within our datastore called **azure-service-data** that contains all our training and testing data. We will register this as a dataset.",
"_____no_output_____"
]
],
[
[
"azure_dataset = Dataset.File.from_files(path=(datastore, 'C:/Users/phill/azure-ml/azure-service-classifier/data-shared-inbox'))\n\nazure_dataset = azure_dataset.register(workspace=workspace,\n name='Azure Services Dataset',\n description='Dataset containing azure related posts on Stackoverflow')",
"_____no_output_____"
]
],
[
[
"#### If the dataset has already been registered, then you (and other users in your workspace) can directly run this cell.",
"_____no_output_____"
]
],
[
[
"azure_dataset = workspace.datasets['Azure Services Dataset']",
"_____no_output_____"
]
],
[
[
"## Explore Training Code",
"_____no_output_____"
],
[
"In this workshop the training code is provided in [train.py](./train.py) and [model.py](./model.py) files. The model is based on popular [huggingface/transformers](https://github.com/huggingface/transformers) libary. Transformers library provides performant implementation of BERT model with high level and easy to use APIs based on Tensorflow 2.0.\n\n\n\n* **ACTION**: Explore _train.py_ and _model.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)\n* NOTE: You can also explore the files using Jupyter or Jupyter Lab UI.",
"_____no_output_____"
],
[
"## Test Locally\n\nLet's try running the script locally to make sure it works before scaling up to use our compute cluster. To do so, you will need to install the transformers libary.",
"_____no_output_____"
]
],
[
[
"%pip install transformers==2.0.0",
"Collecting transformers==2.0.0\n Downloading https://files.pythonhosted.org/packages/66/99/ca0e4c35ccde7d290de3c9c236d5629d1879b04927e5ace9bd6d9183e236/transformers-2.0.0-py3-none-any.whl (290kB)\nRequirement already satisfied: numpy in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==2.0.0) (1.16.2)\nRequirement already satisfied: boto3 in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==2.0.0) (1.11.6)\nRequirement already satisfied: requests in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==2.0.0) (2.22.0)\nCollecting sacremoses (from transformers==2.0.0)\n Downloading https://files.pythonhosted.org/packages/a6/b4/7a41d630547a4afd58143597d5a49e07bfd4c42914d8335b2a5657efc14b/sacremoses-0.0.38.tar.gz (860kB)\nCollecting regex (from transformers==2.0.0)\n Downloading https://files.pythonhosted.org/packages/87/61/a3d8311dccec246605983a39b074eb175338f21cba774db0163e5ad0a139/regex-2020.1.8-cp37-cp37m-win_amd64.whl (271kB)\nRequirement already satisfied: tqdm in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==2.0.0) (4.36.1)\nCollecting sentencepiece (from transformers==2.0.0)\n Downloading https://files.pythonhosted.org/packages/61/c5/e7e2f45c076097ac1a58b21288be25ae4eb4044be899e6c04cd897a00f15/sentencepiece-0.1.85-cp37-cp37m-win_amd64.whl (1.2MB)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\\programdata\\anaconda3\\lib\\site-packages (from boto3->transformers==2.0.0) (0.9.4)\nRequirement already satisfied: botocore<1.15.0,>=1.14.6 in c:\\programdata\\anaconda3\\lib\\site-packages (from boto3->transformers==2.0.0) (1.14.6)\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\\programdata\\anaconda3\\lib\\site-packages (from boto3->transformers==2.0.0) (0.3.1)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers==2.0.0) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers==2.0.0) (1.24.2)\nRequirement already satisfied: idna<2.9,>=2.5 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers==2.0.0) (2.8)\nRequirement already satisfied: certifi>=2017.4.17 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers==2.0.0) (2019.9.11)\nRequirement already satisfied: six in c:\\programdata\\anaconda3\\lib\\site-packages (from sacremoses->transformers==2.0.0) (1.12.0)\nRequirement already satisfied: click in c:\\programdata\\anaconda3\\lib\\site-packages (from sacremoses->transformers==2.0.0) (7.0)\nRequirement already satisfied: joblib in c:\\programdata\\anaconda3\\lib\\site-packages (from sacremoses->transformers==2.0.0) (0.13.2)\nRequirement already satisfied: docutils<0.16,>=0.10 in c:\\programdata\\anaconda3\\lib\\site-packages (from botocore<1.15.0,>=1.14.6->boto3->transformers==2.0.0) (0.15.2)\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\\programdata\\anaconda3\\lib\\site-packages (from botocore<1.15.0,>=1.14.6->boto3->transformers==2.0.0) (2.8.0)\nBuilding wheels for collected packages: sacremoses\n Building wheel for sacremoses (setup.py): started\n Building wheel for sacremoses (setup.py): finished with status 'done'\n Created wheel for sacremoses: filename=sacremoses-0.0.38-cp37-none-any.whl size=885394 sha256=f095f3482f6c757823ceb6ea26e2fa6b5b0ddabbb824d629e99fceefa2bc8119\n Stored in directory: 
C:\\Users\\phill\\AppData\\Local\\pip\\Cache\\wheels\\6d\\ec\\1a\\21b8912e35e02741306f35f66c785f3afe94de754a0eaf1422\nSuccessfully built sacremoses\nInstalling collected packages: regex, sacremoses, sentencepiece, transformers\nSuccessfully installed regex-2020.1.8 sacremoses-0.0.38 sentencepiece-0.1.85 transformers-2.0.0\nNote: you may need to restart the kernel to use updated packages.\n"
]
],
[
[
"We have taken a small partition of the dataset and included it in this repository. Let's take a quick look at the format of the data.",
"_____no_output_____"
]
],
[
[
"data_dir = 'C:/Users/phill/azure-ml/bert-stack-overflow/1-Training/data-shared-inbox/'",
"_____no_output_____"
],
[
"import os \nimport pandas as pd\ndata = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None)\ndata.head(5)",
"_____no_output_____"
]
],
[
[
"Now we know what the data looks like, let's test out our script!",
"_____no_output_____"
]
],
[
[
"import sys\n!{sys.executable} train.py --data_dir {data_dir} --max_seq_length 128 --batch_size 16 --learning_rate 3e-5 --steps_per_epoch 5 --num_epochs 1 --export_dir ../outputs/model",
"_____no_output_____"
]
],
[
[
"## Debugging in TensorFlow 2.0 Eager Mode\n\nEager mode is new feature in TensorFlow 2.0 which makes understanding and debugging models easy. Let's start by configuring our remote debugging environment.\n\n#### Configure VS Code Remote connection to Notebook VM\n\n* **ACTION**: Install [Microsoft VS Code](https://code.visualstudio.com/) on your local machine.\n\n* **ACTION**: Follow this [configuration guide](https://github.com/danielsc/azureml-debug-training/blob/master/Setting%20up%20VSCode%20Remote%20on%20an%20AzureML%20Notebook%20VM.md) to setup VS Code Remote connection to Notebook VM.\n\n#### Debug training code using step-by-step debugger\n\n* **ACTION**: Open Remote VS Code session to your Notebook VM.\n* **ACTION**: Open file `/home/azureuser/cloudfiles/code/<username>/bert-stack-overflow/1-Training/train_eager.py`.\n* **ACTION**: Set break point in the file and start Python debugging session. \n",
"_____no_output_____"
],
[
"On a CPU machine training on a full dataset will take approximatly 1.5 hours. Although it's a small dataset, it still takes a long time. Let's see how we can speed up the training by using latest NVidia V100 GPUs in the Azure cloud. ",
"_____no_output_____"
],
[
"## Perform Experiment\n\nNow that we have our compute target, dataset, and training script working locally, it is time to scale up so that the script can run faster. We will start by creating an [experiment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py). An experiment is a grouping of many runs from a specified script. All runs in this tutorial will be performed under the same experiment. ",
"_____no_output_____"
]
],
[
[
"\nfrom azureml.core import Workspace\n\nws = Workspace.get(\"BERT\", subscription_id=\"cdf3e529-94ee-4f54-a219-4720963fee3b\")\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')",
"Workspace name: BERT\nAzure region: australiaeast\nSubscription id: cdf3e529-94ee-4f54-a219-4720963fee3b\nResource group: DEV-CRM\n"
],
[
"from azureml.core import Experiment\nworkspace=Workspace.get(\"BERT\", subscription_id=\"cdf3e529-94ee-4f54-a219-4720963fee3b\")\nexperiment_name = 'azure-service-classifier' \nexperiment = Experiment(workspace, name=experiment_name)\nexperiment",
"_____no_output_____"
]
],
[
[
"#### Create TensorFlow Estimator\n\nThe Azure Machine Learning Python SDK Estimator classes allow you to easily construct run configurations for your experiments. They allow you too define parameters such as the training script to run, the compute target to run it on, framework versions, additional package requirements, etc. \n\nYou can also use a generic [Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py) to submit training scripts that use any learning framework you choose.\n\nFor popular libaries like PyTorch and Tensorflow you can use their framework specific estimators. We will use the [TensorFlow Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) for our experiment.",
"_____no_output_____"
]
],
[
[
"from azureml.train.dnn import TensorFlow\n\nscript_params = {\n # to mount files referenced by mnist dataset\n '--data_dir': './1-Training/data-shared-inbox',\n '--max_seq_length': 128,\n '--batch_size': 32,\n '--learning_rate': 3e-5,\n '--steps_per_epoch': 150,\n '--num_epochs': 3,\n '--export_dir':'./outputs'\n}\n\nestimator1 = TensorFlow(source_directory='C:/Users/phill/azure-ml/bert-stack-overflow',\n entry_script='./1-Training/train.py',\n compute_target=\"local\",\n script_params = script_params,\n framework_version='2.0',\n pip_packages=['transformers==2.0.0', 'azureml-dataprep[fuse,pandas]==1.1.29'])",
"_____no_output_____"
]
],
[
[
"A quick description for each of the parameters we have just defined:\n\n- `source_directory`: This specifies the root directory of our source code. \n- `entry_script`: This specifies the training script to run. It should be relative to the source_directory.\n- `compute_target`: This specifies to compute target to run the job on. We will use the one created earlier.\n- `script_params`: This specifies the input parameters to the training script. Please note:\n\n 1) *azure_dataset.as_named_input('azureservicedata').as_mount()* mounts the dataset to the remote compute and provides the path to the dataset on our datastore. \n \n 2) All outputs from the training script must be outputted to an './outputs' directory as this is the only directory that will be saved to the run. \n \n \n- `framework_version`: This specifies the version of TensorFlow to use. Use Tensorflow.get_supported_verions() to see all supported versions.\n- `use_gpu`: This will use the GPU on the compute target for training if set to True.\n- `pip_packages`: This allows you to define any additional libraries to install before training.",
"_____no_output_____"
],
[
"#### 1) Submit First Run \n\nWe can now train our model by submitting the estimator object as a [run](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run.run?view=azure-ml-py).",
"_____no_output_____"
]
],
[
[
"run1 = experiment.submit(estimator1)",
"_____no_output_____"
]
],
[
[
"We can view the current status of the run and stream the logs from within the notebook.",
"_____no_output_____"
]
],
[
[
"from azureml.widgets import RunDetails\nRunDetails(run1).show()",
"_____no_output_____"
]
],
[
[
"You cancel a run at anytime which will stop the run and scale down the nodes in the compute target.",
"_____no_output_____"
]
],
[
[
"run1.cancel()",
"_____no_output_____"
]
],
[
[
"While we wait for the run to complete, let's go over how a Run is executed in Azure Machine Learning.\n\n",
"_____no_output_____"
],
[
"#### 2) Add Metrics Logging\n\nSo we were able to clone a Tensorflow 2.0 project and run it without any changes. However, with larger scale projects we would want to log some metrics in order to make it easier to monitor the performance of our model. \n\nWe can do this by adding a few lines of code into our training script:\n\n```python\n# 1) Import SDK Run object\nfrom azureml.core.run import Run\n\n# 2) Get current service context\nrun = Run.get_context()\n\n# 3) Log the metrics that we want\nrun.log('val_accuracy', float(logs.get('val_accuracy')))\nrun.log('accuracy', float(logs.get('accuracy')))\n```\nWe've created a *train_logging.py* script that includes logging metrics as shown above. \n\n* **ACTION**: Explore _train_logging.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)",
"_____no_output_____"
],
[
"We can submit this run in the same way that we did before. \n\n*Since our cluster can scale automatically to two nodes, we can run this job simultaneously with the previous one.*",
"_____no_output_____"
]
],
[
[
"script_params = {\n # to mount files referenced by mnist dataset\n '--data_dir': './1-Training/data-shared-inbox',\n '--max_seq_length': 128,\n '--batch_size': 32,\n '--learning_rate': 3e-5,\n '--steps_per_epoch': 150,\n '--num_epochs': 3,\n '--export_dir':'./1-Training/outputs/model'\n}\n\nestimator2 = TensorFlow(source_directory='C:/Users/phill/azure-ml/bert-stack-overflow',\n entry_script='./1-Training/train_logging.py',\n compute_target=\"local\",\n script_params = script_params,\n framework_version='2.0',\n pip_packages=['transformers==2.0.0', 'azureml-dataprep[fuse,pandas]==1.1.29'])\n\nrun2 = experiment.submit(estimator2)",
"_____no_output_____"
]
],
[
[
"Now if we view the current details of the run, you will notice that the metrics will be logged into graphs.",
"_____no_output_____"
]
],
[
[
"from azureml.widgets import RunDetails\nRunDetails(run2).show()",
"_____no_output_____"
]
],
[
[
"#### 3) Monitoring metrics with Tensorboard\n\nTensorboard is a popular Deep Learning Training visualization tool and it's built-in into TensorFlow framework. We can easily add tracking of the metrics in Tensorboard format by adding Tensorboard callback to the **fit** function call.\n```python\n # Add callback to record Tensorboard events\n model.fit(train_dataset, epochs=FLAGS.num_epochs, \n steps_per_epoch=FLAGS.steps_per_epoch, validation_data=valid_dataset, \n callbacks=[\n AmlLogger(),\n tf.keras.callbacks.TensorBoard(update_freq='batch')]\n )\n```\n\n#### Launch Tensorboard\nAzure ML service provides built-in integration with Tensorboard through **tensorboard** package.\n\nWhile the run is in progress (or after it has completed), we can start Tensorboard with the run as its target, and it will begin streaming logs.",
"_____no_output_____"
]
],
[
[
"from azureml.tensorboard import Tensorboard\n\n# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here\ntb = Tensorboard([run2])\n\n# If successful, start() returns a string with the URI of the instance.\ntb.start()",
"http://localhost:6006/\n"
]
],
[
[
"#### Stop Tensorboard\nWhen you're done, make sure to call the stop() method of the Tensorboard object, or it will stay running even after your job completes.",
"_____no_output_____"
]
],
[
[
"tb.stop()",
"_____no_output_____"
]
],
[
[
"## Check the model performance\n\nLast training run produced model of decent accuracy. Let's test it out and see what it does. First, let's check what files our latest training run produced and download the model files.\n\n#### Download model files",
"_____no_output_____"
]
],
[
[
"run2.get_file_names()",
"_____no_output_____"
],
[
"run2.download_files(prefix='outputs/model')\n\n# If you haven't finished training the model then just download pre-made model from datastore\ndatastore.download('./',prefix=\"azure-service-classifier/model\")",
"_____no_output_____"
]
],
[
[
"#### Instantiate the model\n\nNext step is to import our model class and instantiate fine-tuned model from the model file.",
"_____no_output_____"
]
],
[
[
"from model import TFBertForMultiClassification\nfrom transformers import BertTokenizer\nimport tensorflow as tf\ndef encode_example(text, max_seq_length):\n # Encode inputs using tokenizer\n inputs = tokenizer.encode_plus(\n text,\n add_special_tokens=True,\n max_length=max_seq_length\n )\n input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n # The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to.\n attention_mask = [1] * len(input_ids)\n # Zero-pad up to the sequence length.\n padding_length = max_seq_length - len(input_ids)\n input_ids = input_ids + ([0] * padding_length)\n attention_mask = attention_mask + ([0] * padding_length)\n token_type_ids = token_type_ids + ([0] * padding_length)\n \n return input_ids, attention_mask, token_type_ids\n \nlabels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n# Load model and tokenizer\nloaded_model = TFBertForMultiClassification.from_pretrained('azure-service-classifier/model', num_labels=len(labels))\ntokenizer = BertTokenizer.from_pretrained('bert-base-cased')\nprint(\"Model loaded from disk.\")",
"_____no_output_____"
]
],
[
[
"#### Define prediction function\n\nUsing the model object we can interpret new questions and predict what Azure service they talk about. To do that conveniently we'll define **predict** function.",
"_____no_output_____"
]
],
[
[
"# Prediction function\ndef predict(question):\n input_ids, attention_mask, token_type_ids = encode_example(question, 128)\n predictions = loaded_model.predict({\n 'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),\n 'attention_mask': tf.convert_to_tensor([attention_mask], dtype=tf.int32),\n 'token_type_ids': tf.convert_to_tensor([token_type_ids], dtype=tf.int32)\n })\n prediction = labels[predictions[0].argmax().item()]\n probability = predictions[0].max()\n result = {\n 'prediction': str(labels[predictions[0].argmax().item()]),\n 'probability': str(predictions[0].max())\n }\n print('Prediction: {}'.format(prediction))\n print('Probability: {}'.format(probability))",
"_____no_output_____"
]
],
[
[
"#### Experiement with our new model\n\nNow we can easily test responses of the model to new inputs. \n* **ACTION**: Invent yout own input for one of the 5 services our model understands: 'azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions'.",
"_____no_output_____"
]
],
[
[
"# Route question\npredict(\"How can I specify Service Principal in devops pipeline when deploying virtual machine\")",
"_____no_output_____"
],
[
"# Now more tricky cae - the opposite\npredict(\"How can virtual machine trigger devops pipeline\")",
"_____no_output_____"
]
],
[
[
"## Distributed Training Across Multiple GPUs\n\nDistributed training allows us to train across multiple nodes if your cluster allows it. Azure Machine Learning service helps manage the infrastructure for training distributed jobs. All we have to do is add the following parameters to our estimator object in order to enable this:\n\n- `node_count`: The number of nodes to run this job across. Our cluster has a maximum node limit of 2, so we can set this number up to 2.\n- `process_count_per_node`: The number of processes to enable per node. The nodes in our cluster have 2 GPUs each. We will set this value to 2 which will allow us to distribute the load on both GPUs. Using multi-GPUs nodes is benefitial as communication channel bandwidth on local machine is higher.\n- `distributed_training`: The backend to use for our distributed job. We will be using an MPI (Message Passing Interface) backend which is used by Horovod framework.\n\nWe use [Horovod](https://github.com/horovod/horovod), which is a framework that allows us to easily modifying our existing training script to be run across multiple nodes/GPUs. The distributed training script is saved as *train_horovod.py*.\n\n* **ACTION**: Explore _train_horovod.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)",
"_____no_output_____"
],
[
"We can submit this run in the same way that we did with the others, but with the additional parameters.",
"_____no_output_____"
]
],
[
[
"from azureml.train.dnn import Mpi\n\nestimator3 = TensorFlow(source_directory='./',\n entry_script='train_horovod.py',compute_target=compute_target,\n script_params = {\n '--data_dir': azure_dataset.as_named_input('azureservicedata').as_mount(),\n '--max_seq_length': 128,\n '--batch_size': 32,\n '--learning_rate': 3e-5,\n '--steps_per_epoch': 150,\n '--num_epochs': 3,\n '--export_dir':'./outputs/model'\n },\n framework_version='2.0',\n node_count=1,\n distributed_training=Mpi(process_count_per_node=2),\n use_gpu=True,\n pip_packages=['transformers==2.0.0', 'azureml-dataprep[fuse,pandas]==1.1.29'])\n\nrun3 = experiment.submit(estimator3)",
"_____no_output_____"
]
],
[
[
"Once again, we can view the current details of the run. ",
"_____no_output_____"
]
],
[
[
"from azureml.widgets import RunDetails\nRunDetails(run3).show()",
"_____no_output_____"
]
],
[
[
"Once the run completes note the time it took. It should be around 5 minutes. As you can see, by moving to the cloud GPUs and using distibuted training we managed to reduce training time of our model from more than an hour to 5 minutes. This greatly improves speed of experimentation and innovation.",
"_____no_output_____"
],
[
"## Tune Hyperparameters Using Hyperdrive\n\nSo far we have been putting in default hyperparameter values, but in practice we would need tune these values to optimize the performance. Azure Machine Learning service provides many methods for tuning hyperparameters using different strategies.\n\nThe first step is to choose the parameter space that we want to search. We have a few choices to make here :\n\n- **Parameter Sampling Method**: This is how we select the combinations of parameters to sample. Azure Machine Learning service offers [RandomParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.randomparametersampling?view=azure-ml-py), [GridParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.gridparametersampling?view=azure-ml-py), and [BayesianParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.bayesianparametersampling?view=azure-ml-py). We will use the `GridParameterSampling` method.\n- **Parameters To Search**: We will be searching for optimal combinations of `learning_rate` and `num_epochs`.\n- **Parameter Expressions**: This defines the [functions that can be used to describe a hyperparameter search space](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions?view=azure-ml-py), which can be discrete or continuous. We will be using a `discrete set of choices`.\n\nThe following code allows us to define these options.",
"_____no_output_____"
]
],
[
[
"from azureml.train.hyperdrive import GridParameterSampling\nfrom azureml.train.hyperdrive.parameter_expressions import choice\n\n\nparam_sampling = GridParameterSampling( {\n '--learning_rate': choice(3e-5, 3e-4),\n '--num_epochs': choice(3, 4)\n }\n)",
"_____no_output_____"
]
],
[
[
"The next step is to a define how we want to measure our performance. We do so by specifying two classes:\n\n- **[PrimaryMetricGoal](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.primarymetricgoal?view=azure-ml-py)**: We want to `MAXIMIZE` the `val_accuracy` that is logged in our training script.\n- **[BanditPolicy](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.banditpolicy?view=azure-ml-py)**: A policy for early termination so that jobs which don't show promising results will stop automatically.",
"_____no_output_____"
]
],
[
[
"from azureml.train.hyperdrive import BanditPolicy\nfrom azureml.train.hyperdrive import PrimaryMetricGoal\n\nprimary_metric_name='val_accuracy'\nprimary_metric_goal=PrimaryMetricGoal.MAXIMIZE\n\nearly_termination_policy = BanditPolicy(slack_factor = 0.1, evaluation_interval=1, delay_evaluation=2)",
"_____no_output_____"
]
],
[
[
"We define an estimator as usual, but this time without the script parameters that we are planning to search.",
"_____no_output_____"
]
],
[
[
"estimator4 = TensorFlow(source_directory='./',\n entry_script='train_logging.py',\n compute_target=compute_target,\n script_params = {\n '--data_dir': azure_dataset.as_named_input('azureservicedata').as_mount(),\n '--max_seq_length': 128,\n '--batch_size': 32,\n '--steps_per_epoch': 150,\n '--export_dir':'./outputs/model',\n },\n framework_version='2.0',\n use_gpu=True,\n pip_packages=['transformers==2.0.0', 'azureml-dataprep[fuse,pandas]==1.1.29'])",
"_____no_output_____"
]
],
[
[
"Finally, we add all our parameters in a [HyperDriveConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.hyperdriveconfig?view=azure-ml-py) class and submit it as a run. ",
"_____no_output_____"
]
],
[
[
"from azureml.train.hyperdrive import HyperDriveConfig\n\nhyperdrive_run_config = HyperDriveConfig(estimator=estimator4,\n hyperparameter_sampling=param_sampling, \n policy=early_termination_policy,\n primary_metric_name=primary_metric_name, \n primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,\n max_total_runs=10,\n max_concurrent_runs=2)\n\nrun4 = experiment.submit(hyperdrive_run_config)",
"_____no_output_____"
]
],
[
[
"When we view the details of our run this time, we will see information and metrics for every run in our hyperparameter tuning.",
"_____no_output_____"
]
],
[
[
"from azureml.widgets import RunDetails\nRunDetails(run4).show()",
"_____no_output_____"
]
],
[
[
"We can retrieve the best run based on our defined metric.",
"_____no_output_____"
]
],
[
[
"best_run = run4.get_best_run_by_primary_metric()",
"_____no_output_____"
]
],
[
[
"## Register Model\n\nA registered [model](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model(class)?view=azure-ml-py) is a reference to the directory or file that make up your model. After registering a model, you and other people in your workspace can easily gain access to and deploy your model without having to run the training script again. \n\nWe need to define the following parameters to register a model:\n\n- `model_name`: The name for your model. If the model name already exists in the workspace, it will create a new version for the model.\n- `model_path`: The path to where the model is stored. In our case, this was the *export_dir* defined in our estimators.\n- `description`: A description for the model.\n\nLet's register the best run from our hyperparameter tuning.",
"_____no_output_____"
]
],
[
[
"model = best_run.register_model(model_name='azure-service-classifier', \n model_path='./outputs/model',\n datasets=[('train, test, validation data', azure_dataset)],\n description='BERT model for classifying azure services on stackoverflow posts.')",
"_____no_output_____"
]
],
[
[
"We have registered the model with Dataset reference. \n* **ACTION**: Check dataset to model link in **Azure ML studio > Datasets tab > Azure Service Dataset**.",
"_____no_output_____"
],
[
"In the [next tutorial](), we will perform inferencing on this model and deploy it to a web service.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecf57e7a11f0066290a5f154e4a6ee6d6212d050 | 92,091 | ipynb | Jupyter Notebook | Projet_UV5_KASSEL_LACROIX_TOUIL.ipynb | zouheirtouil/Smart_Home | 46286433273b9f8f857bfc4bb3c94134ebcbe882 | [
"MIT"
] | null | null | null | Projet_UV5_KASSEL_LACROIX_TOUIL.ipynb | zouheirtouil/Smart_Home | 46286433273b9f8f857bfc4bb3c94134ebcbe882 | [
"MIT"
] | null | null | null | Projet_UV5_KASSEL_LACROIX_TOUIL.ipynb | zouheirtouil/Smart_Home | 46286433273b9f8f857bfc4bb3c94134ebcbe882 | [
"MIT"
] | null | null | null | 86.066355 | 34,141 | 0.780597 | [
[
[
"**Introduction:**\r\n",
"_____no_output_____"
],
[
"Durant ce projet, nous avons testé une nouvelle approche d'identification des tendances comportementales chez les gens âgées, nous nous sommes concentrés sur les données des capteurs thermiques. Nous voulions réaliser une visualisation des données. Pour cela nous avons créés des images pour chaque mesure des capteurs. En mettant bout à bout ces images nous avons réalisé des vidéos. Ensuite, nous avons créé une fonction qui va compter le nombre de passages des personnes dans les vidéos. Afin de visualiser ces résultats nous avons utilisé, pour chaque jour, des histogrammes décrivant le nombre de passage par heure.",
"_____no_output_____"
],
[
"**Encadré par:**\r\n\r\n* FLEURY Anthony\r\n\r\n\r\n**Réalisé par:**\r\n\r\n\r\n* KASSEL Mohammed Issam\r\n* LACROIX Maxime\r\n* TOUIL Zouheir\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"_____no_output_____"
],
[
"**Remerciements :**\r\n",
"_____no_output_____"
],
[
"Nous profitons de ce Colab pour exprimer nos vifs remerciements à M. Anthony FLEURY pour le grand intérêt qui porte à ses élèves\r\n, pour nous avoir présenter cette opportunité de\r\ntravailler sur ce projet et d’appliquer en pratique ce que nous\r\navons appris. Grâce à ce projet nous avons pu élargir nos\r\nconnaissances dans le domaine de l'analyse de données et de travailler sur un problème de porté réelle.,Malgré les difficultés\r\nrencontrées, ce projet reste pour nous l’un des\r\nmeilleurs projets académiques qu’on a eu\r\nl’honneur de mener. En effet, ce dernier nous a\r\npermis de travailler en groupe, de collaborer\r\nensemble et de développer nos connaissances techniques et\r\nthéoriques.\r\n",
"_____no_output_____"
],
[
"**Connexion au google drive**",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive', force_remount=False)",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
]
],
[
[
"**Les bibliothèques Utilisées**",
"_____no_output_____"
]
],
[
[
"import pandas as pd \nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image, ImageDraw, ImageFont\nimport cv2\nimport os\nfrom os.path import isfile, join\nimport seaborn as sns\nfrom google.colab.patches import cv2_imshow\nimport datetime\nimport time\nimport datetime\nimport glob",
"_____no_output_____"
]
],
[
[
"**Regrouper tous les fichiers \".txt\" dans un seul fichier \".csv\"**",
"_____no_output_____"
]
],
[
[
"#La fonction glob retourne les chemins de tous les fichiers qui se termines avec .txt\r\nall_files = glob.glob(path + \"/*.txt\")\r\n#La boucle sera sur tous les fichiers dans all_files \r\n\r\n#Transformer tous les fichiers .txt au fichiers .csv\r\n\r\nfor f in all_files:\r\n dataframe1 = pd.read_csv(f,delimiter = ' ')\r\n dataframe1.to_csv(f +'.csv', index = None)\r\n\r\n#Fusionner des fichiers csv pour avoir un seul fichier \r\ndf_from_each_file = (pd.read_csv(f) for f in all_files)\r\ndf = pd.concat(df_from_each_file, ignore_index=True)\r\n\r\n#Exportation du fichier csv\r\n\r\ndf.to_csv('Mac1_DATA_P1',index = None)\r\n",
"_____no_output_____"
]
],
[
[
"**Regrouper les fichiers selon les jours:**",
"_____no_output_____"
]
],
[
[
"#1ere Etape : Ouvrire le fichier CSV \r\ndf = pd.read_csv('Mac1_DATA_P1.csv',header = None)\r\n\r\n#2eme Etape : Avoir la colone de date\r\nrows = df[0].unique()\r\n\r\n #3eme Etape : Exporter des fichiers CSV selon les jours\r\n\r\nfor row in rows:\r\n day = df[df[0] == row]\r\n day.to_csv('jour-' + row +'.csv', index = None)\r\n \r\n",
"_____no_output_____"
]
],
[
[
"**Regrouper les fichiers par quatre heures:**",
"_____no_output_____"
]
],
[
[
"path = '/Users/maximelacroix/Desktop/DataOtherMonth/3669959/'\n\nall_files = glob.glob(path +'/*.csv') #liste les fichiers de données de chaque jours \n\ndef select_3H(heure): \n L = []\n for h in range(3):\n for m in range(60):\n for s in range(60):\n H=str(h+heure)\n M=str(m)\n S=str(s)\n L.append(H +':'+ M +':'+ S)\n return df[df[1].isin(L)]\n\n# on crée ensuite les fichiers csv en utilisant la fonction et to_csv\nfor f in all_files: \n f.replace('.csv','') #essai infructueux afin de changer le nom jour_XX.csv_0-3H.csv en jour_XX_0-3H.csv \n select_3H(0).to_csv(f + '_0-3H.csv', index = None)\n select_3H(4).to_csv(f + '_4-7H.csv', index = None)\n select_3H(8).to_csv(f + '_8-11H.csv', index = None)\n select_3H(12).to_csv(f + '_12-15H.csv', index = None)\n select_3H(16).to_csv(f + '_16-19H.csv', index = None)\n select_3H(20).to_csv(f + '_20-23H.csv', index = None)\n#les fichiers seront alors dans le même dossier que les csv des jours\n",
"_____no_output_____"
]
],
[
[
"**Traitement des données:**\r\n",
"_____no_output_____"
]
],
[
[
"col = ['date', 'heure']\r\n#col = ['x','date', 'heure'] Pour les données mac2\r\nfor k in range(1,33):\r\n col.append(str(k))\r\ncol+= ['*', 'NaN','hour']\r\n",
"_____no_output_____"
]
],
[
[
"**Lecture des données du premier jour entre 0 et 3h avec mac1. Le but est de créer une vidéo à partir des images générées avec des nuances de gris, associées aux températures enregistrées, afin de déterminer un seuil optimal qui permet la détection du passage d'un patient devant le capteur**",
"_____no_output_____"
]
],
[
[
"path= \"/content/drive/MyDrive/fewdays_mik_zt-master-DataOtherMonth-3669959/test/mac1/jour1/hour0.csv\"\ndf = pd.read_csv(path)\ndf.columns= col\ndf",
" date heure 1 2 3 ... 31 32 * NaN hour\n0 2019-06-05 00:00:00 12.6 13.1 13.4 ... 13.1 12.9 * 0.0 0\n1 2019-06-05 00:00:00 12.6 13.1 13.4 ... 13.1 12.9 * 0.0 0\n2 2019-06-05 00:00:00 12.5 13.0 13.4 ... 13.0 12.9 * 0.0 0\n3 2019-06-05 00:00:00 12.5 13.0 13.4 ... 13.1 13.0 * 0.0 0\n4 2019-06-05 00:00:00 12.7 13.2 13.5 ... 13.2 13.1 * 0.0 0\n... ... ... ... ... ... ... ... ... .. ... ...\n92003 2019-06-05 04:59:59 11.8 12.3 12.5 ... 12.8 12.7 * 0.0 4\n92004 2019-06-05 04:59:59 11.8 12.3 12.5 ... 12.9 12.7 * 0.0 4\n92005 2019-06-05 04:59:59 11.8 12.3 12.6 ... 12.9 12.8 * 0.0 4\n92006 2019-06-05 04:59:59 11.8 12.3 12.6 ... 12.8 12.7 * 0.0 4\n92007 2019-06-05 04:59:59 11.8 12.2 12.5 ... 12.8 12.6 * 0.0 4\n\n[92008 rows x 37 columns]\n"
],
[
"#fonction qui calcule les nuances de gris à partir des températures \ndef f(x):\n return ((2550*x/97) - (27285/97))",
"_____no_output_____"
],
[
"#création chaque image avec (date/ heure) pour 1 df(4h)\n\narray = np.zeros([24, 320], dtype=np.uint8)\nl=len(df)\nfor j in range(l):\n\n x = []\n y = []\n for k in range(1,33):\n x.append(pd.to_numeric(df[str(k)][j]))\n for i in range(len(x)): \n y.append(int(f(x[i])))\n for k in range(32):\n array[:,k*10:(k+1)*10] = y[k] \n x = []\n y = []\n fnt = ImageFont.load_default()\n img = Image.fromarray(array)\n d = ImageDraw.Draw(img)\n d.text((1,0), text= df['date'][j] + ' ' + df['heure'][j], font=fnt, fill=(255))\n print(img)\n img.save('/content/drive/MyDrive/fewdays_mik_zt-master-DataOtherMonth-3669959/test/mac1/images/testrgb_' + str(j) + '.png')",
"_____no_output_____"
]
],
[
[
"Exemple des images créées: https://drive.google.com/drive/folders/1if-jxrwy4Y2rFvAIgmMf7Swk59J5tqMW?usp=sharing",
"_____no_output_____"
]
],
[
[
"#fonction pour créer la vidéo à partir des images \n\ndef convert_to_video(pathIn, pathOut, fps):\n frame_array=[]\n files=[f for f in os.listdir(pathIn) if isfile(join(pathIn, f))]\n for i in range(len(files)):\n filename=pathIn+files[i]\n img=cv2.imread(filename)\n print(img.shape)\n height = img.shape[0]\n width = img.shape[1]\n size=(width , height)\n \n #for k in range(time):\n frame_array.append(img)\n out = cv2.VideoWriter(pathOut, cv2.VideoWriter_fourcc(*'mp4v'), fps, size)\n for i in range(len(frame_array)):\n out.write(frame_array[i])\n out.release()\n \n\ndirectory= '/content/drive/MyDrive/fewdays_mik_zt-master-DataOtherMonth-3669959/test/mac1'\npathIn=directory + '/images/'\npathOut=directory + '/videos/video_test.avi'\nfps=10\ntime= 0.1\n\nconvert_to_video(pathIn, pathOut, fps)",
"_____no_output_____"
]
],
[
[
"Exemple d'une vidéo créée: https://drive.google.com/file/d/1Wc_rOay4D5d6BJ9nJ7QaTN7ImGU1krur/view?usp=sharing",
"_____no_output_____"
]
],
[
[
"#fonction pour transformer les dates de \"str\" en \"datetime\"\r\ndef time_diff(start, end):\r\n start=datetime.datetime.strptime(start,'%H:%M:%S')\r\n end=datetime.datetime.strptime(end,'%H:%M:%S')\r\n return start,end",
"_____no_output_____"
],
[
"#traitement des données mac1 pour un seul patient ",
"_____no_output_____"
]
],
[
[
"**Fonction d'optimisation qui permet de conserver les lignes ayant une température maximale supérieure à 17 pour mac1**",
"_____no_output_____"
]
],
[
[
"def optim(dataf):\n min=17\n l =len(dataf)\n df_f1=pd.DataFrame(columns=col)\n for j in range(l):\n x =[]\n for k in range(2,34):\n x.append(dataf.iloc[j][k])\n max1=max(x)\n if max1>min :\n df_f1=df_f1.append(dataf.iloc[j])\n return df_f1\n\n",
"_____no_output_____"
]
],
[
[
"**Fonction qui calcule la différence de temps entre 2 enregistrements succéssifs**",
"_____no_output_____"
]
],
[
[
"def time_diff(start, end):\n start=datetime.datetime.strptime(start,'%H:%M:%S')\n end=datetime.datetime.strptime(end,'%H:%M:%S')\n return end-start",
"_____no_output_____"
]
],
[
[
"**Fonction qui calcule le nombre de passage du patient, en se basant sur un temps minimal statique (5 secondes) pour mac1**\n\n",
"_____no_output_____"
]
],
[
[
"tmin =5\ndef nombre_passage(df1):\n nt=0\n n=len(df1)\n for i in range(n-1):\n s=time_diff(df1.iloc[i][2],df1.iloc[i+1][2])\n if(s.seconds>tmin):\n nt+=1\n return nt\n\n",
"_____no_output_____"
],
[
"#création d'un dataframe pour ajouter le nombre de passage associé à chaque intervalle\nh=['0-3','4-7','8-11','12-15','16-19','20-23']\ndf_final=pd.DataFrame({'hours':h})\ndf_final",
"_____no_output_____"
],
[
"directory= '/content/drive/MyDrive/fewdays_mik_zt-master-DataOtherMonth-3669959/test/mac1'",
"_____no_output_____"
]
],
[
[
"**Calcul du nombre de passage pour tous les intervalles du temps durant 1 jour**\n\n\n",
"_____no_output_____"
]
],
[
[
"files=[f for f in os.listdir(pathIn) if isfile(join(pathIn, f))]\nfiles.sort()\ns=[]\nfor i in range(len(files)):\n df= pd.read_csv(directory +'/jour3/hour'+str(i)+'.csv')\n df= optim(df)\n np=nombre_passage(df)\n s.append(np)\ndf_final['jour3']=s\nprint(df_final) \n",
"_____no_output_____"
]
],
[
[
"**On sauvegarde les données mac1 obtenues pour tous les jours dans**\n**un fichier csv**\n\n",
"_____no_output_____"
]
],
[
[
"df_final.to_csv('/content/drive/MyDrive/fewdays_mik_zt-master-DataOtherMonth-3669959/test/mac1/resultat.csv')",
"_____no_output_____"
]
],
[
[
"**Visualisation des résultats en histogrammes**",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 6, figsize=(30, 3), sharey=True)\nax[0].bar(df_final['hours'], df_final['jour1'], label=\"jour1\")\nax[0].set_title('03/06/2019_Lundi')\nax[0].set(xlabel='heures', ylabel='nombre de passage')\nax[1].bar(df_final['hours'], df_final['jour2'], label=\"jour2\")\nax[1].set_title('04/06/2019_Mardi')\nax[1].set(xlabel='heures', ylabel='nombre de passage')\n\nax[2].bar(df_final['hours'], df_final['jour3'], label=\"jour3\")\nax[2].set_title('05/06/2019_Mercredi')\nax[2].set(xlabel='heures', ylabel='nombre de passage')\n\nax[3].bar(df_final['hours'], df_final['jour4'], label=\"jour4\")\nax[3].set_title('06/06/2019_Jeudi')\nax[3].set(xlabel='heures', ylabel='nombre de passage')\n\nax[4].bar(df_final['hours'], df_final['jour5'], label=\"jour5\")\nax[4].set_title('07/06/2019_Vendredi')\nax[4].set(xlabel='heures', ylabel='nombre de passage')\n\nax[5].bar(df_final['hours'], df_final['jour6'], label=\"jour6\")\nax[5].set_title('08/06/2019_Samedi')\nax[5].set(xlabel='heures', ylabel='nombre de passage')\n\nfig.suptitle('Nombre de Passage par jour')\nplt.show()",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"#traitement des données mac2 pour un seul patient ",
"_____no_output_____"
]
],
[
[
"**On ajoute une colonne pour les données mac2**",
"_____no_output_____"
]
],
[
[
"col = ['x','date', 'heure']\nfor k in range(1,33):\n col.append(str(k))\ncol+= ['*', 'NaN','hour']\n",
"_____no_output_____"
],
[
"directory= '/content/drive/MyDrive/fewdays_mik_zt-master-DataOtherMonth-3669959/test/mac2'",
"_____no_output_____"
],
[
"df= pd.read_csv(directory +'/jour1/hour'+str(2)+'.csv')\n",
"_____no_output_____"
]
],
[
[
"**Fonction d'optimisation qui permet de conserver les lignes ayant une température maximale supérieure à 17 pour mac2**",
"_____no_output_____"
]
],
[
[
"def optim2(dataf):\n min=17\n l =len(dataf)\n df_f1=pd.DataFrame(columns=col)\n for j in range(l):\n x =[]\n for k in range(3,35):\n x.append(dataf.iloc[j][k])\n max1=max(x)\n if max1>min :\n df_f1=df_f1.append(dataf.iloc[j])\n return df_f1",
"_____no_output_____"
]
],
[
[
"**Fonction qui calcule le nombre de passage du patient, en se basant sur un temps minimal statique(5 secondes) pour mac2**",
"_____no_output_____"
]
],
[
[
"tmin =5\ndef nombre_passage2(df1):\n nt=0\n n=len(df1)\n for i in range(n-1):\n s=time_diff(df1.iloc[i][4],df1.iloc[i+1][4])\n if(s.seconds>tmin):\n nt+=1\n return nt\n#np=nombre_passage(df)\n#np",
"_____no_output_____"
],
[
"#création d'un dataframe pour ajouter le nombre de passage associé à chaque intervalle\nh=['0-3','4-7','8-11','12-15','16-19','20-23']\ndf_final=pd.DataFrame({'hours':h})\ndf_final",
"_____no_output_____"
]
],
[
[
"**Calcul du nombre de passage pour tous les intervalles du temps durant 1 jour**",
"_____no_output_____"
]
],
[
[
"files=[f for f in os.listdir(pathIn) if isfile(join(pathIn, f))]\nfiles.sort()\ns=[]\nfor i in range(len(files)):\n df= pd.read_csv(directory +'/jour3/hour'+str(i)+'.csv')\n df= optim2(df)\n np=nombre_passage2(df)\n s.append(np)\ndf_final['jour3']=s\nprint(df_final)",
"_____no_output_____"
]
],
[
[
"**On sauvegarde les données mac2 obtenues pour tous les jours dans**\n**un fichier csv**",
"_____no_output_____"
]
],
[
[
"df_final.to_csv('/content/drive/MyDrive/fewdays_mik_zt-master-DataOtherMonth-3669959/test/mac2/resultat2.csv')",
"_____no_output_____"
]
],
[
[
"**Visualisation des résultats en histogrammes**",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 14, figsize=(60, 3), sharey=True)\nax[0].bar(df_final['hours'], df_final['jour1'], label=\"jour1\")\nax[0].set_title('27/05/2019_Lundi')\nax[0].set(xlabel='heures', ylabel='nombre de passage')\nax[1].bar(df_final['hours'], df_final['jour2'], label=\"jour2\")\nax[1].set_title('29/05/2019_Mercredi')\nax[1].set(xlabel='heures', ylabel='nombre de passage')\n\nax[2].bar(df_final['hours'], df_final['jour3'], label=\"jour3\")\nax[2].set_title('30/05/2019_Jeudi')\nax[2].set(xlabel='heures', ylabel='nombre de passage')\n\nax[3].bar(df_final['hours'], df_final['jour4'], label=\"jour4\")\nax[3].set_title('31/05/2019_Vendredi')\nax[3].set(xlabel='heures', ylabel='nombre de passage')\n\nax[4].bar(df_final['hours'], df_final['jour5'], label=\"jour5\")\nax[4].set_title('01/06/2019_Samedi')\nax[4].set(xlabel='heures', ylabel='nombre de passage')\n\nax[5].bar(df_final['hours'], df_final['jour6'], label=\"jour6\")\nax[5].set_title('02/06/2019_Dimanche')\nax[5].set(xlabel='heures', ylabel='nombre de passage')\n\nax[6].bar(df_final['hours'], df_final['jour7'], label=\"jour1\")\nax[6].set_title('03/06/2019_Lundi')\nax[6].set(xlabel='heures', ylabel='nombre de passage')\nax[7].bar(df_final['hours'], df_final['jour8'], label=\"jour2\")\nax[7].set_title('04/06/2019_Mardi')\nax[7].set(xlabel='heures', ylabel='nombre de passage')\n\nax[8].bar(df_final['hours'], df_final['jour9'], label=\"jour3\")\nax[8].set_title('05/06/2019_Mercredi')\nax[8].set(xlabel='heures', ylabel='nombre de passage')\n\nax[9].bar(df_final['hours'], df_final['jour10'], label=\"jour4\")\nax[9].set_title('06/06/2019_Jeudi')\nax[9].set(xlabel='heures', ylabel='nombre de passage')\n\nax[10].bar(df_final['hours'], df_final['jour11'], label=\"jour5\")\nax[10].set_title('07/06/2019_Vendredi')\nax[10].set(xlabel='heures', ylabel='nombre de passage')\n\nax[11].bar(df_final['hours'], df_final['jour12'], label=\"jour6\")\nax[11].set_title('08/06/2019_Samedi')\nax[11].set(xlabel='heures', ylabel='nombre de passage')\n\nax[12].bar(df_final['hours'], df_final['jour13'], label=\"jour5\")\nax[12].set_title('10/06/2019_Lundi')\nax[12].set(xlabel='heures', ylabel='nombre de passage')\n\nax[13].bar(df_final['hours'], df_final['jour14'], label=\"jour6\")\nax[13].set_title('11/06/2019_Mardi')\nax[13].set(xlabel='heures', ylabel='nombre de passage')\nfig.suptitle('Nombre de Passage par jour')",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"**Interprétaion des résultats :**",
"_____no_output_____"
],
[
"* Au niveau du premier capteur on remarque qu’il a enregistré les températures pendant 6 jours contrairement au deuxième capteur qui était opérationnel pendant 14 jours.\r\n* On remarque qu'il y a une différence dans le nombre de passages calculés par les 2 capteurs dans les intervalles de temps du 03/06 jusqu'au 08/06, ce qui pourrait être expliqué par la présence des 2 capteurs dans différentes positions.\r\n* Absence des données, et remplacement des intervalles non enregistrés par 0 comme valeur par défaut, ce qui vient influencer les résultats obtenus.\r\n* Pour le capteur 1, souvent, le patient enregistre un nombre de passage élevé par jour, dans l'intervalle du temps entre 20 et minuit.\r\n* Pour le capteur 2, on obtient des jours avec un nombre de passage proche de 0, on remarque aussi une absence de mouvement pendant la nuit contrairement au capteur 1, ceci peut indiquer une anomalie dans le comportement du patient.\r\n\r\n",
"_____no_output_____"
],
[
"**Exploitation des résultats:**",
"_____no_output_____"
],
[
"* A la fin de notre étude, nous sommes arrivés à calculer le nombre de passage du patient devant les deux capteurs. \r\n* Avec ces résultats, nous pouvons calculer la durée moyenne de passage d'un individu devant le capteur, analyser le nombre de passage par jours, et créer des fonctions qui permettent d'identifier des anomalies de comportement du patient.\r\n\r\n* Pour la détection du passage du patient, nous avons défini un seuil statique. Ce qui en a résulté est un nombre de passage très élevé durant la journée. Ceci correspond aux changements de la température du milieu extérieur.\r\n* Pour résoudre ce problème, nous pouvons établir un seuil dynamique à partir duquel on peut considérer que l'on détecte une personne devant le capteur.\r\n\r\n\r\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecf590fea3a494f247a2636965394aac04b4d639 | 65,895 | ipynb | Jupyter Notebook | DQN Train.ipynb | joeworsh/drl_agent_navigation | 2cc748a549f1b0463bb416a37666f07724927b20 | [
"MIT"
] | null | null | null | DQN Train.ipynb | joeworsh/drl_agent_navigation | 2cc748a549f1b0463bb416a37666f07724927b20 | [
"MIT"
] | null | null | null | DQN Train.ipynb | joeworsh/drl_agent_navigation | 2cc748a549f1b0463bb416a37666f07724927b20 | [
"MIT"
] | null | null | null | 206.567398 | 17,100 | 0.91752 | [
[
[
"# Train DQN",
"_____no_output_____"
]
],
[
[
"# autoreload code changes\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"from banana_env import BananaEnv\nfrom joe_agents.dqn_agent import DqnAgent",
"_____no_output_____"
]
],
[
[
"## Train a Default Network",
"_____no_output_____"
]
],
[
[
"# create the environment\nexe = \"../../deep-reinforcement-learning/p1_navigation/Banana_Windows_x86_64/Banana.exe\"\nevn_config = {\"executable\": exe, \"train_mode\": True}\nenv = BananaEnv(evn_config)",
"INFO:unityagents:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of External Brains : 1\n Lesson number : 0\n Reset Parameters :\n\t\t\nUnity brain name: BananaBrain\n Number of Visual Observations (per agent): 0\n Vector Observation space type: continuous\n Vector Observation space size (per agent): 37\n Number of stacked Vector Observation: 1\n Vector Action space type: discrete\n Vector Action space size (per agent): 4\n Vector Action descriptions: , , , \n"
],
[
"params = {\n \"episodes\": 5000,\n \"batch_size\": 64,\n \"buffer_size\": 10000,\n \"learning_rate\": 5e-4,\n \"discount_rate\": 0.99,\n \"update_rate\": 4,\n \"epsilon_decay\": 0.99,\n \"epsilon_decay_rate\": 1,\n \"min_epsilon\": 0.01,\n \"replay\": \"prioritized\",\n \"prioritized_replay_damp\": 0.6,\n \"e_constant\": 1e-6,\n \"prioritized_replay_beta_anneal_rate\": 100,\n \"learning_start\": 64,\n \"double_dqn\": True,\n \"deuling_dqn\": True\n}",
"_____no_output_____"
],
[
"agent = DqnAgent(37, 4, params)",
"_____no_output_____"
],
[
"scores, epsilons, buffer_stats = agent.train(env)",
"100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5000/5000 [4:50:44<00:00, 3.49s/it]\n"
],
[
"plt.plot(scores)\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(epsilons)\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(buffer_stats[\"batch_reward_sums\"])\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(buffer_stats[\"batch_buffer_len\"])\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(buffer_stats[\"prioritized_replay_beta\"])\nplt.show()",
"_____no_output_____"
],
[
"env.close()",
"_____no_output_____"
],
[
"# save the agent for playback\nagent.save()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf59a3f4cae526ee326e52a7cbd29b9eb04bcb1 | 24,717 | ipynb | Jupyter Notebook | notebooks/MLA-NLP-Lecture3-Recurrent-Neural-Networks.ipynb | dsMOOC/MLA-NLP | 2bef5f28b0098df815849ecbbd1737db599a1bec | [
"MIT-0",
"MIT"
] | null | null | null | notebooks/MLA-NLP-Lecture3-Recurrent-Neural-Networks.ipynb | dsMOOC/MLA-NLP | 2bef5f28b0098df815849ecbbd1737db599a1bec | [
"MIT-0",
"MIT"
] | null | null | null | notebooks/MLA-NLP-Lecture3-Recurrent-Neural-Networks.ipynb | dsMOOC/MLA-NLP | 2bef5f28b0098df815849ecbbd1737db599a1bec | [
"MIT-0",
"MIT"
] | null | null | null | 31.97542 | 360 | 0.532346 | [
[
[
"",
"_____no_output_____"
],
[
"# <a name=\"0\">Machine Learning Accelerator - Natural Language Processing - Lecture 3</a>\n\n## Recurrent Neural Networks (RNNs) for the Product Review Problem - Classify Product Reviews as Positive or Not\n\nIn this exercise, we will learn how to use Recurrent Neural Networks. \n\nWe will follow these steps:\n1. <a href=\"#1\">Reading the dataset</a>\n2. <a href=\"#2\">Exploratory data analysis</a>\n3. <a href=\"#3\">Train-validation dataset split</a>\n4. <a href=\"#4\">Text processing and transformation</a>\n5. <a href=\"#5\">Using GloVe Word Embeddings</a>\n6. <a href=\"#6\">Training and validating model</a>\n7. <a href=\"#7\">Improvement ideas</a>\n\nOverall dataset schema:\n* __reviewText:__ Text of the review\n* __summary:__ Summary of the review\n* __verified:__ Whether the purchase was verified (True or False)\n* __time:__ UNIX timestamp for the review\n* __log_votes:__ Logarithm-adjusted votes log(1+votes)\n* __isPositive:__ Whether the review is positive or negative (1 or 0)\n\n__Important note:__ One big distinction betweeen the regular neural networks and RNNs is that RNNs work with sequential data. In our case, RNNs will help us with the text field. If we also want to consider other fields such as time, log_votes, verified, etc. , we need to use the regular neural networks with the RNN network.",
"_____no_output_____"
]
],
[
[
"! pip install -q gluonnlp mxnet",
"\u001b[33mYou are using pip version 10.0.1, however version 20.2b1 is available.\r\nYou should consider upgrading via the 'pip install --upgrade pip' command.\u001b[0m\r\n"
],
[
"import re\nimport numpy as np\nimport mxnet as mx\nfrom mxnet import gluon, nd, autograd\nfrom mxnet.gluon import nn, rnn, Trainer\nfrom mxnet.gluon.loss import SigmoidBinaryCrossEntropyLoss \nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
]
],
[
[
"## 1. <a name=\"1\">Reading the dataset</a>\n(<a href=\"#0\">Go to top</a>)",
"_____no_output_____"
],
[
"Let's read the dataset below and fill-in the reviewText field. We will use this field as input to our ML model.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndf = pd.read_csv('../data/examples/AMAZON-REVIEW-DATA-CLASSIFICATION.csv')",
"_____no_output_____"
]
],
[
[
"Let's look at the first five rows in the dataset. As you can see the __log_votes__ field is numeric. That's why we will build a regression model.",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
]
],
[
[
"## 2. <a name=\"2\">Exploratory Data Analysis</a>\n(<a href=\"#0\">Go to top</a>)",
"_____no_output_____"
],
[
"Let's look at the range and distribution of log_votes",
"_____no_output_____"
]
],
[
[
"df[\"isPositive\"].value_counts()",
"_____no_output_____"
]
],
[
[
"We can check the number of missing values for each columm below.",
"_____no_output_____"
]
],
[
[
"print(df.isna().sum())",
"reviewText 11\nsummary 14\nverified 0\ntime 0\nlog_votes 0\nisPositive 0\ndtype: int64\n"
]
],
[
[
"We have missing values in our text fields.",
"_____no_output_____"
],
[
"## 3. <a name=\"3\">Train-validation split</a>\n(<a href=\"#0\">Go to top</a>)",
"_____no_output_____"
],
[
"Let's split the dataset into training and validation",
"_____no_output_____"
]
],
[
[
"# This separates 15% of the entire dataset into validation dataset.\ntrain_text, val_text, train_label, val_label = \\\n train_test_split(df[\"reviewText\"].tolist(),\n df[\"isPositive\"].tolist(),\n test_size=0.10,\n shuffle=True,\n random_state=324)",
"_____no_output_____"
]
],
[
[
"## 4. <a name=\"4\">Text processing and Transformation</a>\n(<a href=\"#0\">Go to top</a>)\n\nWe will apply the following processes here:\n* __Text cleaning:__ Simple text cleaning operations. We won't do stemming or lemmatization as our word vectors already cover different forms of words. We are using GloVe word embeddings for 6 billion words, phrases or punctuations in this example.\n* __Tokenization:__ Tokenizing all sentences\n* __Creating vocabulary:__ We will create a vocabulary of the tokens. In this vocabulary, tokens will map to unique ids, such as \"car\"->32, \"house\"->651, etc.\n* __Transforming text:__ Tokenized sentences will be mapped to unique ids. For example: [\"this\", \"is\", \"sentence\"] -> [13, 54, 412].",
"_____no_output_____"
]
],
[
[
"import nltk, gluonnlp\nfrom nltk.tokenize import word_tokenize\n\nnltk.download('punkt')\n\ndef cleanStr(text):\n \n # Check if the sentence is a missing value\n if isinstance(text, str) == False:\n text = \"\"\n \n # Remove leading/trailing whitespace\n text = text.lower().strip()\n # Remove extra space and tabs\n text = re.sub('\\s+', ' ', text)\n # Remove HTML tags/markups\n text = re.compile('<.*?>').sub('', text)\n return text\n\ndef tokenize(text):\n tokens = []\n text = cleanStr(text)\n words = word_tokenize(text)\n for word in words:\n tokens.append(word)\n return tokens\n\ndef createVocabulary(text_list, min_freq):\n all_tokens = []\n for sentence in text_list:\n all_tokens += tokenize(sentence)\n # Calculate token frequencies\n counter = gluonnlp.data.count_tokens(all_tokens)\n # Create the vocabulary\n vocab = gluonnlp.Vocab(counter,\n min_freq = min_freq,\n unknown_token = '<unk>',\n padding_token = None,\n bos_token = None,\n eos_token = None)\n \n return vocab\n\ndef transformText(text, vocab, max_length):\n token_arr = np.zeros((max_length,))\n tokens = tokenize(text)[0:max_length]\n for idx, token in enumerate(tokens):\n try:\n # Use the vocabulary index of the token\n token_arr[idx] = vocab.token_to_idx[token]\n except:\n token_arr[idx] = 0 # Unknown word\n return token_arr",
"[nltk_data] Downloading package punkt to /home/ec2-user/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n"
]
],
[
[
"In order to keep the training time low, we only consider the first 250 words (max_length) in sentences. We also only use words that occur more than 5 times in the all sentences (min_freq).",
"_____no_output_____"
]
],
[
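[
"# Illustrative check (added note, not part of the original lecture): see what cleanStr() and\n# tokenize() from the previous cell produce for a made-up review sentence. The sample text\n# below is invented purely for illustration.\nsample = \"This <br>Keyboard   is GREAT !\"\nprint(cleanStr(sample))\nprint(tokenize(sample))",
"_____no_output_____"
],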
[
"min_freq = 5\nmax_length = 250\n\nprint(\"Creating the vocabulary\")\nvocab = createVocabulary(train_text, min_freq)\nprint(\"Transforming training texts\")\ntrain_text_transformed = nd.array([transformText(text, vocab, max_length) for text in train_text])\nprint(\"Transforming validation texts\")\nval_text_transformed = nd.array([transformText(text, vocab, max_length) for text in val_text])",
"Creating the vocabulary\nTransforming training texts\nTransforming validation texts\n"
]
],
[
[
"Let's see some unique ids for some words.",
"_____no_output_____"
]
],
[
[
"print(\"Vocabulary index for computer:\", vocab['computer'])\nprint(\"Vocabulary index for beautiful:\", vocab['beautiful'])\nprint(\"Vocabulary index for code:\", vocab['code'])",
"Vocabulary index for computer: 67\nVocabulary index for beautiful: 1929\nVocabulary index for code: 402\n"
]
],
[
[
"## 5. <a name=\"5\">Using pre-trained GloVe Word Embeddings</a>\n(<a href=\"#0\">Go to top</a>)\n\nIn this example, we will use GloVe word vectors. `'glove.6B.50d.txt'` file gives us 6 billion words/phrases vectors. Each word vector has 50 numbers in it. The following code shows how to get the word vectors and create an embedding matrix from them. We will connect our vocabulary indexes to the GloVe embedding with the `get_vecs_by_tokens()` function.",
"_____no_output_____"
]
],
[
[
"from mxnet.contrib import text\nglove = text.embedding.create('glove',\n pretrained_file_name = 'glove.6B.50d.txt')\nembedding_matrix = glove.get_vecs_by_tokens(vocab.idx_to_token)",
"_____no_output_____"
]
],
[
[
"## 6. <a name=\"6\">Training and validation</a>\n(<a href=\"#0\">Go to top</a>)\n\nWe have processed our text data and also created our embedding matrixes from GloVe. Now, it is time to start the training process.",
"_____no_output_____"
],
[
"We will set our parameters below",
"_____no_output_____"
]
],
[
[
"# Size of the state vectors\nhidden_size = 12\n\n# General NN training parameters\nlearning_rate = 0.01\nepochs = 15\nbatch_size = 32\n\n# Embedding vector and vocabulary sizes\nnum_embed = 50 # glove.6B.50d.txt\nvocab_size = len(vocab.token_to_idx.keys())",
"_____no_output_____"
]
],
[
[
"We need to put our data into correct format before the process.",
"_____no_output_____"
]
],
[
[
"from mxnet.gluon.data import ArrayDataset, DataLoader\n\ntrain_label = nd.array(train_label)\nval_label = nd.array(val_label)\n\ntrain_dataset = ArrayDataset(train_text_transformed, train_label)\ntrain_loader = DataLoader(train_dataset, batch_size=batch_size)",
"_____no_output_____"
]
],
[
[
"Our sequential model is made of these layers:\n* Embedding layer: This is where our words/tokens are mapped to word vectors.\n* RNN layer: We will be using a simple RNN model. We won't stack RNN units in this example. It uses a sinle RNN unit with its hidden state size of 12. More details about the RNN is available [here](https://mxnet.incubator.apache.org/api/python/docs/api/gluon/rnn/index.html#mxnet.gluon.rnn.RNN).\n* Dense layer: A dense layer with a single neuron is used to output our log_votes prediction.",
"_____no_output_____"
]
],
[
[
"context = mx.cpu() # use mx.gpu() if you are using GPU\n\nmodel = nn.Sequential()\nmodel.add(nn.Embedding(vocab_size, num_embed), # Embedding layer\n rnn.RNN(hidden_size, num_layers=1), # Recurrent layer\n nn.Dense(1, activation='sigmoid')) # Output layer",
"_____no_output_____"
]
],
[
[
"Let's initialize this network. Then, we will need to make the embedding layer use our GloVe word vectors.",
"_____no_output_____"
]
],
[
[
"# Initialize networks parameters\nmodel.collect_params().initialize(mx.init.Xavier(), ctx=context)\n\n# We set the embedding layer's parameters from GloVe\nmodel[0].weight.set_data(embedding_matrix.as_in_context(context))\n# We won't change/train the embedding layer\nmodel[0].collect_params().setattr('grad_req', 'null')",
"_____no_output_____"
]
],
[
[
"We will define the trainer and loss function below. __Binary cross-entropy loss__ is used as this is a binary classification problem.\n$$\n\\mathrm{BinaryCrossEntropyLoss} = -\\sum_{examples}{(y\\log(p) + (1 - y)\\log(1 - p))}\n$$",
"_____no_output_____"
]
],
[
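[
"# Quick numeric illustration of the binary cross-entropy formula above (added example;\n# the values of p and y are made up and are not part of the original lecture).\np, y = 0.9, 1.0   # predicted probability and true label for a single example\nbce = -(y * np.log(p) + (1 - y) * np.log(1 - p))\nprint(bce)        # ~0.105: a confident, correct prediction gives a small loss",
"_____no_output_____"
],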
[
"# Setting our trainer\ntrainer = Trainer(model.collect_params(),\n 'sgd',\n {'learning_rate': learning_rate})\n\n# We will use Binary Cross-entropy loss\ncross_ent_loss = SigmoidBinaryCrossEntropyLoss(from_sigmoid=True) ",
"_____no_output_____"
]
],
[
[
"Now, it is time to start the training process. We will print the Binary cross-entropy loss loss after each epoch.",
"_____no_output_____"
]
],
[
[
"import time\nfor epoch in range(epochs):\n start = time.time()\n training_loss = 0\n # Training loop, train the network\n for idx, (data, target) in enumerate(train_loader):\n\n data = data.as_in_context(context)\n target = target.as_in_context(context)\n \n with autograd.record():\n output = model(data)\n L = cross_ent_loss(output, target)\n training_loss += nd.sum(L).asscalar()\n L.backward()\n trainer.step(data.shape[0])\n \n # Calculate validation loss\n val_predictions = model(val_text_transformed.as_in_context(context))\n val_loss = nd.sum(cross_ent_loss(val_predictions, val_label)).asscalar()\n \n # Let's take the average losses\n training_loss = training_loss / len(train_label)\n val_loss = val_loss / len(val_label)\n \n end = time.time()\n print(\"Epoch %s. Train_loss %f Validation_loss %f Seconds %f\" % \\\n (epoch, training_loss, val_loss, end-start))",
"Epoch 0. Train_loss 0.615788 Validation_loss 0.571962 Seconds 12.600270\nEpoch 1. Train_loss 0.547803 Validation_loss 0.528089 Seconds 12.478138\nEpoch 2. Train_loss 0.517537 Validation_loss 0.506733 Seconds 12.451277\nEpoch 3. Train_loss 0.495160 Validation_loss 0.488392 Seconds 12.529691\nEpoch 4. Train_loss 0.479041 Validation_loss 0.474149 Seconds 14.158768\nEpoch 5. Train_loss 0.465080 Validation_loss 0.463242 Seconds 12.570302\nEpoch 6. Train_loss 0.453633 Validation_loss 0.454745 Seconds 12.555365\nEpoch 7. Train_loss 0.444815 Validation_loss 0.448084 Seconds 12.429899\nEpoch 8. Train_loss 0.437525 Validation_loss 0.442780 Seconds 12.556150\nEpoch 9. Train_loss 0.431338 Validation_loss 0.438234 Seconds 12.401985\nEpoch 10. Train_loss 0.425976 Validation_loss 0.434568 Seconds 12.335888\nEpoch 11. Train_loss 0.421125 Validation_loss 0.431524 Seconds 12.500989\nEpoch 12. Train_loss 0.416877 Validation_loss 0.428808 Seconds 12.503373\nEpoch 13. Train_loss 0.413092 Validation_loss 0.426535 Seconds 12.503922\nEpoch 14. Train_loss 0.409693 Validation_loss 0.424555 Seconds 12.439021\n"
]
],
[
[
"Let's see some validation results below",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import classification_report, accuracy_score\n\n# Get validation predictions\nval_predictions = model(val_text_transformed.as_in_context(context))\n\nval_label = nd.array(val_label)\n\n# Round predictions: 1 if pred>0.5, 0 otherwise\nval_predictions = np.round(val_predictions.asnumpy())\n\nprint(\"Classification Report\")\nprint(classification_report(val_label.asnumpy(), val_predictions))\nprint(\"Accuracy\")\nprint(accuracy_score(val_label.asnumpy(), val_predictions))",
"Classification Report\n precision recall f1-score support\n\n 0.0 0.77 0.69 0.73 2605\n 1.0 0.83 0.88 0.85 4395\n\n micro avg 0.81 0.81 0.81 7000\n macro avg 0.80 0.78 0.79 7000\nweighted avg 0.81 0.81 0.81 7000\n\nAccuracy\n0.8072857142857143\n"
]
],
[
[
"## 7. <a name=\"7\">Improvement ideas</a>\n(<a href=\"#0\">Go to top</a>)\n\nWe can improve our model by\n* Changing hyper-parameters\n* Using more advanced architetures such as Gated Recurrent Units (GRU) and Long Short-term Memory networks (LSTM).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecf5ba2ddec6716017033288d665d494ea483c58 | 320,007 | ipynb | Jupyter Notebook | fred_inflation.ipynb | aarongilman/research | 39a2848c840e131d8e633bc3841fb43662a50ec3 | [
"BSD-2-Clause"
] | 1 | 2021-12-29T07:10:02.000Z | 2021-12-29T07:10:02.000Z | fred_inflation.ipynb | aarongilman/research | 39a2848c840e131d8e633bc3841fb43662a50ec3 | [
"BSD-2-Clause"
] | null | null | null | fred_inflation.ipynb | aarongilman/research | 39a2848c840e131d8e633bc3841fb43662a50ec3 | [
"BSD-2-Clause"
] | 1 | 2021-07-16T02:28:18.000Z | 2021-07-16T02:28:18.000Z | 195.12622 | 161,382 | 0.876512 | [
[
[
"<a href=\"https://colab.research.google.com/github/aarongilman/research/blob/master/fred_inflation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Analysis and Forecast of US Inflation\n\n**We examine inflation data: CPI and PCE, including the core versions, \nalong with the 10-year BEI rate (break-even inflation). \nAn unified inflation statistic *m4infl* is defined, \nand we make a univariate forecast using the Holt-Winters \nmodel with optimized robust parameters.\nThe stochastic characteristics of *m4infl* are\ndiscussed, including its geometric mean rate.**\n\nAppendix 2 presents function `foreinfl()` which concisely \nimplements the forecasting process derived in this notebook.",
"_____no_output_____"
]
],
[
[
"!pip install --pre fecon236",
"Collecting fecon236\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/95/66/d7039d3ab2d27fa1d63b40eb8fee1b2cb6f7bad167cab21425b18197e24d/fecon236-10.8.0-py2.py3-none-any.whl (94kB)\n\r\u001b[K |███▌ | 10kB 16.9MB/s eta 0:00:01\r\u001b[K |███████ | 20kB 1.5MB/s eta 0:00:01\r\u001b[K |██████████▍ | 30kB 1.9MB/s eta 0:00:01\r\u001b[K |█████████████▉ | 40kB 2.2MB/s eta 0:00:01\r\u001b[K |█████████████████▍ | 51kB 1.9MB/s eta 0:00:01\r\u001b[K |████████████████████▉ | 61kB 2.1MB/s eta 0:00:01\r\u001b[K |████████████████████████▎ | 71kB 2.3MB/s eta 0:00:01\r\u001b[K |███████████████████████████▊ | 81kB 2.6MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▎| 92kB 2.8MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 102kB 2.5MB/s \n\u001b[?25hInstalling collected packages: fecon236\nSuccessfully installed fecon236-10.8.0\n"
],
[
"from fecon236 import *",
"/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n"
],
[
"# PREAMBLE-p6.15.1223d :: Settings and system details\nfrom __future__ import absolute_import, print_function, division\nsystem.specs()\npwd = system.getpwd() # present working directory as variable.\nprint(\" :: $pwd:\", pwd)\n# If a module is modified, automatically reload it:\n%load_ext autoreload\n%autoreload 2\n# Use 0 to disable this feature.\n\n# Notebook DISPLAY options:\n# Represent pandas DataFrames as text; not HTML representation:\nimport pandas as pd\npd.set_option( 'display.notebook_repr_html', False )\nfrom IPython.display import HTML # useful for snippets\n# e.g. HTML('<iframe src=http://en.mobile.wikipedia.org/?useformat=mobile width=700 height=350></iframe>')\nfrom IPython.display import Image \n# e.g. Image(filename='holt-winters-equations.png', embed=True) # url= also works\nfrom IPython.display import YouTubeVideo\n# e.g. YouTubeVideo('1j_HxD4iLn8', start='43', width=600, height=400)\nfrom IPython.core import page\nget_ipython().set_hook('show_in_pager', page.as_hook(page.display_page), 0)\n# Or equivalently in config file: \"InteractiveShell.display_page = True\", \n# which will display results in secondary notebook pager frame in a cell.\n\n# Generate PLOTS inside notebook, \"inline\" generates static png:\n%matplotlib inline \n# \"notebook\" argument allows interactive zoom and resize.",
" !: Code for this project straddles python27 and python3.\n :: Python 3.6.9\n :: IPython None\n :: jupyter_core None\n :: notebook None\n :: matplotlib None\n :: numpy None\n :: scipy None\n :: statsmodels None\n :: sympy None\n :: pandas None\n :: pandas_datareader None\n :: fecon236 None\n :: Repository: git_repo_None tag_None branch_None\n :: Timestamp: 2020-09-16T15:55:58Z\n :: $pwd: /content\n"
]
],
[
[
"### Retrieving data\n\nEach economic time series has its own *\"fredcode\"* which is listed \nat the FRED site, see \n[yi_fred module](https://github.com/rsvp/fecon235/blob/master/lib/yi_fred.py) \nfor details.\n\nLet's create a dictionary `ts` of time series in pandas DataFrame \nformat from our codelists.",
"_____no_output_____"
]
],
[
[
"print(ml_infl)\nprint(ml_long)",
"['CPIAUCSL', 'CPILFESL', 'PCEPI', 'PCEPILFE']\n['GS10', 'FII10', 'm4spx']\n"
],
[
"# Download updated inflation and bond statistics:\nts = {}\nfor i in ml_infl + ml_long:\n ts[i] = get(i)",
" :: S&P 500 for last 10 years (1957-archive not found).\n"
],
[
"# Collection of historical inflation levels:\ntsinf = [ ts[i] for i in ml_infl ]\n\n# Then make a DataFrame consisting of those levels:\ninf_levels = paste( tsinf )\n# ... paste() does a side-by-side mash-up\n# of individual time series.",
"_____no_output_____"
],
[
"# Label the column names:\ninf_levels.columns = ['CPI', 'CPIc', 'PCE', 'PCEc']",
"_____no_output_____"
],
[
"# Compute year-over-year inflation rates, expressed in percent:\ninf = pcent(inf_levels, 12).dropna()\n# since data is monthly ^",
"_____no_output_____"
],
[
"boxplot( inf, 'Inflation' )",
"_____no_output_____"
]
],
[
[
"**The small appended \"c\" denotes *core* version of headline versions of inflation. \nThe red dot represents the most recent data point.**\n\nOur inflation rates go back to **1960-01-01**. \nCPI looks slightly more volatile than PCE versions.",
"_____no_output_____"
],
[
"### Re: Consumer Price Index and Personal Consumption Expenditures\n\n> \"Two different price indexes are popular for measuring inflation: the consumer price index (CPI) from the Bureau of Labor Statistics and the personal consumption expenditures price index (PCE) from the Bureau of Economic Analysis. [A]n accurate measure of inflation is important for both the U.S. federal government and the Federal Reserve's **Federal Open Market Committee** (FOMC), but they focus on different measures. For example, the federal government uses the CPI to make inflation adjustments to certain kinds of benefits, such as Social Security. In contrast, the **FOMC focuses on PCE inflation in its quarterly economic projections and also states its longer-run inflation goal in terms of headline PCE**. The FOMC focused on CPI inflation prior to 2000 but, after extensive analysis, changed to PCE inflation for three main reasons: The expenditure weights in the PCE can change as people substitute away from some goods and services toward others, the PCE includes more comprehensive coverage of goods and services, and historical PCE data can be revised (more than for seasonal factors only).\" --James Bullard, president of the Federal Reserve Bank of St. Louis. ",
"_____no_output_____"
]
],
[
[
"stats(inf)",
" CPI CPIc PCE PCEc\ncount 727.000000 727.000000 727.000000 727.000000\nmean 3.698789 3.687740 3.248277 3.205297\nstd 2.828294 2.559234 2.434197 2.149177\nmin -1.958761 0.602718 -1.237341 0.902110\n25% 1.758568 1.988231 1.603916 1.621224\n50% 2.962963 2.699725 2.459607 2.200058\n75% 4.480185 4.623574 4.134898 4.286866\nmax 14.592275 13.604488 11.593359 10.216285\n\n :: Index on min:\nCPI 2009-07-01\nCPIc 2010-10-01\nPCE 2009-07-01\nPCEc 2010-12-01\ndtype: datetime64[ns]\n\n :: Index on max:\nCPI 1980-03-01\nCPIc 1980-06-01\nPCE 1980-03-01\nPCEc 1975-02-01\ndtype: datetime64[ns]\n\n :: Head:\n CPI CPIc PCE PCEc\nT \n1960-01-01 1.240951 2.006689 1.692174 2.068512\n1960-02-01 1.413793 2.341137 1.703027 2.180406\n1960-03-01 1.518813 2.000000 1.689441 2.076496\n :: Tail:\n CPI CPIc PCE PCEc\nT \n2020-05-01 0.235532 1.236465 0.490339 0.952236\n2020-06-01 0.709470 1.194257 0.857541 1.067272\n2020-07-01 1.029338 1.566086 1.003253 1.254719\n\n :: Correlation matrix:\n CPI CPIc PCE PCEc\nCPI 1.000000 0.929110 0.982415 0.917317\nCPIc 0.929110 1.000000 0.924610 0.965742\nPCE 0.982415 0.924610 1.000000 0.955348\nPCEc 0.917317 0.965742 0.955348 1.000000\n"
]
],
[
[
"### Unified inflation\n\nThe numbers confirm the core version is less volatile than headline inflation. \nMoreover, headline versions are most correlated. \nNote how the dates of the minimum and maximum values do not coincide.\n\nSo what is the appropriate inflation rate among the contenders? \nWe shall take the *average of the contenders* to arrive at **unified inflation**.",
"_____no_output_____"
]
],
[
[
"# Compute unified inflation:\ninf_av = todf(( inf['CPI'] + inf['CPIc'] + inf['PCE'] + inf['PCEc'] ) / 4 )",
"_____no_output_____"
],
[
"stats( inf_av )",
" Y\ncount 727.000000\nmean 3.460026\nstd 2.441337\nmin -0.189304\n25% 1.734267\n50% 2.608131\n75% 4.378253\nmax 12.035391\n\n :: Index on min:\nY 2009-07-01\ndtype: datetime64[ns]\n\n :: Index on max:\nY 1980-03-01\ndtype: datetime64[ns]\n\n :: Head:\n Y\nT \n1960-01-01 1.752082\n1960-02-01 1.909591\n1960-03-01 1.821187\n :: Tail:\n Y\nT \n2020-05-01 0.728643\n2020-06-01 0.957135\n2020-07-01 1.213349\n\n :: Correlation matrix:\n Y\nY 1.0\n"
]
],
[
[
"Speaking of unified inflation rates, we can now say \nthat the maximum occurred in March 1980 at 12%, \nand that minimum occurred in July 2009 during the \nGreat Recession at -0.16% (slight deflation).",
"_____no_output_____"
]
],
[
[
"# The shortest of our time series under consideration, m4tips10, \n# starts at 2003-01-01, so let\nstart = '2003-01-01'",
"_____no_output_____"
],
[
"# Plot unified inflation, given start:\nplot( inf_av[start:] )",
"_____no_output_____"
]
],
[
[
"We can see the dramatic drop in the unified inflation rate \nduring the Great Recession.",
"_____no_output_____"
],
[
"## Break-even inflation \n\nBEI is computed using only data from the bond market. \nThe key BEI uses 10-year US government bonds, \ntaking the difference in rates between the \nusual on-the-run bond and the TIPS issue. \nTIPS is an abbreviation for \"Treasury Inflation Protected Security.\"\n\nTIPS allow us to observe *real* interest rates\nbeing traded in the market. \n(Notebook https://git.io/gold makes a conjecture \nthat real gold prices is a stationary time-series \nbound by real interest rates.) ",
"_____no_output_____"
]
],
[
[
"bei = todf( ts[m4bond10] - ts[m4tips10] )",
"_____no_output_____"
],
[
"# Plot Break-even inflation rate\nplot( bei[start:] )",
"_____no_output_____"
],
[
"tail(bei)",
"_____no_output_____"
]
],
[
[
"Studies comparing forward-looking BEI and realized inflation generally show \n*BEI overestimates realized inflation*. \n\nWhat is the correlation between BEI and average inflation? \nNot much. BEI seems to lead unified inflation, cf. circa 2009. \nAlso, BEI appears more stable relative to unified inflation.\n\nIt is worth emphasizing that \n**unified inflation is a \"rear view\"** while ***BEI is \"forward looking.\"*** \nReal money bets are made on the spread of the latter. \n\n#### Present: unified inflation and BEI\n\nLet us mix the rear and forward views equally at arrive at a **\"present view\"**, \n`bei_inf_av`.",
"_____no_output_____"
]
],
[
[
"bei_inf_av = todf((bei + inf_av) / 2) ",
"_____no_output_____"
],
[
"# Plot present view:\nplot( bei_inf_av[start:] )",
"_____no_output_____"
],
[
"# Probably the best accessment of current inflation:\ntail(bei_inf_av)",
"_____no_output_____"
]
],
[
[
"Our present inflation measure consisting of 1 part each of \nCPI, CPI core, PCE, PCE core -- plus 4 parts of BEI. \nIt does have strong correlation with the Fed's favored inflation measure, \nsee Appendix 1.",
"_____no_output_____"
],
[
"## Unified inflation *level*\n\nTo readily access our findings above in other studies, \nwe developed the synthetic pre-fabricated **m4infl** series. \nEach measure of inflation *levels* is first rescaled such that \nthe most recent point is equal to 1. \nThis eliminates the base year problem and the \narbitrarily set level of 100 somewhere in time. \nThen we can take the average among inflation levels which \nwill be by mathematical construction, *equally weighted*. \nThe levels originate from fredcodes: \n`['CPIAUCSL', 'CPILFESL', 'PCEPI', 'PCEPILFE']`.\n\nAs a very convenient by-product, the recipricol of m4infl \nyields *multiplicative factors useful for deflating prices*, see **m4defl**.\n\n(API note: The average between backward and forward-looking inflation *rates* \nis codified as **m4inflbei**.)",
"_____no_output_____"
]
],
[
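[
"# Hedged sketch (added for illustration; this is not the fecon236 implementation of m4infl):\n# the unified level described above can be approximated by rescaling each inflation level\n# series so that its most recent point equals 1, then averaging the rescaled columns equally.\nrescaled = inf_levels / inf_levels.iloc[-1]\nunified_sketch = todf(rescaled.mean(axis=1))\ntail(unified_sketch)",
"_____no_output_____"
],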
[
"infl = get(m4infl)\n\n# m4infl represents unified inflation level, synthesized\n# from four different US government time-series.",
"_____no_output_____"
],
[
"# Latest levels of unified inflation:\ntail(infl)",
"_____no_output_____"
],
[
"# From the level, we compute annual rates:\ninflrate = pcent(infl, 12)",
"_____no_output_____"
],
[
"# VERIFY if our two methods agree, by linear regression:\nstat2( inf_av['Y'], inflrate['Y'] )\n# (Think of this as an unit test, visible in a notebook.)\n# 'Y' is the default column label.",
" :: FIRST variable:\ncount 727.000000\nmean 3.460026\nstd 2.441337\nmin -0.189304\n25% 1.734267\n50% 2.608131\n75% 4.378253\nmax 12.035391\nName: Y, dtype: float64\n\n :: SECOND variable:\ncount 727.000000\nmean 3.442063\nstd 2.418909\nmin -0.217823\n25% 1.734721\n50% 2.598858\n75% 4.363663\nmax 11.889298\nName: Y, dtype: float64\n\n :: CORRELATION\n0.9999558152481136\n OLS Regression Results \n==============================================================================\nDep. Variable: Y R-squared: 1.000\nModel: OLS Adj. R-squared: 1.000\nMethod: Least Squares F-statistic: 8.204e+06\nDate: Wed, 16 Sep 2020 Prob (F-statistic): 0.00\nTime: 15:56:05 Log-Likelihood: 1713.0\nNo. Observations: 727 AIC: -3422.\nDf Residuals: 725 BIC: -3413.\nDf Model: 1 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nIntercept -0.0138 0.001 -9.310 0.000 -0.017 -0.011\nX 1.0092 0.000 2864.200 0.000 1.009 1.010\n==============================================================================\nOmnibus: 61.025 Durbin-Watson: 0.149\nProb(Omnibus): 0.000 Jarque-Bera (JB): 308.102\nSkew: 0.108 Prob(JB): 1.25e-67\nKurtosis: 6.182 Cond. No. 7.60\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n"
]
],
[
[
"The result shows perfect correlation since 1960 (full dataset), \nthus our synthetic series **m4infl** works as intended.",
"_____no_output_____"
],
[
"### Geometric mean rate of unified inflation\n\nThe level form of unified inflation allows us to \ndirectly use the tools developed for financial assets.",
"_____no_output_____"
]
],
[
[
"gemrat(infl[start:], yearly=12)",
"_____no_output_____"
]
],
[
[
"The geometric mean rate of unified inflation is 1.83% \nsince `start`, the volatility is 0.53%, \nthus a 50 basis point move in a year would not be surprising. \nThe most interesting statistic reveals that *inflation is leptokurtotic*.",
"_____no_output_____"
],
[
"## Holt-Winters forecast for inflation\n\nFor predicting inflation, it is preferable to use levels, rather than rates, \nas primary form of time-series due to linearity considerations \n(see plot in the next cell).\n\nForecasting will be covered in detail in another notebook. \nBut here we demonstrate the **Holt-Winters model**.",
"_____no_output_____"
]
],
[
[
"# Visualize unified inflation levels:\nplot(infl[start:])",
"_____no_output_____"
],
[
"Image(url='https://github.com/rsvp/fecon235/raw/master/nb/holt-winters-equations.png', embed=True)",
"_____no_output_____"
]
],
[
[
"We can safely ignore the seasonal portion of the model \nsince the relevant data has been deseasonalized at the upstream source.\n\nTwo important parameters, $\\alpha$ and $\\beta$, must estimated. \nWe developed a robust L1 estimation technique which \nminimizes one-step ahead forecast errors, \nconditional on specific data. See \n[ys_opt_holt module](https://github.com/rsvp/fecon235/blob/master/lib/ys_opt_holt.py) \nfor details.",
"_____no_output_____"
]
],
[
[
"# This optimization procedure may be computationally intense, \n# depending on data size and the number of grids.\nfrom fecon236.tsa.holtwinters import optimize_holt\nab = optimize_holt( infl, grids=50 )\n\nab",
" !. WARNING: ipykernel_launcher.py Optimizing Holt-Winters alphabetaloss may take TIME!\n"
]
],
[
[
"The first two elements in the `ab` list are $\\alpha$ and $\\beta$ respectively. \nThe third element gives the median absolute loss as percent for an \none-step ahead forecast given those parameters. \nThe fourth element is the median absolute loss \n(due to our L1 loss function for robust optimization).\n\nThe median absolute loss is 0.00031 \nwhich gives us confidence that the model has performed well.\n\nUsing the estimated parameters, we can now proceed to \ngenerate our 12-step ahead forecast.",
"_____no_output_____"
]
],
[
[
"from fecon236.tsa.holtwinters import holtforecast, holt\n\nholtdf = holt( infl, alpha=ab[0], beta=ab[1] )\nholtforecast( holtdf, h=12 )",
"_____no_output_____"
]
],
[
[
"Since the most current level of unified inflation \nis 1.00 by construction, it is easy to discern that: \n***the 12-months ahead forecast implies an annualized inflation rate of 2.78%.***",
"_____no_output_____"
],
[
"## Forecasting summary\n\nForecasting is an art. \nA pearl of wisdom is to combine orthogonal methods \nto arrive at excellence.\n\nHere are three non-related ways which characterized inflation:\n\n- BEI, Break-even Inflation: a long-term forecast implied by the bond market\n- Geometric mean rate: internal growth rate of a stochastic trend\n- Holt-Winters model: projection obtained by robust prediction error minimization\n\nThe respective results are simply averaged...",
"_____no_output_____"
]
],
[
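[
"# Worked arithmetic for the 2.78% figure quoted above (added illustration). The level value\n# below is implied by that quoted rate, and is shown only to make the level-to-rate\n# conversion explicit: with the latest unified level rescaled to 1.00, a 12-step-ahead\n# level forecast of about 1.0278 corresponds to an annualized rate of about 2.78%.\nforecast_level_12m = 1.0278\nprint((forecast_level_12m - 1) * 100)",
"_____no_output_____"
],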
[
"( 1.33 + 1.83 + 2.78 ) / 3",
"_____no_output_____"
]
],
[
[
"There are many sorts of inflation measures, \nbut we devised a mathematically unified version. \nThe bond market has its own version, \nso for the **one-year ahead forecast**, \nall in all, we shall settle for:\n**1.98%**.",
"_____no_output_____"
],
[
"---\n\n## Appendix 1: Overall Correlations",
"_____no_output_____"
]
],
[
[
"# Conversions to dataframe with label before paste:\ndfa = todf( inf_av, 'Iav')\ndfb = todf( bei, 'BEI' )\ndfc = todf( bei_inf_av, 'BEI_Iav')",
"_____no_output_____"
],
[
"# Mother of all annualized inflation rates:\ninfall = paste( [inf, dfa, dfb, dfc] )",
"_____no_output_____"
],
[
"# CORRELATION matrix going back to 2003-01-01 (start of BEI):\ncormatrix(infall)",
"_____no_output_____"
]
],
[
[
"CPI and PCE are tightly correlated, as are CPIc and PCEc. \nSurprisingly, CPI and CPIc are only moderately correlated, only 40%. \nRecall that headline inflation includes food and energy, \nwhereas core inflation excludes those components.\n\nThe market traded BEI break-even inflation is modestly correlated \nwith government-released statistics of inflation.",
"_____no_output_____"
],
[
"#### Most recent computed data\n\nGiven in annualized percentage form:",
"_____no_output_____"
]
],
[
[
"# Inflation RATES\ntail(infall, n=12)",
"_____no_output_____"
]
],
[
[
"## Appendix 2: Forecasting process as a function\n\nWe can think of this notebook as the documentation \nwhich encapsulates its findings and process \ninto a single function: `foreinfl()`.",
"_____no_output_____"
]
],
[
[
"# Query into its details:\nforeinfl??",
"_____no_output_____"
],
[
"# See it in action:\nforeinfl()",
"_____no_output_____"
]
],
[
[
"There are slight variations from the notebook derivation.\n\n- Instead of `start` we use a rolling window of `n` monthly datapoints.\n- By default, `n` is set to 120, i.e. the last ten years.\n- Given new data, we do not re-optimize the Holt-Winters parameters, since that would be computationally expensive. The default values for alpha and beta are battle-tested using data dating back to 1960.\n- Bond market data does not suffer from release lag, unlike inflation statistics, so the most *current* BEI is used in the `foreinfl()` function.\n\nTo obtain an inflation forecast in practice, \nexecuting a single Python function is far more convenient \nthan re-running a Jupyter notebook. \nThe list output also gives a summary from the orthogonal methods \nwhich were utilized.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
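"code",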
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
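"code",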
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecf5cba272c70475f5910ee6746513bb58e16e87 | 28,634 | ipynb | Jupyter Notebook | My_Rnn_Model.ipynb | jinkyukim-me/Back_to_Basic | e7d8878055c46aa06f3cc57854dccc28a5d5542b | [
"MIT"
] | null | null | null | My_Rnn_Model.ipynb | jinkyukim-me/Back_to_Basic | e7d8878055c46aa06f3cc57854dccc28a5d5542b | [
"MIT"
] | null | null | null | My_Rnn_Model.ipynb | jinkyukim-me/Back_to_Basic | e7d8878055c46aa06f3cc57854dccc28a5d5542b | [
"MIT"
] | null | null | null | 70.181373 | 11,392 | 0.718167 | [
[
[
"# Packages\nimport keras\nimport numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, Dropout\nfrom sklearn.preprocessing import MinMaxScaler\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport random\n\ndef create_dataset(signal_data, look_back=1):\n dataX, dataY = [], []\n for i in range(len(signal_data)-look_back):\n dataX.append(signal_data[i:(i+look_back), 0])\n dataY.append(signal_data[i + look_back, 0])\n return np.array(dataX), np.array(dataY)\n\nclass CustomHistory(keras.callbacks.Callback):\n def init(self):\n self.train_loss = []\n self.test_loss = []\n \n def on_epoch_end(self, batch, logs={}):\n self.train_loss.append(logs.get('loss'))\n self.test_loss.append(logs.get('test_loss'))\n\nlook_back = 10",
"Using TensorFlow backend.\n"
],
[
"# Making Dataset\nsignal_data = []\nfor i in range(365):\n random_5 = random.randint(1,5)\n signal_data.append(random_5)\nsignal_data=np.array(signal_data)\nsignal_data=signal_data[:,None]",
"_____no_output_____"
],
[
"# Preprocessing\nscaler = MinMaxScaler(feature_range=(1, 5))\nsignal_data = scaler.fit_transform(signal_data)",
"_____no_output_____"
],
[
"# Train & Test\ntrain = signal_data[0:290]\ntest = signal_data[290:]",
"_____no_output_____"
],
[
"# Dataset\nx_train, y_train = create_dataset(train, look_back)\nx_test, y_test = create_dataset(test, look_back)",
"_____no_output_____"
],
[
"# Preprocessing\nx_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))\nx_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))",
"_____no_output_____"
],
[
"# LSTM Model\nmodel = Sequential()\nfor i in range(2):\n model.add(LSTM(32, batch_input_shape=(1, look_back, 1), stateful=True, return_sequences=True))\n model.add(Dropout(0.5))\nmodel.add(LSTM(32, batch_input_shape=(1, look_back, 1), stateful=True))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(1))",
"_____no_output_____"
],
[
"# model\nmodel.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])",
"_____no_output_____"
],
[
"# learning\ncustom_hist = CustomHistory()\ncustom_hist.init()\n\nfor i in range(30):\n model.fit(x_train, y_train, epochs=1, batch_size=1, shuffle=False, callbacks=[custom_hist])\n model.reset_states()",
"Epoch 1/1\n280/280 [==============================] - 5s 17ms/step - loss: 3.5341 - acc: 0.1786\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.5313 - acc: 0.1786: 0s - loss: 2\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.5416 - acc: 0.1857\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.4277 - acc: 0.2071\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.3464 - acc: 0.2000\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.4930 - acc: 0.2000\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.3111 - acc: 0.2429\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.5662 - acc: 0.1607\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.4426 - acc: 0.1857\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.1795 - acc: 0.2214\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.4759 - acc: 0.1714\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.4908 - acc: 0.1893\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.2603 - acc: 0.2179\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.4604 - acc: 0.1786\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.5064 - acc: 0.2000\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.4650 - acc: 0.2571\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.3938 - acc: 0.2036\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.3344 - acc: 0.2179\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.2978 - acc: 0.2393\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.4736 - acc: 0.2071\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.3854 - acc: 0.1821\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.2216 - acc: 0.1893\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.2833 - acc: 0.2143\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.3542 - acc: 0.1929\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.3557 - acc: 0.2071\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.2816 - acc: 0.2071\nEpoch 1/1\n280/280 [==============================] - 4s 13ms/step - loss: 2.3005 - acc: 0.2286\nEpoch 1/1\n280/280 [==============================] - 7s 25ms/step - loss: 2.2977 - acc: 0.2286\nEpoch 1/1\n280/280 [==============================] - 6s 21ms/step - loss: 2.3929 - acc: 0.2179\nEpoch 1/1\n280/280 [==============================] - 5s 18ms/step - loss: 2.2821 - acc: 0.2464\n"
],
[
"plt.plot(custom_hist.train_loss)\nplt.plot(custom_hist.test_loss)\nplt.ylim(0.0, 5.0)\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()",
"_____no_output_____"
],
[
"trainScore = model.evaluate(x_train, y_train)\nmodel.reset_states()\nprint('Train Score: ', trainScore)\ntestScore = model.evaluate(x_test, y_test)\nmodel.reset_states()\nprint('Test Score: ', testScore)",
"_____no_output_____"
],
[
"look_ahead = 10\nxhat = x_test[0]\npredictions = np.zeros((look_ahead,1))\nfor i in range(look_ahead):\n prediction = model.predict(np.array([xhat]), batch_size=1)\n predictions[i] = prediction\n xhat = np.vstack([xhat[1:],prediction])",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,5))\nplt.plot(np.arange(look_ahead),predictions,'r',label=\"prediction\")\nplt.plot(np.arange(look_ahead),y_test[:look_ahead],label=\"test function\")\nplt.legend()\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf5dd4b572dd5d5013d374cb23d3fb319ce7ea7 | 48,306 | ipynb | Jupyter Notebook | notebooks/validation.ipynb | tuguldurs/lacuna | 6e81bbd898f852f9af696b3c44e10a97f84d2736 | [
"MIT"
] | null | null | null | notebooks/validation.ipynb | tuguldurs/lacuna | 6e81bbd898f852f9af696b3c44e10a97f84d2736 | [
"MIT"
] | null | null | null | notebooks/validation.ipynb | tuguldurs/lacuna | 6e81bbd898f852f9af696b3c44e10a97f84d2736 | [
"MIT"
] | null | null | null | 111.304147 | 22,580 | 0.852089 | [
[
[
"## Validation demo\n\nauthors: Tuguldur Sukhbold\n\nOur training set has 366 fields, where 300 are used in training and the rest for validation. In this notebook we perform validations on the last 66 fields using trained weights.\n\nFirst we load the trained weights:",
"_____no_output_____"
]
],
[
[
"datapath = '../../d/'\nPATH2weight = '../../weights/circle_ep15_small.pth'\n\nimport torch\nimport torchvision\nfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictor\nfrom torchvision.models.detection.mask_rcnn import MaskRCNNPredictor\nfrom torchvision.transforms import functional as F\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\ndef getInstanceSegmentationModel(num_classes):\n \"\"\"\n the exact instance segmentation model used for training\n note: do not change anything here, or else weights will not properly load\n \"\"\"\n\n # pre-trained model on COCO\n model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)\n\n # number of input features for the classifier\n in_features = model.roi_heads.box_predictor.cls_score.in_features\n # replace the pre-trained head\n model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)\n\n # number of input features for the mask classifier\n in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels\n hidden_layer = 256\n # replace the mask predictor\n model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,\n hidden_layer,\n num_classes)\n\n return model\n\n\ndef loadTrainedModel(PATH2weights):\n \"\"\"\n load the weight from given path\n \"\"\"\n\n num_classes = 2\n model = getInstanceSegmentationModel(num_classes)\n model.load_state_dict(torch.load(PATH2weights, map_location=device))\n\n return model\n\n\nprint(device)",
"cpu\n"
],
[
"def getPlanet(fieldID):\n j17 = plt.imread(f'{datapath}planet-jun17/{fieldID}.png')\n j18 = plt.imread(f'{datapath}planet-jun18/{fieldID}.png')\n d17 = plt.imread(f'{datapath}planet-dec17/{fieldID}.png')\n d18 = plt.imread(f'{datapath}planet-dec18/{fieldID}.png')\n return j17, j18, d17, d18\n\ndef getHighContrast(j17, j18, d17, d18):\n summer = j17 + j18\n summer = summer / np.amax(summer)\n winter = d17 + d18\n winter = winter / np.amax(winter)\n diff = winter * summer\n return diff\n",
"_____no_output_____"
],
[
"def getGrayNNmask(model):\n \"\"\"\n evaluation based on loaded weight\n \"\"\"\n\n # convert image to tensor\n img = Image.open('tmp.png').convert(\"RGB\")\n imgT = F.to_tensor(img)\n\n # evaluate\n model.eval()\n with torch.no_grad(): prediction = model([imgT.to(device)])\n\n # convert result back to gray image\n img = Image.fromarray(prediction[0]['masks'][0, 0].mul(255).byte().cpu().numpy())\n img = img.convert('L')\n\n return np.array(img)\n\n\ndef makeEnhancedImage(fieldID):\n j17, j18, d17, d18 = getPlanet(fieldID)\n img = getHighContrast(j17, j18, d17, d18)\n img = (img * 255).astype(np.uint8)\n img = Image.fromarray(img)\n img.save('tmp.png')\n",
"_____no_output_____"
],
[
"def getDisplacement1(mask):\n \"\"\"\n based on dist to mask center\n \"\"\"\n \n # just avg point for center for simple shapes\n xindx = np.where(mask > 0)[-1]\n yindx = np.where(mask > 0)[0]\n xmin, xmax = min(xindx), max(xindx)\n ymin, ymax = min(yindx), max(yindx)\n x1 = xmin + (xmax - xmin) / 2.\n y1 = ymin + (ymax - ymin) / 2.\n \n # displacement\n cX = 10.986328125 / 2\n cY = 10.985731758 / 2\n x0, y0 = mask.shape[1]//2, mask.shape[0]//2\n x = (x0-np.round(x1)) / mask.shape[1] *cX\n y = (y1-np.round(y0)) / mask.shape[0] *cY\n \n return x, y\n\ndef getDisplacement2(mask):\n \"\"\"\n based on dist to closest vertex of bbox\n \"\"\"\n \n # image center\n x0, y0 = mask.shape[1]//2, mask.shape[0]//2\n\n # bbox vertices\n xindx = np.where(mask > 0)[-1]\n yindx = np.where(mask > 0)[0]\n xmin, xmax = min(xindx), max(xindx)\n ymin, ymax = min(yindx), max(yindx)\n\n # vertex with min distance\n xs = np.array([xmin, xmin, xmax, xmax])\n ys = np.array([ymax, ymin, ymin, ymax])\n ds = np.sqrt((xs-x0)**2 + (ys-y0)**2)\n imin = np.where(ds == min(ds))[0][0]\n x1, y1 = xs[imin], ys[imin]\n \n # displacement\n cX = 10.986328125 / 2\n cY = 10.985731758 / 2\n x = (x0-np.round(x1)) / mask.shape[1] *cX\n y = (y1-np.round(y0)) / mask.shape[0] *cY\n \n return x, y",
"_____no_output_____"
],
[
"model = loadTrainedModel(PATH2weight)\ntrain = pd.read_csv(f'{datapath}train-unique.csv')\n\nxPred, yPred = [], []\nxChk, yChk = [], []\nfor index, row in train.iterrows():\n if index > 299:\n fieldID = row.ID.split('_')[-1]\n makeEnhancedImage(fieldID)\n mask = getGrayNNmask(model)\n x, y = getDisplacement2(mask)\n xPred.append(x)\n yPred.append(y)\n xChk.append(row['x'])\n yChk.append(row['y'])\n print(index)\n\nxPred = np.array(xPred)\nyPred = np.array(yPred)\nxChk = np.array(xChk)\nyChk = np.array(yChk)",
"300\n301\n302\n303\n304\n305\n306\n307\n308\n309\n310\n311\n312\n313\n314\n315\n316\n317\n318\n319\n320\n321\n322\n323\n324\n325\n326\n327\n328\n329\n330\n331\n332\n333\n334\n335\n336\n337\n338\n339\n340\n341\n342\n343\n344\n345\n346\n347\n348\n349\n350\n351\n352\n353\n354\n355\n356\n357\n358\n359\n360\n361\n362\n363\n364\n365\n"
],
[
"fig, ax = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(10,5))\nax[0].scatter(xChk, xPred)\nax[0].set_title('x-prediction vs x-validation')\nax[0].set_xlim(min(xChk), max(xChk))\nax[0].plot(xChk, xChk, c='black')\n\nax[1].scatter(yChk, yPred)\nax[1].set_title('y-prediction vs y-validation')\nax[1].set_xlim(min(yChk), max(yChk))\nax[1].plot(yChk, yChk, c='black')\nplt.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(ncols=2, sharex=True, sharey=True)\nax[0].scatter(np.arange(len(xChk)), abs(xChk - xPred))\nax[0].set_title('x-prediction vs x-validation')\nax[1].scatter(np.arange(len(xChk)), abs(yChk - yPred))\nax[1].set_title('y-prediction vs y-validation')\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_absolute_error\n\ntst = np.column_stack((xChk, yChk))\npred = np.column_stack((xPred, yPred))\n\nmean_absolute_error(tst, pred)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf5e2d576a07470c7dff1613c20804f7d0a9ad2 | 99,600 | ipynb | Jupyter Notebook | PyCitySchools/.ipynb_checkpoints/PyCitySchools_starter-checkpoint.ipynb | BrynnaBridges/pandas-challenge | b549a85040c283383a934e545688ebf2a6ed1f7d | [
"ADSL"
] | null | null | null | PyCitySchools/.ipynb_checkpoints/PyCitySchools_starter-checkpoint.ipynb | BrynnaBridges/pandas-challenge | b549a85040c283383a934e545688ebf2a6ed1f7d | [
"ADSL"
] | null | null | null | PyCitySchools/.ipynb_checkpoints/PyCitySchools_starter-checkpoint.ipynb | BrynnaBridges/pandas-challenge | b549a85040c283383a934e545688ebf2a6ed1f7d | [
"ADSL"
] | null | null | null | 35.66058 | 202 | 0.399448 | [
[
[
"### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport pandas as pd\nimport numpy as np\n\n# File to Load\nschool_data_csv = \"Resources/schools_complete.csv\"\nstudent_data_csv = \"Resources/students_complete.csv\"\n\n# Read School and Student Data File and store into Pandas Data Frames\nschool_data = pd.read_csv(school_data_csv)\nstudent_data = pd.read_csv(student_data_csv)\n\n# Combine the data into a single dataset\ncombined_data = pd.merge(student_data, school_data, how=\"left\", on=[\"school_name\", \"school_name\"])\n#combined_data.head() ",
"_____no_output_____"
]
],
[
[
"## District Summary\n\n* Calculate the total number of schools\n\n* Calculate the total number of students\n\n* Calculate the total budget\n\n* Calculate the average math score \n\n* Calculate the average reading score\n\n* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2\n\n* Calculate the percentage of students with a passing math score (70 or greater)\n\n* Calculate the percentage of students with a passing reading score (70 or greater)\n\n* Create a dataframe to hold the above results\n\n* Optional: give the displayed data cleaner formatting",
"_____no_output_____"
]
],
[
[
"#Find total list of schools\ntotal_schools = combined_data[\"School ID\"].nunique()\n#print(total_schools)\n\n#Find the total number of students\ntotal_students = combined_data[\"Student ID\"].count()\n#print(total_students)\n\n#Find the total budget for all schools\ntotal = combined_data[\"budget\"].astype(int)\ntotal_budget = combined_data[\"budget\"].unique()\n#print(total_budget)\nbudget = sum(total_budget)\n#print(budget)\n\n#Average math score for all students\naverage_math = combined_data[\"math_score\"].mean()\n#print(average_math)\n\n#Average reading score for all students\naverage_reading = combined_data[\"reading_score\"].mean()\n#print(average_reading)\n\n#Overall passing\noverall_passing = (average_math + average_reading)/2\n#print(overall_passing)\n\n#Find the % of passing students for math\nmath = combined_data[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\npercent_passing_math = (math_passing/total_students)*100\n#print(math_passing)\n#print(percent_passing_math)\n\n\n#Find % passing for reading\nread = combined_data[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\n#print(reading_passing)\npercent_passing_reading = (reading_passing/total_students)*100\n#print(percent_passing_reading)",
"_____no_output_____"
],
[
"district_summary = pd.DataFrame({\"Total Schools\": [total_schools],\n \"Total Students\": [total_students],\n \"Total Budget\": [budget],\n \"Avg Math Score\": [average_math],\n \"Avg Reading Score\": [average_reading],\n \"% Passing Math\": [percent_passing_math],\n \"% Passing Reading\": [percent_passing_reading],\n \"Overall Passing\": [overall_passing]\n })\ndistrict_summary = district_summary[[\"Total Schools\",\n \"Total Students\",\n \"Total Budget\",\n \"Avg Math Score\",\n \"Avg Reading Score\",\n \"% Passing Math\",\n \"% Passing Reading\",\n \"Overall Passing\"]]\ndistrict_summary = district_summary.round(2)\n\ndistrict_summary",
"_____no_output_____"
],
[
"#district_summary[\"% Passing Math\"] = district_summary[\"% Passing Math\"].map(\"{0:,.0f}%\".format)\n#district_summary[\"% Passing Reading\"] = district_summary[\"% Passing Reading\"].map(\"{0:,.0f}%\".format)\n#district_summary[\"Total Budget\"] = district_summary[\"Total Budget\"].map(\"${0:,.0f}\".format)\n#district_summary",
"_____no_output_____"
]
],
[
[
"## School Summary",
"_____no_output_____"
],
[
"* Create an overview table that summarizes key metrics about each school, including:\n * School Name\n * School Type\n * Total Students\n * Total School Budget\n * Per Student Budget\n * Average Math Score\n * Average Reading Score\n * % Passing Math\n * % Passing Reading\n * Overall Passing Rate (Average of the above two)\n \n* Create a dataframe to hold the above results",
"_____no_output_____"
]
],
[
[
"#school_names = combined_data.groupby(\"school_name\")\n#school_names.head()\n#school_names = school_names.mean()\n#school_summary = school_names[[\"size\", \"math_score\", \"reading_score\"]]\n#print(school_summary)\n\n#Find the % of passing students for math\nmath = combined_data[\"math_score\"].astype(int)\ntotal_students = combined_data[\"size\"].astype(int)\ntotal_student = total_students.mean()\nmath_passing = sum(i >= 70 for i in math)\npercent_passing_math = (math_passing/total_student)*100\n\n\n#Find % passing for reading\nread = combined_data[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\npercent_passing_reading = (reading_passing/total_student)*100\n\n#Find overall passing\noverall_passing = (percent_passing_reading + percent_passing_math)/2\n\n#Find budget for each school and each student\nBudget = combined_data[\"budget\"].astype(int)\nbudget_per_student = Budget/total_student\n\ntypes = school_data[\"type\"]\nschools1 = combined_data[\"school_name\"]\nmath1 = combined_data[\"math_score\"]\nread1 = combined_data[\"reading_score\"]\nsize = combined_data[\"size\"]\nschool = school_data[\"school_name\"]",
"_____no_output_____"
],
[
"schools = combined_data[\"school_name\"].unique()\ncolumns = [\"Student ID\", \"student_name\", \"gender\", \"grade\", \"school_name\", \"reading_score\", \"math_score\", \"School ID\", \"type\", \"size\", \"budget\"]\n",
"_____no_output_____"
],
[
"Huang_df = combined_data.loc[combined_data[\"school_name\"] == \"Huang High School\", columns]\nHuang_df\nmath = Huang_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Huang_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Huang_df[\"Student ID\"].count()\nhuang_percent_math = (math_passing/students)*100\nhuang_percent_reading = (reading_passing/students)*100\nhuang_overall_passing = (huang_percent_math + huang_percent_reading)/2\n\nFigueroa_df = combined_data.loc[combined_data[\"school_name\"] == \"Figueroa High School\", columns]\nFigueroa_df\nmath = Figueroa_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Figueroa_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Figueroa_df[\"Student ID\"].count()\nFigueroa_percent_math = (math_passing/students)*100\nFigueroa_percent_reading = (reading_passing/students)*100\nFigueroa_overall_passing = (Figueroa_percent_math + Figueroa_percent_reading)/2\n\nShelton_df = combined_data.loc[combined_data[\"school_name\"] == \"Shelton High School\", columns]\nShelton_df\nmath = Shelton_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Shelton_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Shelton_df[\"Student ID\"].count()\nShelton_percent_math = (math_passing/students)*100\nShelton_percent_reading = (reading_passing/students)*100\nShelton_overall_passing = (Shelton_percent_math + Shelton_percent_reading)/2\n\nHernandez_df = combined_data.loc[combined_data[\"school_name\"] == \"Hernandez High School\", columns]\nHernandez_df\nmath = Hernandez_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Hernandez_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Hernandez_df[\"Student ID\"].count()\nHernandez_percent_math = (math_passing/students)*100\nHernandez_percent_reading = (reading_passing/students)*100\nHernandez_overall_passing = (Hernandez_percent_math + Hernandez_percent_reading)/2\n\nGriffin_df = combined_data.loc[combined_data[\"school_name\"] == \"Griffin High School\", columns]\nGriffin_df\nmath = Griffin_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Griffin_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Griffin_df[\"Student ID\"].count()\nGriffin_percent_math = (math_passing/students)*100\nGriffin_percent_reading = (reading_passing/students)*100\nGriffin_overall_passing = (Griffin_percent_math + Griffin_percent_reading)/2\n\nWilson_df = combined_data.loc[combined_data[\"school_name\"] == \"Wilson High School\", columns]\nWilson_df\nmath = Wilson_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Wilson_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Wilson_df[\"Student ID\"].count()\nWilson_percent_math = (math_passing/students)*100\nWilson_percent_reading = (reading_passing/students)*100\nWilson_overall_passing = (Wilson_percent_math + Wilson_percent_reading)/2\n\nCabrera_df = combined_data.loc[combined_data[\"school_name\"] == \"Cabrera High School\", columns]\nCabrera_df\nmath = Cabrera_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Cabrera_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Cabrera_df[\"Student 
ID\"].count()\nCabrera_percent_math = (math_passing/students)*100\nCabrera_percent_reading = (reading_passing/students)*100\nCabrera_overall_passing = (Cabrera_percent_math + Cabrera_percent_reading)/2\n\nBailey_df = combined_data.loc[combined_data[\"school_name\"] == \"Bailey High School\", columns]\nBailey_df\nmath = Bailey_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Bailey_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Bailey_df[\"Student ID\"].count()\nBailey_percent_math = (math_passing/students)*100\nBailey_percent_reading = (reading_passing/students)*100\nBailey_overall_passing = (Bailey_percent_math + Bailey_percent_reading)/2\n\nHolden_df = combined_data.loc[combined_data[\"school_name\"] == \"Holden High School\", columns]\nHolden_df\nmath = Holden_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Holden_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Holden_df[\"Student ID\"].count()\nHolden_percent_math = (math_passing/students)*100\nHolden_percent_reading = (reading_passing/students)*100\nHolden_overall_passing = (Holden_percent_math + Holden_percent_reading)/2\n\nPena_df = combined_data.loc[combined_data[\"school_name\"] == \"Pena High School\", columns]\nPena_df\nmath = Pena_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Pena_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Pena_df[\"Student ID\"].count()\nPena_percent_math = (math_passing/students)*100\nPena_percent_reading = (reading_passing/students)*100\nPena_overall_passing = (Pena_percent_math + Pena_percent_reading)/2\n\nWright_df = combined_data.loc[combined_data[\"school_name\"] == \"Wright High School\", columns]\nWright_df\nmath = Wright_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Wright_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Wright_df[\"Student ID\"].count()\nWright_percent_math = (math_passing/students)*100\nWright_percent_reading = (reading_passing/students)*100\nWright_overall_passing = (Wright_percent_math + Wright_percent_reading)/2\n\nRodriguez_df = combined_data.loc[combined_data[\"school_name\"] == \"Rodriguez High School\", columns]\nRodriguez_df\nmath = Rodriguez_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Rodriguez_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Rodriguez_df[\"Student ID\"].count()\nRodriguez_percent_math = (math_passing/students)*100\nRodriguez_percent_reading = (reading_passing/students)*100\nRodriguez_overall_passing = (Rodriguez_percent_math + Rodriguez_percent_reading)/2\n\nJohnson_df = combined_data.loc[combined_data[\"school_name\"] == \"Johnson High School\", columns]\nJohnson_df\nmath = Johnson_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Johnson_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Johnson_df[\"Student ID\"].count()\nJohnson_percent_math = (math_passing/students)*100\nJohnson_percent_reading = (reading_passing/students)*100\nJohnson_overall_passing = (Johnson_percent_math + Johnson_percent_reading)/2\n\nFord_df = combined_data.loc[combined_data[\"school_name\"] == \"Ford High School\", columns]\nFord_df\nmath = Ford_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in 
math)\nread = Ford_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Ford_df[\"Student ID\"].count()\nFord_percent_math = (math_passing/students)*100\nFord_percent_reading = (reading_passing/students)*100\nFord_overall_passing = (Ford_percent_math + Ford_percent_reading)/2\n\nThomas_df = combined_data.loc[combined_data[\"school_name\"] == \"Thomas High School\", columns]\nThomas_df\nmath = Thomas_df[\"math_score\"].astype(int)\nmath_passing = sum(i >= 70 for i in math)\nread = Thomas_df[\"reading_score\"].astype(int)\nreading_passing = sum(i >= 70 for i in read)\nstudents = Thomas_df[\"Student ID\"].count()\nThomas_percent_math = (math_passing/students)*100\nThomas_percent_reading = (reading_passing/students)*100\nThomas_overall_passing = (Thomas_percent_math + Thomas_percent_reading)/2",
"_____no_output_____"
],
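[
"# Optional cross-check (sketch, not part of the original homework code): the per-school passing\n# rates computed above one school at a time can also be reproduced with a single groupby.\nper_school_counts = combined_data.groupby(\"school_name\")[\"Student ID\"].count()\npassing_math_counts = combined_data[combined_data[\"math_score\"] >= 70].groupby(\"school_name\")[\"Student ID\"].count()\npassing_read_counts = combined_data[combined_data[\"reading_score\"] >= 70].groupby(\"school_name\")[\"Student ID\"].count()\ncheck_df = pd.DataFrame({\n    \"% Passing Math\": passing_math_counts / per_school_counts * 100,\n    \"% Passing Reading\": passing_read_counts / per_school_counts * 100,\n})\ncheck_df[\"Overall Passing\"] = (check_df[\"% Passing Math\"] + check_df[\"% Passing Reading\"]) / 2\ncheck_df",
"_____no_output_____"
],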
[
" \nprint(huang_percent_math)\nprint(Figueroa_percent_math)\nprint(Shelton_percent_math)\nprint(Hernandez_percent_math)\nprint(Griffin_percent_math)\nprint(Wilson_percent_math)\nprint(Cabrera_percent_math)\nprint(Bailey_percent_math)\nprint(Holden_percent_math)\nprint(Pena_percent_math)\nprint(Wright_percent_math)\nprint(Rodriguez_percent_math)\nprint(Johnson_percent_math)\nprint(Ford_percent_math)\nprint(Thomas_percent_math)\n\nprint(\"------------------------------\")\n\nprint(huang_percent_reading)\nprint(Figueroa_percent_reading)\nprint(Shelton_percent_reading)\nprint(Hernandez_percent_reading)\nprint(Griffin_percent_reading)\nprint(Wilson_percent_reading)\nprint(Cabrera_percent_reading)\nprint(Bailey_percent_reading)\nprint(Holden_percent_reading)\nprint(Pena_percent_reading)\nprint(Wright_percent_reading)\nprint(Rodriguez_percent_reading)\nprint(Johnson_percent_reading)\nprint(Ford_percent_reading)\nprint(Thomas_percent_reading)\n\nprint(\"------------------------------\")\n\nprint(huang_overall_passing)\nprint(Figueroa_overall_passing)\nprint(Shelton_overall_passing)\nprint(Hernandez_overall_passing)\nprint(Griffin_overall_passing)\nprint(Wilson_overall_passing)\nprint(Cabrera_overall_passing)\nprint(Bailey_overall_passing)\nprint(Holden_overall_passing)\nprint(Pena_overall_passing)\nprint(Wright_overall_passing)\nprint(Rodriguez_overall_passing)\nprint(Johnson_overall_passing)\nprint(Ford_overall_passing)\nprint(Thomas_overall_passing)",
"65.68392183750429\n65.98847066802306\n93.8671209540034\n66.7529665587918\n93.39237057220708\n93.8677179150241\n94.1334768568353\n66.68006430868168\n92.50585480093677\n94.5945945945946\n93.33333333333333\n66.36659164791197\n66.0575509346776\n68.3096020445418\n93.27217125382263\n------------------------------\n81.31642098045938\n80.73923363852154\n95.85462805224304\n80.86299892125135\n97.13896457765668\n96.53964082347788\n97.03982777179763\n81.93327974276528\n96.25292740046838\n95.94594594594594\n96.61111111111111\n80.22005501375344\n81.2224322621298\n79.29901423877328\n97.30886850152906\n------------------------------\n73.50017140898183\n73.36385215327229\n94.86087450312323\n73.80798274002157\n95.26566757493188\n95.20367936925099\n95.58665231431647\n74.30667202572349\n94.37939110070258\n95.27027027027026\n94.97222222222223\n73.2933233308327\n73.6399915984037\n73.80430814165754\n95.29051987767585\n"
],
[
"add_school_summary = pd.DataFrame({\n \"school_name\":(schools1),\n \"math_score\": (math1),\n \"reading_score\": (read1),\n \"Budget per School\": (Budget),\n \"Budget per Student\": (budget_per_student),\n \"size\": (size),\n })\nadd_school_summary = add_school_summary[[\n \"school_name\",\n \"math_score\",\n \"reading_score\",\n \"Budget per Student\",\n \"Budget per School\",\n \"size\",\n ]]\nadd_school_summary\n\nschool_names = add_school_summary.groupby(\"school_name\")\nschool_names.head()\nschool_names = school_names.mean()\nschool_summary = school_names[[\"size\", \n \"math_score\", \n \"reading_score\", \n \"Budget per Student\", \n \"Budget per School\",]] \nschool_summary = school_summary.round(2)\n\ntotal_school = pd.DataFrame({\n \"school_name\":(school),\n \"type\":(types)})\ntotal_school = total_school [[\"school_name\",\n \"type\"]]\n\ntotal_school = total_school.round(2)\n\ncombined_schools = pd.merge(school_summary, total_school, how='outer', on='school_name')\ncombined_schools\n\ncombined_schools_df = pd.DataFrame(combined_schools)\ncombined_schools_df\n\nPassing = {\"school_name\": ['Huang High School', 'Figueroa High School', 'Shelton High School',\n 'Hernandez High School', 'Griffin High School', 'Wilson High School',\n 'Cabrera High School', 'Bailey High School', 'Holden High School',\n 'Pena High School', 'Wright High School', 'Rodriguez High School',\n 'Johnson High School', 'Ford High School', 'Thomas High School'],\n \"% Passing Math\": [65.68392183750429,\n65.98847066802306,\n93.8671209540034,\n66.7529665587918,\n93.39237057220708,\n93.8677179150241,\n94.1334768568353,\n66.68006430868168,\n92.50585480093677,\n94.5945945945946,\n93.33333333333333,\n66.36659164791197,\n66.0575509346776,\n68.3096020445418,\n93.27217125382263],\n \"% Passing Reading\": [81.31642098045938,\n80.73923363852154,\n95.85462805224304,\n80.86299892125135,\n97.13896457765668,\n96.53964082347788,\n97.03982777179763,\n81.93327974276528,\n96.25292740046838,\n95.94594594594594,\n96.61111111111111,\n80.22005501375344,\n81.2224322621298,\n79.29901423877328,\n97.30886850152906],\n \"Overall Passing\": [73.50017140898183,\n73.36385215327229,\n94.86087450312323,\n73.80798274002157,\n95.26566757493188,\n95.20367936925099,\n95.58665231431647,\n74.30667202572349,\n94.37939110070258,\n95.27027027027026,\n94.97222222222223,\n73.2933233308327,\n73.6399915984037,\n73.80430814165754,\n95.29051987767585]}\n\nPassing_df = pd.DataFrame(Passing)\nPassing_df = Passing_df.round(2)\n\ntotal_combined_schools = pd.merge(combined_schools, Passing_df, how='outer', on='school_name')\ntotal_combined_schools\n",
"_____no_output_____"
]
],
[
[
"## Top Performing Schools (By Passing Rate)",
"_____no_output_____"
],
[
"* Sort and display the top five schools in overall passing rate",
"_____no_output_____"
]
],
[
[
"total_combined_schools = total_combined_schools.sort_values(\"Overall Passing\", ascending=False)\ntotal_combined_schools.head()\ntotal_combined_schools = total_combined_schools.reset_index(drop=True)\ntotal_combined_schools.head()",
"_____no_output_____"
]
],
[
[
"## Bottom Performing Schools (By Passing Rate)",
"_____no_output_____"
],
[
"* Sort and display the five worst-performing schools",
"_____no_output_____"
]
],
[
[
"total_combined_schools = total_combined_schools.sort_values(\"Overall Passing\")\ntotal_combined_schools.head()\ntotal_combined_schools = total_combined_schools.reset_index(drop=True)\ntotal_combined_schools.head()",
"_____no_output_____"
]
],
[
[
"## Math Scores by Grade",
"_____no_output_____"
],
[
"* Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school.\n\n * Create a pandas series for each grade. Hint: use a conditional statement.\n \n * Group each series by school\n \n * Combine the series into a dataframe\n \n * Optional: give the displayed data cleaner formatting",
"_____no_output_____"
]
],
[
[
"grades = combined_data[\"grade\"]\ngrade_school_summary = pd.DataFrame({\n \"school_name\":(schools1),\n \"math_score\": (math1),\n \"grade\": (grades),\n })\ngrade_school_summary = grade_school_summary[[\n \"school_name\",\n \"math_score\",\n \"grade\",\n ]]\ngrade_school_summary\ntotal_grade_math = grade_school_summary.groupby([\"school_name\", \"grade\"])\ntotal_grade_math.mean()",
"_____no_output_____"
]
],
[
[
"## Reading Score by Grade ",
"_____no_output_____"
],
[
"* Perform the same operations as above for reading scores",
"_____no_output_____"
]
],
[
[
"grades = combined_data[\"grade\"]\ngrade_school_summary2 = pd.DataFrame({\n \"school_name\":(schools1),\n \"reading_score\": (read1),\n \"grade\": (grades),\n })\ngrade_school_summary2 = grade_school_summary2[[\n \"school_name\",\n \"reading_score\",\n \"grade\",\n ]]\ngrade_school_summary2\ntotal_grade_read = grade_school_summary2.groupby([\"school_name\", \"grade\"])\ntotal_grade_read.mean()",
"_____no_output_____"
]
],
[
[
"## Scores by School Spending",
"_____no_output_____"
],
[
"* Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following:\n * Average Math Score\n * Average Reading Score\n * % Passing Math\n * % Passing Reading\n * Overall Passing Rate (Average of the above two)",
"_____no_output_____"
]
],
[
[
"per_student_bins = [0, 301, 601, 901, 1000]\nbudget_labels = [\"0-300\", \"301-600\", \"601-900\", \"900-1000\"]",
"_____no_output_____"
],
[
"pd.cut(total_combined_schools[\"Budget per Student\"], per_student_bins, labels=budget_labels)\ntotal_combined_schools[\"Budget Bins\"]=pd.cut(total_combined_schools[\"Budget per Student\"], per_student_bins, labels=budget_labels)\ntotal_combined_schools\nreduced_combined_schools = total_combined_schools.iloc[:, [\n 0, 2, 3, 7, 8, 9, 10,]]\nreduced_combined_schools",
"_____no_output_____"
]
],
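[
[
"# Sketch (not part of the original notebook): group the reduced table by the spending bins\n# created above to get the average scores and passing rates per spending range.\nspending_summary = reduced_combined_schools.drop(columns=\"school_name\").groupby(\"Budget Bins\").mean()\nspending_summary",
"_____no_output_____"
]
],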
[
[
"## Scores by School Size",
"_____no_output_____"
],
[
"* Perform the same operations as above, based on school size.",
"_____no_output_____"
]
],
[
[
"# Sample bins. Feel free to create your own bins.\nsize_bins = [0, 1000, 2000, 5000]\ngroup_names = [\"Small (<1000)\", \"Medium (1000-2000)\", \"Large (2000-5000)\"]",
"_____no_output_____"
],
[
"pd.cut(total_combined_schools[\"size\"], size_bins, labels=group_names)\ntotal_combined_schools[\"Size Bins\"]=pd.cut(total_combined_schools[\"size\"], size_bins, labels=group_names)\ntotal_combined_schools\nreduced_size_schools = total_combined_schools.iloc[:, [\n 0, 2, 3, 7, 8, 9, 11,]]\nreduced_size_schools",
"_____no_output_____"
]
],
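[
[
"# Same idea for school size (sketch, not part of the original notebook): average the score\n# columns within each size bin.\nsize_summary = reduced_size_schools.drop(columns=\"school_name\").groupby(\"Size Bins\").mean()\nsize_summary",
"_____no_output_____"
]
],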
[
[
"## Scores by School Type",
"_____no_output_____"
],
[
"* Perform the same operations as above, based on school type.",
"_____no_output_____"
]
],
[
[
"school_types = total_combined_schools.groupby(\"type\")\nschool_types.head()\nschool_types = school_types.mean()\ntypes_total = school_types[[\"math_score\", \n \"reading_score\", \n \"% Passing Math\",\n \"% Passing Reading\",\n \"Overall Passing\",]] \ntypes_total",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ecf5f10e06a86277c0de0f2a4bf797aa572cd4f1 | 6,416 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/non uniform grid-checkpoint.ipynb | zhaonat/eigenwell | 0b2bc3f2800587fa8311460040309e6ea89ab812 | [
"MIT"
] | 6 | 2021-05-03T20:59:28.000Z | 2022-01-06T16:27:26.000Z | notebooks/.ipynb_checkpoints/non uniform grid-checkpoint.ipynb | zhaonat/eigenwell | 0b2bc3f2800587fa8311460040309e6ea89ab812 | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/non uniform grid-checkpoint.ipynb | zhaonat/eigenwell | 0b2bc3f2800587fa8311460040309e6ea89ab812 | [
"MIT"
] | 2 | 2021-05-04T14:21:55.000Z | 2022-03-22T21:07:08.000Z | 29.56682 | 262 | 0.481297 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom eigenwell.src.grid import *",
"/Users/nathanzhao/src/eigenwell/src/grid.py:23: SyntaxWarning: assertion is always true, perhaps remove parentheses?\n assert (len(dL)==len(N) == 2, 'must specify 2 elem arr even for 1d sims')\n"
]
],
[
[
"## gradually scaled non-uniform grid",
"_____no_output_____"
]
],
[
[
"\nNfine = [100,100]; #%specify nx, ny for each region\nNcoarse = [50,50];\nNtran = [50,50];\n\n\n\n# 2) specify the dx and dy of each region\ndx1 = 0.02; dy1 = 0.02;\ndx2 = 0.005; dy2 = 0.005;\ndfine = [dx2, dy2];\ndcoarse = [dx1, dy1];\ndtran = [0 ,0];\n\n#3) stack the vectors\n#drt does not have a value...\n\nNft = np.vstack((Ncoarse, Ntran, Nfine, Ntran, Ncoarse));\ndrt = np.vstack((dcoarse, dtran, dfine, dtran, dcoarse));\n\ndr_mask = np.ones((np.sum(Nft[:,0]),np.sum(Nft[:,1]),2)); #mask stores dx, dy for every grid cell?\n\nprint(Nft)\n\nprint(Nft,np.sum(Nft[:,0]),np.sum(Nft[:,1]))\n\n# # we need a base scale dl\n# #scale is arbitrary, just take dcoarse;\n# dr_reference = dcoarse;\n\n# #4) construct scaling vectors from this information\n# [dx_scale, dy_scale] = generate_nonuniform_scaling(Nft, drt./dr_reference);\n\n# ## calculate Ntot and Ltot\n# N = sum(Nft);\n# Lx = sum(dr_reference(1)*dx_scale);\n# Ly = sum(dr_reference(2)*dy_scale);\n# xrange = 0.5*[-Lx, Lx];\n# yrange = 0.5*[-Ly, Ly];\n# xrange_array = cumsum(dr_reference(1)*dx_scale)-Lx/2;\n# yrange_array = cumsum(dr_reference(1)*dy_scale)-Ly/2;\n# Nx = N(1); Ny = N(2);\n# ## output is a dxscale...dyscale\n\n\n\n",
"[[ 50 50]\n [ 50 50]\n [100 100]\n [ 50 50]\n [ 50 50]]\n[[ 50 50]\n [ 50 50]\n [100 100]\n [ 50 50]\n [ 50 50]] 300 300\n"
],
[
"Nx = np.sum(Nft[:,0]);\nNy = np.sum(Nft[:,1]);\ndx_scale = np.ones(Nx)\ndy_scale = np.ones(Ny);\n\nnum_regions = Nft.shape[0]; #iterate through 0,2,4\nx0 = y0 = 0;\nfor i in range(0,num_regions,2):\n dx_scale[x0:x0+Nft[i,0]] = drt[i,0];\n dy_scale[y0:y0+Nft[i,1]] = drt[i,1];\n if(i==num_regions-1): #%no transition after last region\n x0 = x0+Nft[i,0];\n y0 = y0+Nft[i,1];\n else:\n x0 = x0+Nft[i,0]+Nft[i+1,0];\n y0 = y0+Nft[i,1]+Nft[i+1,1];\n\nprint(dx_scale) \n\nx0 = Nft[1,0]; y0 = Nft[1,1];\nfor i in range(1, num_regions,2): #2:2:num_regions\n dx1 = drt[i-1,0]; dx2 = drt[i+1,0];\n dy1 = drt[i-1,1]; dy2 = drt[i+1,1];\n nxt = Nft[i,0]; nyt = Nft[i,1];\n\n grading_x = np.logspace(np.log10(dx1), np.log10(dx2), nxt+1);\n grading_y = np.logspace(np.log10(dy1), np.log10(dy2), nyt+1);\n\n dx_scale[x0-1:x0+nxt] = grading_x;\n dy_scale[y0-1:y0+nyt] = grading_y;\n \n x0 = x0+Nft[i,0]+Nft[i+1,0];\n y0 = y0+Nft[i,1]+Nft[i+1,1];\n \nprint(dx_scale)\n\nplt.plot(dx_scale)\n## ========================================================================\n## integrate into an operator\n## ========================================================================\n\n[Xs, Ys] = np.meshgrid(dx_scale, dy_scale);\n#meshgrid isn't right for y\nM = np.prod(Xs.shape)\n\n# we have to this kind of flip because the flattening\n# operation (:) doesn't retain row-major order\nYs=Ys.T; Xs = Xs.T;\nFsy = spdiags(Ys.flatten(),0,M,M);\nFsx = spdiags(Xs.flatten(),0,M,M);\n\n# might as well construct the conjugate grid. What is the conjugate grid?\nxc = (dx_scale+np.roll(dx_scale,[0,1]))/2;\nyc = (dy_scale+np.roll(dy_scale,[0,1]))/2;\n\n[Xc, Yc] = np.meshgrid(xc, yc);\nXc = Xc.T;\nYc = Yc.T;\nFsy_conj = sp.spdiags(Yc.flatten(),0,M,M);\nFsx_conj = sp.spdiags(Xc.flatten(),0,M,M);\n \n\n# Dxf = Fsx^-1*createDws('x', 'f', dL, N);%*Fsx; \n# Dyf = Fsy^-1*createDws('y', 'f', dL, N);%*Fsy;\n# Dyb = Fsy_conj^-1*createDws('y', 'b', dL, N);%*Fsx_conj; \n# Dxb = Fsx_conj^-1*createDws('x', 'b', dL, N);%*Fsy_conj; \n",
"_____no_output_____"
],
[
"## PML specification\nNpml = [20,20];",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecf60243a7662b87e10549283a264d98d2b77071 | 26,083 | ipynb | Jupyter Notebook | content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | 65.2075 | 3,512 | 0.502626 | [
[
[
"# In-Class Coding Lab: Data Analysis with Pandas\n\nIn this lab, we will perform a data analysis on the **RMS Titanic** passenger list. The RMS Titanic is one of the most famous ocean liners in history. On April 15, 1912 it sank after colliding with an iceberg in the North Atlantic Ocean. To learn more, read here: https://en.wikipedia.org/wiki/RMS_Titanic \n\nOur goal today is to perform a data analysis on a subset of the passenger list. We're looking for insights as to which types of passengers did and didn't survive. Women? Children? 1st Class Passengers? 3rd class? Etc. \n\nI'm sure you've heard the expression often said during emergencies: \"Women and Children first\" Let's explore this data set and find out if that's true!\n\nBefore we begin you should read up on what each of the columns mean in the data dictionary. You can find this information on this page: https://www.kaggle.com/c/titanic/data \n\n\n## Loading the data set\n\nFirst we load the dataset into a Pandas `DataFrame` variable. The `sample(10)` method takes a random sample of 10 passengers from the data set.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\n# this turns off warning messages\nimport warnings\nwarnings.filterwarnings('ignore')\n\npassengers = pd.read_csv('CCL-titanic.csv')\npassengers.sample(10)",
"_____no_output_____"
]
],
[
[
"## How many survived?\n\nOne of the first things we should do is figure out how many of the passengers in this data set survived. Let's start with isolating just the `'Survivied'` column into a series:",
"_____no_output_____"
]
],
[
[
"passengers['Survived'].sample(10)",
"_____no_output_____"
]
],
[
[
"There's too many to display so we just display a random sample of 10 passengers. \n\n- 1 means the passenger survivied\n- 0 means the passenger died\n\nWhat we really want is to count the number of survivors and deaths. We do this by querying the `value_counts()` of the `['Survived']` column, which returns a `Series` of counts, like this:",
"_____no_output_____"
]
],
[
[
"passengers['Survived'].value_counts()",
"_____no_output_____"
]
],
[
[
"Only 342 passengers survived, and 549 perished. Let's observe this same data as percentages of the whole. We do this by adding the `normalize=True` named argument to the `value_counts()` method.",
"_____no_output_____"
]
],
[
[
"passengers['Survived'].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"**Just 38% of passengers in this dataset survived.**",
"_____no_output_____"
],
[
"### Now you Try it!\n\n**FIRST** Write a Pandas expression to display counts of males and female passengers using the `Sex` variable:",
"_____no_output_____"
]
],
[
[
"# todo write code here\npassengers['Sex'].value_counts()",
"_____no_output_____"
]
],
[
[
"**NEXT** Write a Pandas expression to display male /female passenger counts as a percentage of the whole number of passengers in the data set.",
"_____no_output_____"
]
],
[
[
"# todo write code here\npassengers['Sex'].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"If you got things working, you now know that **35% of passengers were female**.",
"_____no_output_____"
],
[
"## Who survivies? Men or Women?\n\nWe now know that 35% of the passengers were female, and 65% we male. \n\n**The next think to think about is how do survivial rates affect these numbers? **\n\nIf the ratio is about the same for surviviors only, then we can conclude that your **Sex** did not play a role in your survival on the RMS Titanic. \n\nLet's find out.",
"_____no_output_____"
]
],
[
[
"survivors = passengers[passengers['Survived'] ==1]\nsurvivors['PassengerId'].count()",
"_____no_output_____"
]
],
[
[
"Still **342** like we discovered originally. Now let's check the **Sex** split among survivors only:",
"_____no_output_____"
]
],
[
[
"survivors['Sex'].value_counts()",
"_____no_output_____"
]
],
[
[
"WOW! That is a huge difference! But you probably can't see it easily. Let's represent it in a `DataFrame`, so that it's easier to visualize:",
"_____no_output_____"
]
],
[
[
"sex_all_series = passengers['Sex'].value_counts()\nsex_survivor_series = survivors['Sex'].value_counts()\n\nsex_comparision_df = pd.DataFrame({ 'AllPassengers' : sex_all_series, 'Survivors' : sex_survivor_series })\nsex_comparision_df['SexSurvivialRate'] = sex_comparision_df['Survivors'] / sex_comparision_df['AllPassengers']\nsex_comparision_df",
"_____no_output_____"
]
],
[
[
" **So, females had a 74% survival rate. Much better than the overall rate of 38%**\n \nWe should probably briefly explain the code above. \n\n- The first two lines get a series count of all passengers by Sex (male / female) and count of survivors by sex\n- The third line creates DataFrame. Recall a pandas dataframe is just a dict of series. We have two keys 'AllPassengers' and 'Survivors'\n- The fourth line creates a new column in the dataframe which is just the survivors / all passengers to get the rate of survival for that Sex.\n\n## Feature Engineering: Adults and Children\n\nSometimes the variable we want to analyze is not readily available, but can be created from existing data. This is commonly referred to as **feature engineering**. The name comes from machine learning where we use data called *features* to predict an outcome. \n\nLet's create a new feature called `'AgeCat'` as follows:\n\n- When **Age** <=18 then 'Child'\n- When **Age** >18 then 'Adult'\n\nThis is easy to do in pandas. First we create the column and set all values to `np.nan` which means 'Not a number'. This is Pandas way of saying no value. Then we set the values based on the rules we set for the feature.",
"_____no_output_____"
]
],
[
[
"passengers['AgeCat'] = np.nan # Not a number\npassengers['AgeCat'][ passengers['Age'] <=18 ] = 'Child'\npassengers['AgeCat'][ passengers['Age'] > 18 ] = 'Adult'\npassengers.sample(5)",
"_____no_output_____"
]
],
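[
[
"# Alternative sketch (not part of the original lab): the same `AgeCat` feature can be built with\n# `.loc` indexing, which avoids pandas' chained-assignment warning.\npassengers.loc[passengers['Age'] <= 18, 'AgeCat'] = 'Child'\npassengers.loc[passengers['Age'] > 18, 'AgeCat'] = 'Adult'\npassengers.sample(5)",
"_____no_output_____"
]
],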
[
[
"Let's get the count and distrubutions of Adults and Children on the passenger list.",
"_____no_output_____"
]
],
[
[
"passengers['AgeCat'].value_counts()",
"_____no_output_____"
]
],
[
[
"And here's the percentage as a whole:",
"_____no_output_____"
]
],
[
[
"passengers['AgeCat'].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"So close to **80%** of the passengers were adults. Once again let's look at the ratio of `AgeCat` for survivors only. If your age has no bearing of survivial, then the rates should be the same. \n\nHere's the counts of Adult / Children among the survivors only:",
"_____no_output_____"
]
],
[
[
"survivors = passengers[passengers['Survived'] ==1]\nsurvivors['AgeCat'].value_counts()",
"_____no_output_____"
]
],
[
[
"### Now You Try it!\n\nCalculate the `AgeCat` survival rate, similar to how we did for the `SexSurvivalRate`. ",
"_____no_output_____"
]
],
[
[
"agecat_all_series = passengers['AgeCat'].value_counts()\nagecat_survivor_series = survivors['AgeCat'].value_counts()\n\n# todo make a data frame, add AgeCatSurvivialRate column, display dataframe \nagecat_df = pd.DataFrame({\"All\" : agecat_all_series, \"Survivors\" : agecat_survivor_series})\nagecat_df['Age Survival Rate'] = agecat_df['Survivors'] / agecat_df ['All']\nagecat_df",
"_____no_output_____"
]
],
[
[
"**So, children had a 50% survival rate, better than the overall rate of 38%**\n\n## So, women and children first?\n\nIt looks like the RMS really did have the motto: \"Women and Children First.\"\n\nHere's our insights. We know:\n\n- If you were a passenger, you had a 38% chance of survival.\n- If you were a female passenger, you had a 74% chance of survival.\n- If you were a child passenger, you had a 50% chance of survival. \n\n\n### Now you try it for Passenger Class\n\nRepeat this process for `Pclass` The passenger class variable. Display the survival rates for each passenger class. What does the information tell you about passenger class and survival rates?\n\nI'll give you a hint... \"Money Talks\"\n",
"_____no_output_____"
]
],
[
[
"# todo: repeat the analysis in the previous cell for Pclass \npclass_series = passengers['Pclass'].value_counts()\npclass_survivor_series = survivors['Pclass'].value_counts()\n\npclass_df = pd.DataFrame({'All' : pclass_series, 'Survivors' : pclass_survivor_series})\npclass_df['Class Survival Rate'] = pclass_df['Survivors'] / pclass_df['All']\npclass_df",
"_____no_output_____"
]
],
[
[
"https://youtu.be/dqB-EMqpsUA",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecf6028e8c732650435abaf28be0b6486fb82bfb | 838,100 | ipynb | Jupyter Notebook | Experiments/3/notebook3.ipynb | Jahnavi-Majji/Detecting-Depression-in-social-media-using-Deep-Learning | 70824e6aa4e6e3e222160741331ed087d1981b8c | [
"MIT"
] | null | null | null | Experiments/3/notebook3.ipynb | Jahnavi-Majji/Detecting-Depression-in-social-media-using-Deep-Learning | 70824e6aa4e6e3e222160741331ed087d1981b8c | [
"MIT"
] | null | null | null | Experiments/3/notebook3.ipynb | Jahnavi-Majji/Detecting-Depression-in-social-media-using-Deep-Learning | 70824e6aa4e6e3e222160741331ed087d1981b8c | [
"MIT"
] | null | null | null | 746.969697 | 348,860 | 0.944929 | [
[
[
"!pip install wordcloud",
"Requirement already satisfied: wordcloud in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (1.8.1)\nRequirement already satisfied: matplotlib in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from wordcloud) (3.3.4)\nRequirement already satisfied: numpy>=1.6.1 in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from wordcloud) (1.19.5)\nRequirement already satisfied: pillow in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from wordcloud) (8.2.0)\nRequirement already satisfied: python-dateutil>=2.1 in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from matplotlib->wordcloud) (2.8.1)\nRequirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from matplotlib->wordcloud) (1.3.1)\nRequirement already satisfied: cycler>=0.10 in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from matplotlib->wordcloud) (0.10.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from matplotlib->wordcloud) (2.4.7)\nRequirement already satisfied: six in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from cycler>=0.10->matplotlib->wordcloud) (1.15.0)\n"
],
[
"!pip install nltk",
"Requirement already satisfied: nltk in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (3.6.2)\nRequirement already satisfied: regex in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from nltk) (2021.4.4)\nRequirement already satisfied: joblib in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from nltk) (1.0.1)\nRequirement already satisfied: click in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from nltk) (8.0.0)\nRequirement already satisfied: tqdm in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from nltk) (4.60.0)\nRequirement already satisfied: colorama in c:\\users\\hp\\anaconda3\\envs\\tf_env\\lib\\site-packages (from click->nltk) (0.4.4)\n"
],
[
"import nltk\nnltk.download('punkt')",
"[nltk_data] Downloading package punkt to\n[nltk_data] C:\\Users\\HP\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n"
],
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"import numpy as np\nimport itertools\nimport matplotlib.pyplot as plt \nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn import metrics\nfrom nltk.tokenize import word_tokenize\n\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.metrics import log_loss\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.manifold import TSNE\n\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout\n\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.initializers import Constant",
"_____no_output_____"
],
[
"tweets = pd.read_csv(\"./d2.csv\")",
"_____no_output_____"
],
[
"tweets.head()",
"_____no_output_____"
],
[
"for i in range(10):\n tw = tweets.Post[i]\n lb = tweets.Depressed[i]\n print(\"{} - {}\".format(tw, lb))",
"Our most-broken and least-understood rules is \"helpers may not invite private contact as a first resort\", so we've made a new wiki to explain it - 1\nRegular Check-In Post. Plus, a reminder about the No-Activism Rule. - 1\nFuck - 1\nEvery day is a funeral that I mourn the person I used to be. - 1\ni hate myself - 1\nThe other day my car died outside of a drive thru, and it locked the whole system when it shut down, meaning I couldn't push it out of the drive thru completely. Everyone honking at me in the background telling me to move, when I literally could not have pushed my car anymore. Too often I feel... - 1\nAfter struggling for 30 years, I think I'm finally able to explain what my depression feels like. - 1\nFuck humanity - 1\nI feel like the only reason I'm still alive is an obligation to others... - 1\nFor everyone out there. - 1\n"
],
[
"for i in range(len(tweets)-1, len(tweets)-10, -1):\n tw = tweets.Post[i]\n lb = tweets.Depressed[i]\n print(\"{} - {}\".format(tw, lb))",
" Why do we feel more tired doing nothing than when we are doing something. - 0\n Why is the URL of google searches so long, what does it all mean? - 0\n What the hell are birds doing screaming at 5am? - 0\nELI5 why is it easier for men to build muscle mass compared to woman? - 0\nEli5 file vs object vs block storage ( not cloud based ) - 0\n: What physically happens in a hard drive when data is stored? - 0\n When and how did people start using the name \"Jane/John Doe\" as the default for an unidentified person? - 0\n How do Ozone machines work? - 0\n Why is it that when something pushes/pulls on me, I feel acceleration, but I don't feel acceleration from the earth's gravity? I feel a force pushing upward on me, but I don't feel a downward force. - 0\n"
],
[
"tweets = tweets.rename(columns={\"Post\":\"tweet\", \"Depressed\":\"label\"})",
"_____no_output_____"
],
[
"tweets.tweet = tweets.tweet.apply(lambda x : str(x))",
"_____no_output_____"
],
[
"tweets.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 13636 entries, 0 to 13635\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tweet 13636 non-null object\n 1 label 13636 non-null int64 \ndtypes: int64(1), object(1)\nmemory usage: 213.2+ KB\n"
],
[
"def plot_word_cloud(word_cloud):\n plt.figure(figsize=(10, 8), facecolor='k')\n plt.imshow(word_cloud)\n plt.axis(\"off\")\n plt.tight_layout(pad=0)\n plt.savefig(\"./s1.png\")\n plt.show()",
"_____no_output_____"
],
[
"from wordcloud import WordCloud\n\nword_cloud = WordCloud(width = 512, height = 256, collocations=False, colormap=\"Blues\")",
"_____no_output_____"
],
[
"depressive_words = \" \".join(list(tweets[tweets.label == 1].tweet))\ndepressive_wc = word_cloud.generate(depressive_words)\nplot_word_cloud(depressive_wc)",
"_____no_output_____"
],
[
"random_words = \" \".join(list(tweets[tweets.label==0].tweet))\nrandom_wc = word_cloud.generate(random_words)\nplot_word_cloud(random_wc)",
"_____no_output_____"
],
[
"tweets = tweets.sample(frac=1)",
"_____no_output_____"
],
[
"tweets.head(10)",
"_____no_output_____"
],
[
"from nltk.corpus import stopwords \nfrom nltk.tokenize import word_tokenize\nimport re\nimport time\nimport string\n\ndef clean_text(text_data): \n stop_words = set(stopwords.words('english')) \n \n print()\n print(\"Processing.....\")\n print()\n time.sleep(1)\n x=[]\n RE_EMOJI = re.compile('[\\U00010000-\\U0010ffff]', flags=re.UNICODE)\n # print(RE_EMOJI)\n for i in range(len(text_data)):\n q = text_data[i]\n q = RE_EMOJI.sub(r'', q)\n i = q.translate(str.maketrans('','',string.punctuation))\n \n x.append(i)\n print()\n print(\"Completed the processing of the text......\")\n print()\n return x",
"_____no_output_____"
],
[
"x = tweets.tweet\ny = tweets.label\n\nx = clean_text(x)",
"\nProcessing.....\n\n\nCompleted the processing of the text......\n\n"
],
[
"tweets.tweet.head(10)",
"_____no_output_____"
],
[
"# TODO: barplot of count_values()",
"_____no_output_____"
],
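[
"# Sketch for the TODO above (not part of the original notebook): bar plot of the label counts.\nlabel_counts = tweets.label.value_counts()\nlabel_counts.plot(kind=\"bar\", title=\"Class distribution (1 = depressed, 0 = not depressed)\")\nplt.xlabel(\"label\")\nplt.ylabel(\"number of posts\")\nplt.show()",
"_____no_output_____"
],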
[
"from sklearn.model_selection import train_test_split\n\nx_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)",
"_____no_output_____"
],
[
"print(\"Shape of x_train: \", len(x_train))\nprint(\"Shape of y_train: \", len(y_train))\nprint()\nprint(\"Shape of x_test: \", len(x_test))\nprint(\"Shape of y_test: \", len(y_test))",
"Shape of x_train: 10908\nShape of y_train: 10908\n\nShape of x_test: 2728\nShape of y_test: 2728\n"
],
[
"# TODO: plot distribution on train set",
"_____no_output_____"
],
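[
"# Sketch for the TODO above (not part of the original notebook): label distribution of the training split.\ny_train.value_counts().plot(kind=\"bar\", title=\"Training set label distribution\")\nplt.show()",
"_____no_output_____"
],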
[
"# TODO: plot distribution on test sets",
"_____no_output_____"
],
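[
"# Sketch for the TODO above (not part of the original notebook): same plot for the test split.\ny_test.value_counts().plot(kind=\"bar\", title=\"Test set label distribution\")\nplt.show()",
"_____no_output_____"
],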
[
"def getaccuracy(predictions,actual):\n actual = np.array(actual)\n accuracy = np.count_nonzero((predictions==actual) == True)/len(actual)\n return accuracy*100",
"_____no_output_____"
],
[
"def plot_confusion_matrix(cm, classes,normalize=False,title='Confusion matrix',cmap=plt.cm.Blues):\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label') ",
"_____no_output_____"
],
[
"def getconfusionmatrix(predictions,actual,classes,title):\n nb_matrix = confusion_matrix(actual, predictions)\n precision = nb_matrix[1][1]/(nb_matrix[1][1]+nb_matrix[0][1])\n recall = nb_matrix[1][1]/(nb_matrix[1][1]+nb_matrix[1][0])\n f1_score = 2*precision*recall/(precision+recall)\n print(title)\n print(nb_matrix)\n print('Recall',recall)\n print('Precison',precision)\n print('F1 score',f1_score)\n print()\n plt.figure(figsize=(8,8))\n plot_confusion_matrix(nb_matrix, classes=classes, title=title)",
"_____no_output_____"
],
[
"vectorizer = CountVectorizer(stop_words='english')\n\ndef mlmodel(x,y,x_test,y_test,mltype,vect):\n start_time = time.time()\n if vect==2:\n #vectorizer = CountVectorizer(stop_words='english')\n train_features = vectorizer.fit_transform(x)\n test_features = vectorizer.transform(x_test)\n# test_features = pad_sequences(test_features.toarray().tolist(), maxlen=train_features.shape[1], truncating=\"post\", padding=\"post\")\n# test_features = np.array(test_features)\n print(train_features.shape)\n print(test_features.shape)\n elif vect==1:\n tokenizer = Tokenizer()\n tokenizer.fit_on_texts(x)\n MAX_LENGTH=250\n train_features = tokenizer.texts_to_sequences(x)\n train_padded = pad_sequences(train_features, maxlen=MAX_LENGTH, truncating=\"post\", padding=\"post\")\n test_features = tokenizer.texts_to_sequences(x_test)\n test_padded = pad_sequences(test_features, maxlen=MAX_LENGTH, truncating=\"post\", padding=\"post\")\n train_features = np.array(train_padded)\n test_features = np.array(test_padded)\n \n actual = y\n if mltype=='Naive Bayes':\n from sklearn.naive_bayes import MultinomialNB\n model = MultinomialNB()\n model.fit(train_features, [int(r) for r in y])\n elif mltype=='SVM':\n from sklearn.svm import SVC\n model = SVC()\n model = model.fit(train_features, [int(r) for r in y])\n elif mltype=='Random Forest':\n from sklearn.ensemble import RandomForestClassifier\n model = RandomForestClassifier(max_depth=2, random_state=0)\n model = model.fit(train_features, [int(i) for i in y])\n \n# train_predictions = model.predict(vectorizer.transform(x))\n# test_predictions = model.predict(vectorizer.transform(x_test))\n \n train_predictions = model.predict(train_features)\n test_predictions = model.predict(test_features)\n\n train_accuracy = getaccuracy(train_predictions,actual)\n test_accuracy = getaccuracy(test_predictions,y_test)\n\n classes = [0,1]\n getconfusionmatrix(train_predictions,actual,classes,'Confusion matrix For '+mltype+' classifier for training dataset')\n print(\"\\n\")\n getconfusionmatrix(test_predictions,y_test,classes,'Confusion matrix For '+mltype+' classifier for testing dataset')\n print(\"\\n\")\n # rrr, fff, thresholds = metrics.roc_curve(actual4, prediction4, pos_label=1)\n # kn = format(metrics.auc(rrr, fff))\n # kn = float(kn)*100\n\n print(mltype,\" Accuracy for training datatset: \\n\", train_accuracy, \"%\")\n print(mltype,\" Accuracy for testing datatset: \\n\", test_accuracy, \"%\")\n print(\" Completion Speed\", round((time.time() - start_time),5))\n print()\n print()\n return [model,mltype,train_accuracy,test_accuracy]",
"_____no_output_____"
],
[
"tokenizer_choice = 1 # 1 for Tokenizer, 2 for CountVectorizer\ntokenizer_name = 'Tokenizer' if tokenizer_choice==1 else 'Count Tokenizer'\n\nanalysis=[]\ntrained_models = {}\nmltype_list =['Naive Bayes','SVM','Random Forest']\nfor t in mltype_list:\n temp = mlmodel(x_train,y_train,x_test,y_test,t,vect=tokenizer_choice)\n trained_models[t] = temp[0]\n analysis.append(temp[1:])",
"Confusion matrix For Naive Bayes classifier for training dataset\n[[1690 3590]\n [1668 3960]]\nRecall 0.7036247334754797\nPrecison 0.5245033112582781\nF1 score 0.6010016694490817\n\n\n\nConfusion matrix For Naive Bayes classifier for testing dataset\n[[ 386 902]\n [ 424 1016]]\nRecall 0.7055555555555556\nPrecison 0.529718456725756\nF1 score 0.6051220964860037\n\n\n\nNaive Bayes Accuracy for training datatset: \n 51.796846351301795 %\nNaive Bayes Accuracy for testing datatset: \n 51.39296187683284 %\n Completion Speed 1.01293\n\n\nConfusion matrix For SVM classifier for training dataset\n[[1341 3939]\n [ 352 5276]]\nRecall 0.937455579246624\nPrecison 0.5725447639717851\nF1 score 0.710907498484134\n\n\n\nConfusion matrix For SVM classifier for testing dataset\n[[ 146 1142]\n [ 154 1286]]\nRecall 0.8930555555555556\nPrecison 0.5296540362438221\nF1 score 0.6649431230610134\n\n\n\nSVM Accuracy for training datatset: \n 60.66189952328567 %\nSVM Accuracy for testing datatset: \n 52.49266862170088 %\n Completion Speed 69.87689\n\n\nConfusion matrix For Random Forest classifier for training dataset\n[[ 139 5141]\n [ 28 5600]]\nRecall 0.9950248756218906\nPrecison 0.5213667256307606\nF1 score 0.6842201722768647\n\n\n\nConfusion matrix For Random Forest classifier for testing dataset\n[[ 6 1282]\n [ 7 1433]]\nRecall 0.9951388888888889\nPrecison 0.5278084714548803\nF1 score 0.6897713598074608\n\n\n\nRandom Forest Accuracy for training datatset: \n 52.612761276127614 %\nRandom Forest Accuracy for testing datatset: \n 52.74926686217009 %\n Completion Speed 1.12003\n\n\n"
],
[
"#Random_Forest(x_train, y_train, x_test, y_test, vectorizer_choice)",
"_____no_output_____"
],
[
"#SVM(x_train, y_train, x_test, y_test, vectorizer_choice)",
"_____no_output_____"
],
[
"#Naive_Bayes(x_train, y_train, x_test, y_test, vectorizer_choice)",
"_____no_output_____"
],
[
"analysis_df = pd.DataFrame(analysis,columns=['Model Name','Train Accuracy','Test Accuracy'])\nprint(analysis_df)\n#analysis_df.to_csv('generated/analysis_df'+tokenizer_name+'.csv',index=False)",
" Model Name Train Accuracy Test Accuracy\n0 Naive Bayes 51.796846 51.392962\n1 SVM 60.661900 52.492669\n2 Random Forest 52.612761 52.749267\n"
],
[
"targets = {\"1\" : \"depressed\", \"0\" : \"normal\"}\n\ndef predict_label(text):\n for name, model in trained_models.items():\n prediction = model.predict(vectorizer.transform([text]))\n print()\n print(\"{} predicted the text as: {}\".format(name, targets[prediction]))",
"_____no_output_____"
],
[
"neg_sent = \"5 signs, you are suffering from depression\"\npredict_label(neg_sent)",
"_____no_output_____"
],
[
"pos_sent = \"I am normal. This is just to let you know.\"\npredict_label(pos_sent)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf61328e5f9ea7d3d74f8a759ad58ce286b4e2c | 18,135 | ipynb | Jupyter Notebook | matplotlib/gallery_jupyter/text_labels_and_annotations/annotation_demo.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 13 | 2020-01-04T07:37:38.000Z | 2021-08-31T05:19:58.000Z | matplotlib/gallery_jupyter/text_labels_and_annotations/annotation_demo.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 3 | 2020-06-05T22:42:53.000Z | 2020-08-24T07:18:54.000Z | matplotlib/gallery_jupyter/text_labels_and_annotations/annotation_demo.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 9 | 2020-10-19T04:53:06.000Z | 2021-08-31T05:20:01.000Z | 143.928571 | 6,211 | 0.539234 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Annotating Plots\n\n\nThe following examples show how it is possible to annotate plots in Matplotlib.\nThis includes highlighting specific points of interest and using various\nvisual tools to call attention to this point. For a more complete and in-depth\ndescription of the annotation and text tools in Matplotlib, see the\n:doc:`tutorial on annotation </tutorials/text/annotations>`.\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse\nimport numpy as np\nfrom matplotlib.text import OffsetFrom",
"_____no_output_____"
]
],
[
[
"Specifying text points and annotation points\n--------------------------------------------\n\nYou must specify an annotation point ``xy=(x, y)`` to annotate this point.\nAdditionally, you may specify a text point ``xytext=(x, y)`` for the location\nof the text for this annotation. Optionally, you can specify the coordinate\nsystem of *xy* and *xytext* with one of the following strings for *xycoords*\nand *textcoords* (default is 'data')::\n\n 'figure points' : points from the lower left corner of the figure\n 'figure pixels' : pixels from the lower left corner of the figure\n 'figure fraction' : (0, 0) is lower left of figure and (1, 1) is upper right\n 'axes points' : points from lower left corner of axes\n 'axes pixels' : pixels from lower left corner of axes\n 'axes fraction' : (0, 0) is lower left of axes and (1, 1) is upper right\n 'offset points' : Specify an offset (in points) from the xy value\n 'offset pixels' : Specify an offset (in pixels) from the xy value\n 'data' : use the axes data coordinate system\n\nNote: for physical coordinate systems (points or pixels) the origin is the\n(bottom, left) of the figure or axes.\n\nOptionally, you can specify arrow properties which draws and arrow\nfrom the text to the annotated point by giving a dictionary of arrow\nproperties\n\nValid keys are::\n\n width : the width of the arrow in points\n frac : the fraction of the arrow length occupied by the head\n headwidth : the width of the base of the arrow head in points\n shrink : move the tip and base some percent away from the\n annotated point and text\n any key for matplotlib.patches.polygon (e.g., facecolor)\n\n",
"_____no_output_____"
]
],
[
[
"# Create our figure and data we'll use for plotting\nfig, ax = plt.subplots(figsize=(3, 3))\n\nt = np.arange(0.0, 5.0, 0.01)\ns = np.cos(2*np.pi*t)\n\n# Plot a line and add some simple annotations\nline, = ax.plot(t, s)\nax.annotate('figure pixels',\n xy=(10, 10), xycoords='figure pixels')\nax.annotate('figure points',\n xy=(80, 80), xycoords='figure points')\nax.annotate('figure fraction',\n xy=(.025, .975), xycoords='figure fraction',\n horizontalalignment='left', verticalalignment='top',\n fontsize=20)\n\n# The following examples show off how these arrows are drawn.\n\nax.annotate('point offset from data',\n xy=(2, 1), xycoords='data',\n xytext=(-15, 25), textcoords='offset points',\n arrowprops=dict(facecolor='black', shrink=0.05),\n horizontalalignment='right', verticalalignment='bottom')\n\nax.annotate('axes fraction',\n xy=(3, 1), xycoords='data',\n xytext=(0.8, 0.95), textcoords='axes fraction',\n arrowprops=dict(facecolor='black', shrink=0.05),\n horizontalalignment='right', verticalalignment='top')\n\n# You may also use negative points or pixels to specify from (right, top).\n# E.g., (-10, 10) is 10 points to the left of the right side of the axes and 10\n# points above the bottom\n\nax.annotate('pixel offset from axes fraction',\n xy=(1, 0), xycoords='axes fraction',\n xytext=(-20, 20), textcoords='offset pixels',\n horizontalalignment='right',\n verticalalignment='bottom')\n\nax.set(xlim=(-1, 5), ylim=(-3, 5))",
"_____no_output_____"
]
],
[
[
"Using multiple coordinate systems and axis types\n------------------------------------------------\n\nYou can specify the *xypoint* and the *xytext* in different positions and\ncoordinate systems, and optionally turn on a connecting line and mark the\npoint with a marker. Annotations work on polar axes too.\n\nIn the example below, the *xy* point is in native coordinates (*xycoords*\ndefaults to 'data'). For a polar axes, this is in (theta, radius) space.\nThe text in the example is placed in the fractional figure coordinate system.\nText keyword args like horizontal and vertical alignment are respected.\n\n",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(subplot_kw=dict(projection='polar'), figsize=(3, 3))\nr = np.arange(0, 1, 0.001)\ntheta = 2*2*np.pi*r\nline, = ax.plot(theta, r)\n\nind = 800\nthisr, thistheta = r[ind], theta[ind]\nax.plot([thistheta], [thisr], 'o')\nax.annotate('a polar annotation',\n xy=(thistheta, thisr), # theta, radius\n xytext=(0.05, 0.05), # fraction, fraction\n textcoords='figure fraction',\n arrowprops=dict(facecolor='black', shrink=0.05),\n horizontalalignment='left',\n verticalalignment='bottom')\n\n# You can also use polar notation on a cartesian axes. Here the native\n# coordinate system ('data') is cartesian, so you need to specify the\n# xycoords and textcoords as 'polar' if you want to use (theta, radius).\n\nel = Ellipse((0, 0), 10, 20, facecolor='r', alpha=0.5)\n\nfig, ax = plt.subplots(subplot_kw=dict(aspect='equal'))\nax.add_artist(el)\nel.set_clip_box(ax.bbox)\nax.annotate('the top',\n xy=(np.pi/2., 10.), # theta, radius\n xytext=(np.pi/3, 20.), # theta, radius\n xycoords='polar',\n textcoords='polar',\n arrowprops=dict(facecolor='black', shrink=0.05),\n horizontalalignment='left',\n verticalalignment='bottom',\n clip_on=True) # clip to the axes bounding box\n\nax.set(xlim=[-20, 20], ylim=[-20, 20])",
"_____no_output_____"
]
],
[
[
"Customizing arrow and bubble styles\n-----------------------------------\n\nThe arrow between *xytext* and the annotation point, as well as the bubble\nthat covers the annotation text, are highly customizable. Below are a few\nparameter options as well as their resulting output.\n\n",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(8, 5))\n\nt = np.arange(0.0, 5.0, 0.01)\ns = np.cos(2*np.pi*t)\nline, = ax.plot(t, s, lw=3)\n\nax.annotate('straight',\n xy=(0, 1), xycoords='data',\n xytext=(-50, 30), textcoords='offset points',\n arrowprops=dict(arrowstyle=\"->\"))\n\nax.annotate('arc3,\\nrad 0.2',\n xy=(0.5, -1), xycoords='data',\n xytext=(-80, -60), textcoords='offset points',\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"arc3,rad=.2\"))\n\nax.annotate('arc,\\nangle 50',\n xy=(1., 1), xycoords='data',\n xytext=(-90, 50), textcoords='offset points',\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"arc,angleA=0,armA=50,rad=10\"))\n\nax.annotate('arc,\\narms',\n xy=(1.5, -1), xycoords='data',\n xytext=(-80, -60), textcoords='offset points',\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"arc,angleA=0,armA=40,angleB=-90,armB=30,rad=7\"))\n\nax.annotate('angle,\\nangle 90',\n xy=(2., 1), xycoords='data',\n xytext=(-70, 30), textcoords='offset points',\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"angle,angleA=0,angleB=90,rad=10\"))\n\nax.annotate('angle3,\\nangle -90',\n xy=(2.5, -1), xycoords='data',\n xytext=(-80, -60), textcoords='offset points',\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"angle3,angleA=0,angleB=-90\"))\n\nax.annotate('angle,\\nround',\n xy=(3., 1), xycoords='data',\n xytext=(-60, 30), textcoords='offset points',\n bbox=dict(boxstyle=\"round\", fc=\"0.8\"),\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"angle,angleA=0,angleB=90,rad=10\"))\n\nax.annotate('angle,\\nround4',\n xy=(3.5, -1), xycoords='data',\n xytext=(-70, -80), textcoords='offset points',\n size=20,\n bbox=dict(boxstyle=\"round4,pad=.5\", fc=\"0.8\"),\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"angle,angleA=0,angleB=-90,rad=10\"))\n\nax.annotate('angle,\\nshrink',\n xy=(4., 1), xycoords='data',\n xytext=(-60, 30), textcoords='offset points',\n bbox=dict(boxstyle=\"round\", fc=\"0.8\"),\n arrowprops=dict(arrowstyle=\"->\",\n shrinkA=0, shrinkB=10,\n connectionstyle=\"angle,angleA=0,angleB=90,rad=10\"))\n\n# You can pass an empty string to get only annotation arrows rendered\nann = ax.annotate('', xy=(4., 1.), xycoords='data',\n xytext=(4.5, -1), textcoords='data',\n arrowprops=dict(arrowstyle=\"<->\",\n connectionstyle=\"bar\",\n ec=\"k\",\n shrinkA=5, shrinkB=5))\n\nax.set(xlim=(-1, 5), ylim=(-4, 3))\n\n# We'll create another figure so that it doesn't get too cluttered\nfig, ax = plt.subplots()\n\nel = Ellipse((2, -1), 0.5, 0.5)\nax.add_patch(el)\n\nax.annotate('$->$',\n xy=(2., -1), xycoords='data',\n xytext=(-150, -140), textcoords='offset points',\n bbox=dict(boxstyle=\"round\", fc=\"0.8\"),\n arrowprops=dict(arrowstyle=\"->\",\n patchB=el,\n connectionstyle=\"angle,angleA=90,angleB=0,rad=10\"))\n\nax.annotate('arrow\\nfancy',\n xy=(2., -1), xycoords='data',\n xytext=(-100, 60), textcoords='offset points',\n size=20,\n # bbox=dict(boxstyle=\"round\", fc=\"0.8\"),\n arrowprops=dict(arrowstyle=\"fancy\",\n fc=\"0.6\", ec=\"none\",\n patchB=el,\n connectionstyle=\"angle3,angleA=0,angleB=-90\"))\n\nax.annotate('arrow\\nsimple',\n xy=(2., -1), xycoords='data',\n xytext=(100, 60), textcoords='offset points',\n size=20,\n # bbox=dict(boxstyle=\"round\", fc=\"0.8\"),\n arrowprops=dict(arrowstyle=\"simple\",\n fc=\"0.6\", ec=\"none\",\n patchB=el,\n connectionstyle=\"arc3,rad=0.3\"))\n\nax.annotate('wedge',\n xy=(2., -1), xycoords='data',\n xytext=(-100, -100), textcoords='offset points',\n size=20,\n # bbox=dict(boxstyle=\"round\", 
fc=\"0.8\"),\n arrowprops=dict(arrowstyle=\"wedge,tail_width=0.7\",\n fc=\"0.6\", ec=\"none\",\n patchB=el,\n connectionstyle=\"arc3,rad=-0.3\"))\n\nann = ax.annotate('bubble,\\ncontours',\n xy=(2., -1), xycoords='data',\n xytext=(0, -70), textcoords='offset points',\n size=20,\n bbox=dict(boxstyle=\"round\",\n fc=(1.0, 0.7, 0.7),\n ec=(1., .5, .5)),\n arrowprops=dict(arrowstyle=\"wedge,tail_width=1.\",\n fc=(1.0, 0.7, 0.7), ec=(1., .5, .5),\n patchA=None,\n patchB=el,\n relpos=(0.2, 0.8),\n connectionstyle=\"arc3,rad=-0.1\"))\n\nann = ax.annotate('bubble',\n xy=(2., -1), xycoords='data',\n xytext=(55, 0), textcoords='offset points',\n size=20, va=\"center\",\n bbox=dict(boxstyle=\"round\", fc=(1.0, 0.7, 0.7), ec=\"none\"),\n arrowprops=dict(arrowstyle=\"wedge,tail_width=1.\",\n fc=(1.0, 0.7, 0.7), ec=\"none\",\n patchA=None,\n patchB=el,\n relpos=(0.2, 0.5)))\n\nax.set(xlim=(-1, 5), ylim=(-5, 3))",
"_____no_output_____"
]
],
[
[
"More examples of coordinate systems\n-----------------------------------\n\nBelow we'll show a few more examples of coordinate systems and how the\nlocation of annotations may be specified.\n\n",
"_____no_output_____"
]
],
[
[
"fig, (ax1, ax2) = plt.subplots(1, 2)\n\nbbox_args = dict(boxstyle=\"round\", fc=\"0.8\")\narrow_args = dict(arrowstyle=\"->\")\n\n# Here we'll demonstrate the extents of the coordinate system and how\n# we place annotating text.\n\nax1.annotate('figure fraction : 0, 0', xy=(0, 0), xycoords='figure fraction',\n xytext=(20, 20), textcoords='offset points',\n ha=\"left\", va=\"bottom\",\n bbox=bbox_args,\n arrowprops=arrow_args)\n\nax1.annotate('figure fraction : 1, 1', xy=(1, 1), xycoords='figure fraction',\n xytext=(-20, -20), textcoords='offset points',\n ha=\"right\", va=\"top\",\n bbox=bbox_args,\n arrowprops=arrow_args)\n\nax1.annotate('axes fraction : 0, 0', xy=(0, 0), xycoords='axes fraction',\n xytext=(20, 20), textcoords='offset points',\n ha=\"left\", va=\"bottom\",\n bbox=bbox_args,\n arrowprops=arrow_args)\n\nax1.annotate('axes fraction : 1, 1', xy=(1, 1), xycoords='axes fraction',\n xytext=(-20, -20), textcoords='offset points',\n ha=\"right\", va=\"top\",\n bbox=bbox_args,\n arrowprops=arrow_args)\n\n# It is also possible to generate draggable annotations\n\nan1 = ax1.annotate('Drag me 1', xy=(.5, .7), xycoords='data',\n #xytext=(.5, .7), textcoords='data',\n ha=\"center\", va=\"center\",\n bbox=bbox_args,\n #arrowprops=arrow_args\n )\n\nan2 = ax1.annotate('Drag me 2', xy=(.5, .5), xycoords=an1,\n xytext=(.5, .3), textcoords='axes fraction',\n ha=\"center\", va=\"center\",\n bbox=bbox_args,\n arrowprops=dict(patchB=an1.get_bbox_patch(),\n connectionstyle=\"arc3,rad=0.2\",\n **arrow_args))\nan1.draggable()\nan2.draggable()\n\nan3 = ax1.annotate('', xy=(.5, .5), xycoords=an2,\n xytext=(.5, .5), textcoords=an1,\n ha=\"center\", va=\"center\",\n bbox=bbox_args,\n arrowprops=dict(patchA=an1.get_bbox_patch(),\n patchB=an2.get_bbox_patch(),\n connectionstyle=\"arc3,rad=0.2\",\n **arrow_args))\n\n# Finally we'll show off some more complex annotation and placement\n\ntext = ax2.annotate('xy=(0, 1)\\nxycoords=(\"data\", \"axes fraction\")',\n xy=(0, 1), xycoords=(\"data\", 'axes fraction'),\n xytext=(0, -20), textcoords='offset points',\n ha=\"center\", va=\"top\",\n bbox=bbox_args,\n arrowprops=arrow_args)\n\nax2.annotate('xy=(0.5, 0)\\nxycoords=artist',\n xy=(0.5, 0.), xycoords=text,\n xytext=(0, -20), textcoords='offset points',\n ha=\"center\", va=\"top\",\n bbox=bbox_args,\n arrowprops=arrow_args)\n\nax2.annotate('xy=(0.8, 0.5)\\nxycoords=ax1.transData',\n xy=(0.8, 0.5), xycoords=ax1.transData,\n xytext=(10, 10),\n textcoords=OffsetFrom(ax2.bbox, (0, 0), \"points\"),\n ha=\"left\", va=\"bottom\",\n bbox=bbox_args,\n arrowprops=arrow_args)\n\nax2.set(xlim=[-2, 2], ylim=[-2, 2])\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf6307f90aa834cc0e180484e4fd7ac572c6cac | 761,505 | ipynb | Jupyter Notebook | notebooks/siwo_ice_animation.ipynb | axiom-data-science/SIWO | 06ed16e77a5a654551031224b8e2ad4f88ab753f | [
"MIT"
] | null | null | null | notebooks/siwo_ice_animation.ipynb | axiom-data-science/SIWO | 06ed16e77a5a654551031224b8e2ad4f88ab753f | [
"MIT"
] | null | null | null | notebooks/siwo_ice_animation.ipynb | axiom-data-science/SIWO | 06ed16e77a5a654551031224b8e2ad4f88ab753f | [
"MIT"
] | null | null | null | 2,239.720588 | 749,152 | 0.957883 | [
[
[
"# Generate animation of ice from HYCOM to support Sea Ice for Walrus Outlook (SIWO)\n\n## Purpose\nThe SIWO community is supported by multiple groups including [ARCUS](https://www.arcus.org/siwo), the [National Weather Service Fairbanks Decision Support office](https://www.weather.gov/afg/SIWO_overview), and an active [Facebook community page](https://www.facebook.com/seaiceforwalrus).\n\nAxiom Data Science was asked by the Alaska Ocean Observing System to explore putting together some supporting imagery that could be regularly produced and sent out each week via email or the web. The requirements:\n\n- show ice movement in and around Alaska waters\n- produce a product small enough for low-bandwidth areas of Alaska\n\n## Inputs\nWe used prediction results from the HYbrid Coordinate Ocean Model (HYCOM). HYCOM produces many different outputs, but their [Global Ocean Forecasting System (GOFS) 3.1 Analysis](https://www.hycom.org/) produces fields for ice area fraction and sea ice velocity, there are snapshots of the model results every 3 hours, and forecasts generally extend a week into the future. They also offer THREDDS end-point, and if you spatially subset the data the aggregated file downloads aren’t too bad (~360 Mb).\n\n## Outputs\nAiming for a gif animation that shows ice for the currently available forecast period (nominally, at least one week).",
"_____no_output_____"
]
],
[
[
"from typing import Tuple\nfrom datetime import datetime\nfrom pandas import Timestamp, Timedelta\nimport numpy as np\nimport xarray as xr\nimport cartopy\nimport cmocean\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nfrom matplotlib.figure import Figure\nimport io\nfrom PIL import Image\nfrom tqdm.notebook import tqdm, trange\nimport imageio",
"_____no_output_____"
]
],
[
[
"## HYCOM THREDDS Server\nForecasts are available from yesterday to ~7 days in the future.",
"_____no_output_____"
]
],
[
[
"loc = 'https://tds.hycom.org/thredds/dodsC/GLBy0.08/expt_93.0/FMRC/ice/GLBy0.08_930_FMRC_ice_best.ncd'\nhycom = xr.open_dataset(loc, drop_variables='tau')\nhycom = hycom.sel(lat=slice(55,72), lon=slice(175,215))",
"_____no_output_____"
],
[
"hycom = hycom[['sic', 'siv', 'siu']].load()",
"CPU times: user 2.48 s, sys: 1.21 s, total: 3.7 s\nWall time: 46 s\n"
],
[
"def make_basemap(\n extent: Tuple = None,\n lat_stride: int = 10,\n lon_stride: int = 30\n):\n \"\"\"Produces the figure's geospatially aware basemap.\"\"\"\n \n figure_background = '#000000'\n plate_crs = cartopy.crs.PlateCarree()\n merc_crs = cartopy.crs.Mercator(central_longitude=-150)\n\n fig = plt.figure(dpi=144, figsize=(14, 14))\n fig.patch.set_facecolor(figure_background)\n\n # setting the anchor means we will lock in the upper left side after the aspect changes\n ax = fig.add_axes([0, 0, 1, 1], projection = merc_crs, frameon=False)\n ax.set_extent(extent, crs=plate_crs)\n ax.spines['geo'].set_edgecolor('white')\n ax.spines['geo'].set_visible(False)\n \n # Contextual Map Elements\n land_color = '#333333'\n coast_color = '#333333'\n ocean_color = '#040613'\n land_10m = cartopy.feature.NaturalEarthFeature('physical', 'land', '10m',\n edgecolor=coast_color,\n facecolor=land_color,\n zorder=10)\n ax.add_feature(land_10m)\n rivers = cartopy.feature.NaturalEarthFeature('physical', 'rivers_lake_centerlines', '10m',\n edgecolor=ocean_color,\n facecolor='none',\n zorder=11)\n ax.add_feature(rivers, alpha=0.2)\n lakes = cartopy.feature.NaturalEarthFeature('physical', 'lakes', '10m',\n edgecolor='#000000',\n facecolor=ocean_color,\n zorder=11)\n ax.add_feature(lakes, alpha=0.2)\n \n return fig, ax\n\ndef make_map(\n hycom: xr.Dataset,\n time: datetime,\n extent: Tuple = (175, 215, 55, 72),\n lat_stride: int = 5,\n lon_stride: int = 10,\n padding: float = 0.01,\n) -> Figure:\n \"\"\"Let's make this map.\"\"\"\n \n fig, ax = make_basemap(extent=extent, lat_stride=lat_stride, lon_stride=lon_stride)\n plate_crs = cartopy.crs.PlateCarree()\n ax.pcolormesh(hycom['lon'], hycom['lat'], hycom['sic'], transform = plate_crs,\n cmap=cmocean.cm.ice, vmin=0, vmax=1, zorder=2)\n \n def xcoords(lat, lon):\n \"\"\"Do the things quiver needs for latitudes and longitudes.\"\"\"\n xlat = np.tile(lat, len(lon))\n xlat = np.reshape(xlat, (len(lon), len(lat)))\n xlat = xlat.T\n xlon = np.tile(lon, len(lat))\n xlon = np.reshape(xlon, (len(lat), len(lon)))\n return xlat, xlon\n \n hycom_xlat, hycom_xlon = xcoords(hycom['lat'].values, hycom['lon'].values)\n icev_xstride=20\n icev_ystride=20\n iceq = ax.quiver(hycom_xlon[::icev_ystride, ::icev_xstride], hycom_xlat[::icev_ystride, ::icev_xstride],\n hycom['siu'][::icev_ystride,::icev_xstride].values, hycom['siv'][::icev_ystride,::icev_xstride].values,\n transform=plate_crs, facecolor='white', alpha=1.0, scale=20, pivot='tip',\n headlength=3, headaxislength=2.8, width=0.005, linewidth=0.5, zorder=5, edgecolor='black')\n levels = [0.15]\n ax.contour(hycom['lon'].values, hycom['lat'].values, hycom['sic'].values, levels, colors='#ffffff',\n linewidths=0.5, transform = plate_crs, zorder=3)\n primary_label = Timestamp(time).strftime('%b %d, %H:%M')\n ax_bottom = ax.get_position().y0\n ax_left = ax.get_position().x0\n y_padding = padding\n x_padding = padding\n ax.text(\n ax_left + x_padding,\n ax_bottom + y_padding,\n primary_label,\n color='white',\n verticalalignment = 'bottom',\n horizontalalignment = 'left',\n fontsize=45,\n transform=ax.transAxes,\n zorder=20\n )\n ax.text(\n ax_left + x_padding,\n ax_bottom + y_padding + 0.06,\n 'Data from HYCOM \\nArrows = Ice Velocity',\n color='white',\n verticalalignment = 'bottom',\n horizontalalignment = 'left',\n fontsize=20,\n transform=ax.transAxes,\n zorder=20\n )\n \n # save and pass back\n buf = io.BytesIO()\n figure_background = '#000000'\n fig.savefig(buf, bbox_inches='tight', format='PNG', facecolor=figure_background)\n 
plt.close(fig)\n \n return Image.open(buf)",
"_____no_output_____"
],
[
"times = hycom['sic']['time'].values\nmake_map(hycom.sel(time=times[15]), times[15])",
"/opt/conda/envs/py37/lib/python3.7/site-packages/cartopy/io/__init__.py:260: DownloadWarning: Downloading: https://naciscdn.org/naturalearth/10m/physical/ne_10m_land.zip\n warnings.warn('Downloading: {}'.format(url), DownloadWarning)\n/opt/conda/envs/py37/lib/python3.7/site-packages/cartopy/io/__init__.py:260: DownloadWarning: Downloading: https://naciscdn.org/naturalearth/10m/physical/ne_10m_rivers_lake_centerlines.zip\n warnings.warn('Downloading: {}'.format(url), DownloadWarning)\n/opt/conda/envs/py37/lib/python3.7/site-packages/cartopy/io/__init__.py:260: DownloadWarning: Downloading: https://naciscdn.org/naturalearth/10m/physical/ne_10m_lakes.zip\n warnings.warn('Downloading: {}'.format(url), DownloadWarning)\n"
]
],
[
[
"## Loop through all the available times",
"_____no_output_____"
]
],
[
[
"times = hycom['sic']['time'].values\nresults = []\nfor time in tqdm(times):\n result = make_map(hycom.sel(time=time), time)\n results.append(result)",
"_____no_output_____"
]
],
[
[
"## Compress the frames, and then make a gif",
"_____no_output_____"
]
],
[
[
"def resize_image(input_image, width=1280):\n oldwidth = input_image.size[0]\n height = int(input_image.size[1] * (width/oldwidth))\n output_image = input_image.resize((width, height), Image.ANTIALIAS)\n return output_image",
"_____no_output_____"
],
[
"outdir = yes.strftime('%Y%m%d')\nquality=50\nwidth=640\ndownsized = [resize_image(result, width=width) for result in results]\nout_name = f'../gifs/iceforecast-{outdir}-pil-{width}-q{quality}.gif'\ndownsized[0].save(fp=out_name, format='GIF', append_images=downsized[1:],\n quality=quality, save_all=True,\n duration=250, loop=0, optimize=True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecf631f48697b2decb2808fb5f93746eaa6032d3 | 373,895 | ipynb | Jupyter Notebook | contour.ipynb | lorforlinux/hand-brush | b9f16fdb4163001919960d047e82749c815b4b71 | [
"MIT"
] | 2 | 2020-03-11T06:27:02.000Z | 2020-03-17T10:52:48.000Z | contour.ipynb | deepaklorkhatri007/hand-brush | b9f16fdb4163001919960d047e82749c815b4b71 | [
"MIT"
] | null | null | null | contour.ipynb | deepaklorkhatri007/hand-brush | b9f16fdb4163001919960d047e82749c815b4b71 | [
"MIT"
] | null | null | null | 2,476.125828 | 186,052 | 0.962439 | [
[
[
"# Author : Deepak Khatri\n# Date : 26th Feb 2020\nfrom utils import detector_utils as detector_utils\nimport cv2\nimport tensorflow as tf\nimport datetime\nimport argparse\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport math",
"_____no_output_____"
],
[
"detection_graph, sess = detector_utils.load_inference_graph()",
"> ====== loading HAND frozen graph into memory\n> ====== Hand Inference graph loaded.\n"
],
[
"cap = cv2.imread(\"assets/hai_robert.jpg\")\nim_height, im_width = cap.shape[:2]\nimage_np = cv2.cvtColor(cap, cv2.COLOR_BGR2RGB)\ncap = image_np.copy()\n# cv2.imshow(\"\",cap)\nboxes, scores = detector_utils.detect_objects(image_np, detection_graph, sess)\ndetector_utils.draw_box_on_image(\n 1, 0.7, scores, boxes, im_width, im_height, image_np)\nplt.imshow(image_np)",
"0.9999454\n"
],
[
"(left, right, top, bottom) = (boxes[0][1] * im_width, boxes[0][3] * im_width,\n boxes[0][0] * im_height, boxes[0][2] * im_height)\nblack = np.zeros((im_height, im_width, 3), np.uint8) \nblack1 = cv2.rectangle(black,(int(left),int(top)),(int(right),int(bottom)),(255, 255, 255), -1)\ngray = cv2.cvtColor(black,cv2.COLOR_BGR2GRAY) \nret,b_mask = cv2.threshold(gray,127,255, 0) \nfin = cv2.bitwise_and(black1, cap, mask = b_mask)\noutput = cv2.Canny(fin, 100, 200)\nimage, contours, hierarchy = cv2.findContours(output, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)\nimage = cv2.drawContours(image_np, contours, -1, (0, 255, 0), 2)\nplt.imshow(image)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ecf63bd3ac943f0f6b5cb6845c0e7d9f122562da | 6,037 | ipynb | Jupyter Notebook | Make_prediction_on_set.ipynb | m3at/coco-panoptic | 670d8d4e8c24e77394443b0b80b7198892ab26ef | [
"MIT"
] | 16 | 2018-12-20T14:03:54.000Z | 2021-01-22T23:37:31.000Z | Make_prediction_on_set.ipynb | m3at/coco-panoptic | 670d8d4e8c24e77394443b0b80b7198892ab26ef | [
"MIT"
] | 2 | 2018-12-28T04:58:19.000Z | 2019-01-07T03:39:38.000Z | Make_prediction_on_set.ipynb | m3at/coco-panoptic | 670d8d4e8c24e77394443b0b80b7198892ab26ef | [
"MIT"
] | 3 | 2019-02-27T05:06:59.000Z | 2019-07-07T05:56:36.000Z | 30.80102 | 98 | 0.549114 | [
[
[
"import numpy as np\nimport seaborn as sns\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom pathlib import Path\n\nimport toolbox\nimport fcn\nimport yaml\nimport shutil\nimport sys\n\nimport chainer\n\n%matplotlib inline\nsns.set()\n\n%load_ext autoreload\n# Note: this reload all lib at each cell exec, just for convenience\n%autoreload 2 \n\nchainer.print_runtime_info()",
"_____no_output_____"
],
[
"%%time\nfrom models import InstanceSeg, SemanticSeg, PanopticSeg\n\nprint(\"Preparing instance segmentation model...\", end=\" \")\ninstaseg = InstanceSeg(\n \"./2018-12-03_23-24/20181203_232507/params.yaml\",\n \"./2018-12-03_23-24/20181203_232507/snapshot_model_45000.npz\",\n gpu=0,\n)\nprint(\"Done\")\n\nprint(\"Preparing semantic segmentation model...\", end=\" \")\nsemaseg = SemanticSeg(\n \"./toolbox/deeplab/config/cocostuff164k.yaml\",\n \"./cocostuff164k_iter100k.pth\",\n gpu=0,\n)\nprint(\"Done\")\n\nprint(\"Preparing panotpic model...\", end=\" \")\npanoseg = PanopticSeg(instaseg, semaseg, thresh=0.7, frac=0.2)\nprint(\"Done\")",
"_____no_output_____"
],
[
"# Create a minival dataset, to quickly confirm things work\ndest_name = \"minival\"\nMINIVAL_SIZE = 100\n\nroot = Path(\"./dataset/annotations/panoptic_val2017/\")\nr_labels = np.random.choice(list(root.glob(\"*.png\")), size=MINIVAL_SIZE, replace=False)\nr_images = [f.parent.parent.parent.joinpath(\n \"val2017\").joinpath(\"{}.jpg\".format(f.stem)) for f in r_labels]\n\nr_labels[0].parent.parent.joinpath(dest_name).mkdir(exist_ok=True, parents=True)\nr_images[0].parent.parent.joinpath(dest_name).mkdir(exist_ok=True, parents=True)\n\ncp_r_labels = [f.parent.parent.joinpath(dest_name).joinpath(f.name) for f in r_labels]\ncp_r_images = [f.parent.parent.joinpath(dest_name).joinpath(f.name) for f in r_images]\n\nr_labels[0], cp_r_labels[0], r_images[0], cp_r_images[0]\n\nimport shutil\nfor src, dst in zip(r_labels, cp_r_labels):\n shutil.copyfile(src, dst)\n \nfor src, dst in zip(r_images, cp_r_images):\n shutil.copyfile(src, dst)",
"_____no_output_____"
],
[
"from tqdm import tqdm\nimport warnings\nimport json\n\ndef predict_and_save(img, img_id, img_dir, tmp_json): \n segment, RGB = panoseg.predict(img, img_id)\n \n skimage.io.imsave(img_dir.joinpath(segment[\"file_name\"]), RGB)\n with open(tmp_json.joinpath(\"{}.json\".format(img_id)), \"w\") as f:\n json.dump(segment, f)\n\n# Make predictions for all test set.\n# Takes a while (~30h), so to make it more practical\n# it can be interrupted and continued.\n\nout_path = Path(\"./panopticapi/sample_data/\")\nsplit_name = \"panoptic_test2017_drcnn\"\n\n# Prepare directories for intermediate results\nimg_dir = out_path.joinpath(split_name)\nimg_dir.mkdir(parents=True, exist_ok=True)\ntmp_json = out_path.joinpath(\"{}_separated_json\".format(split_name))\ntmp_json.mkdir(parents=True, exist_ok=True)\n\n# Input images\nin_path = Path(\"./dataset/test2017/\")\nimgs = list(in_path.glob(\"*.jpg\"))\n\nerrs = []\n\n_json_cache = out_path.joinpath(\"{}_separated_json\".format(split_name))\nfor ip in tqdm(imgs):\n id_img = int(ip.stem.lstrip(\"0\"))\n \n if _json_cache.joinpath(\"{}.json\".format(id_img)).exists():\n continue\n try:\n img = skimage.io.imread(str(ip))\n if img.ndim == 2:\n img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)\n\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n predict_and_save(img, id_img, img_dir, tmp_json)\n except Exception as e:\n errs.append({\"image\": str(ip), \"error\": str(e)})\n\nif len(errs) > 0:\n with open(out_path.joinpath(\"{}_error_log.json\".format(split_name)), \"w\") as f:\n json.dump(errs, f)",
"_____no_output_____"
],
[
"# Combine all predictions\n\nbuff = []\nfor j in tmp_json.glob(\"*.json\"):\n with j.open() as f:\n buff.append(json.load(f))\n\nwith open(out_path.joinpath(\"{}.json\".format(split_name)), \"w\") as f:\n json.dump({\"annotations\": buff}, f)\n\n# shutil.rmtree(tmp_json)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecf64cc5136a5d994de258dda9c5b2336fe43dc9 | 11,263 | ipynb | Jupyter Notebook | examples/JSON/json example.ipynb | visiologyofficial/vixtract | 8235d6022fa0e1baa1cf5b05667e45c4d498cdc4 | [
"BSD-3-Clause"
] | 37 | 2020-07-05T19:27:49.000Z | 2021-12-30T20:56:47.000Z | examples/JSON/json example.ipynb | visiologyofficial/vixtract | 8235d6022fa0e1baa1cf5b05667e45c4d498cdc4 | [
"BSD-3-Clause"
] | 2 | 2020-11-25T05:23:52.000Z | 2022-03-30T10:15:59.000Z | examples/JSON/json example.ipynb | visiologyofficial/vixtract | 8235d6022fa0e1baa1cf5b05667e45c4d498cdc4 | [
"BSD-3-Clause"
] | 8 | 2020-07-15T11:20:28.000Z | 2021-12-19T09:30:05.000Z | 30.276882 | 135 | 0.412501 | [
[
[
"import petl as etl # для загрузки и обработки данных\nimport json # для чтения файла json \nimport pandas as pd # для выгрузки таблицы в postgresql\nimport sqlalchemy # для создания подключения к базе данных",
"_____no_output_____"
],
[
"# выгружаем данные из json файла\nwith open('example.json', 'r', encoding='utf-8') as f: \n text = json.load(f)",
"_____no_output_____"
],
[
"type(text[0])",
"_____no_output_____"
],
[
"# функция для установки правильного формата дат\ndef make_date(item):\n day = item['bibliography']['publication']['day']\n month = item['bibliography']['publication']['month']\n year = item['bibliography']['publication']['year']\n return f\"{year:04d}-{month:02d}-{day:02d}\"",
"_____no_output_____"
],
[
"# переструктурирование json файла в словарь python\ndf_dict = [{\n 'name': item['bibliography']['author']['name'], \n 'title': item['bibliography']['title'], \n 'publication': make_date(item),\n 'downloads': item['metadata']['downloads'],\n 'words': item['metrics']['statistics']['words']}\n for item in text]",
"_____no_output_____"
],
[
"table = etl.fromdicts(df_dict) # создание таблицы petl",
"_____no_output_____"
],
[
"table = etl.addfield(table, 'Last_name_letter', lambda x: x['name'][0]) # добавление столбца с первой буквой фамилии",
"_____no_output_____"
],
[
"table.toxlsx('example.xlsx', mode='overwrite') # экспорт в Excel файл",
"_____no_output_____"
],
[
"engine = sqlalchemy.create_engine('postgresql://{user}:{user_password}@{url}:{port}/{database_name}') # подключение к базе данных",
"_____no_output_____"
],
[
"df = pd.DataFrame(columns=table[0], data=table[1:]) # создание DataFrame из petl-таблицы",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"# экспорт в базу данных\ndf.to_sql('books_json', engine, index=False, if_exists='replace', dtype={\n 'name': sqlalchemy.VARCHAR(50),\n 'title': sqlalchemy.VARCHAR(255),\n 'downloads': sqlalchemy.Integer(),\n 'words': sqlalchemy.Integer(),\n 'publication': sqlalchemy.Date(),\n 'Last_name_letter': sqlalchemy.CHAR()\n})",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf650d03a0f02f6a40549356e7fdfd900111a14 | 399,220 | ipynb | Jupyter Notebook | notebooks/scenario/fanout/example-graphml-fanout.ipynb | cfalguiere/dependencynet | 485a9d7fe39988b7b733f04e7980382a1e0db8f8 | [
"MIT"
] | null | null | null | notebooks/scenario/fanout/example-graphml-fanout.ipynb | cfalguiere/dependencynet | 485a9d7fe39988b7b733f04e7980382a1e0db8f8 | [
"MIT"
] | 16 | 2021-05-01T11:44:35.000Z | 2021-06-12T22:25:23.000Z | notebooks/scenario/fanout/example-graphml-fanout.ipynb | cfalguiere/dependencynet | 485a9d7fe39988b7b733f04e7980382a1e0db8f8 | [
"MIT"
] | null | null | null | 28.149767 | 210 | 0.507951 | [
[
[
"# Full Scenario for one output to many input\n\n### Purpose\n- Demonstrate graph connecting one output to multiple input\n- tests issue https://github.com/cfalguiere/dependencynet/issues/3",
"_____no_output_____"
],
[
"## Imports modules and does some configurations",
"_____no_output_____"
]
],
[
[
"from os import path, makedirs\n\nimport pandas as pd",
"_____no_output_____"
],
[
"import logging\nlogging.basicConfig()\nlogger = logging.getLogger('dependencynet')\nlogger.setLevel(logging.WARN)",
"_____no_output_____"
],
[
"# clarification for linter\nfrom IPython.display import display",
"_____no_output_____"
],
[
"# remove these lines if you use the pypi package\nimport sys\nsys.path.append(\"../../..\") # go to parent dir",
"_____no_output_____"
],
[
"from dependencynet.schema import SchemaBuilder\nfrom dependencynet.model import ModelBuilder\nfrom dependencynet.network.graphbuilder import GraphBuilder, LevelNode, InputNode, OutputNode\nfrom dependencynet.network.stylebuilder import StyleBuilder\nfrom dependencynet.network.graphviewer import GraphViewer\nfrom dependencynet.network.graphml import GraphMLConverter",
"_____no_output_____"
]
],
[
[
"## Loads some source data\n\nThe following code load some file. The properties list declares the names of the columns of the csv file. Finally a pandas datasource is created.",
"_____no_output_____"
]
],
[
[
"filename = path.join('resources', 'fanout.csv')\ndata = pd.read_csv(filename, delimiter=';')\nproperties = ['L1', 'L2', 'L3', 'RO', 'RI']\nsource_df = pd.DataFrame(data, columns=properties)",
"_____no_output_____"
]
],
[
[
"#### loaded datasource",
"_____no_output_____"
]
],
[
[
"display(source_df)",
"_____no_output_____"
]
],
[
[
"## Describes and build the data model\n\nThis section turn the data source into a tree by connecting related items.\n\nTree nodes fall on 2 categories, levels and resources.\n- L1, L2, L3 are hierarchival levels.\n- RO and RI are resources of the lower level. They are respectivelly outputs and inputs of the lower level node.\n\nIn the data source, REF-out is the output of REF and the input for the 3 others items A, B, and C\n\nThe schema definition provides additional informations\n- the second parameter is a short name used to compute unique ids for each node\n- RO and RI will be connected so that an RO is connected to an RI having the same name. The parameter role indicates the role in the connection and connect_id_name indicates the type name of the new link.",
"_____no_output_____"
]
],
[
[
"schema_otm = SchemaBuilder().level('L1', 'L1').level('L2', 'L2').level('L3', 'L3') \\\n .resource('RI', 'RI', role='INPUT', connect_id_name='R') \\\n .resource('RO', 'RO', role='OUTPUT', connect_id_name='R') \\\n .connect('RO', 'RI') \\\n .render()\n\nmodel = ModelBuilder().from_compact(source_df) \\\n .with_schema(schema_otm) \\\n .render()",
"_____no_output_____"
]
],
[
[
"## builds the graph network\n\nThis section builds a graph from the Data Model.\n\nIt turns each item into Node object and \n- it connects children to their parent in the hierarchical levels \n- it connects the lower level to its resources\n- it links connected resources, for instance RO and RI\n\nThe followinf code declares the Node class, the mapping from schema type to Node classes and build the graph.",
"_____no_output_____"
],
[
"#### Node classes",
"_____no_output_____"
]
],
[
[
"class L1Node(LevelNode):\n def __init__(self, properties):\n super().__init__(properties, 'L1')",
"_____no_output_____"
],
[
"class L2Node(LevelNode):\n def __init__(self, properties):\n super().__init__(properties, 'L2')",
"_____no_output_____"
],
[
"class L3Node(LevelNode):\n def __init__(self, properties):\n super().__init__(properties, 'L3')",
"_____no_output_____"
],
[
"class RINode(InputNode):\n def __init__(self, properties):\n super().__init__(properties, 'RI', 'R')",
"_____no_output_____"
],
[
"class RONode(OutputNode):\n def __init__(self, properties):\n super().__init__(properties, 'RO', 'R')",
"_____no_output_____"
],
[
"class_mapping = {'L1': L1Node, 'L2': L2Node, 'L3': L3Node, 'RO': RONode, 'RI': RINode}",
"_____no_output_____"
],
[
"graph_model = GraphBuilder().with_types(class_mapping).with_model(model).render()",
"_____no_output_____"
]
],
[
[
"#### resulting graph model",
"_____no_output_____"
]
],
[
[
"graph_model.pretty_print()",
"_____no_output_____"
],
[
"# you may alos check nodes and edges details with the following lines\n# display(graph_model.G.nodes)\n# display(graph_model.G.edges)",
"_____no_output_____"
]
],
[
[
"## Shows the graph in cytoscape\n\nThis section infers a cytoscape rendering style from the schema and show the cytoscape graph. You can check that REF-Out links REF to A, B, and C\n\n",
"_____no_output_____"
],
[
"#### see also\nThe graph use a subset of the Cytoscape graph configuration.\n\nYou may \n- choose a layout : dagre, klay\n- choose an orientation : LR (Left to Right), TB (Top to Bottom)\n\nIpycytoscape documentation https://ipycytoscape.readthedocs.io/en/latest/.\n\nYou may also refer to Cytoscape.js documentation.\n- Cytoscape layouts : https://blog.js.cytoscape.org/2020/05/11/layouts/",
"_____no_output_____"
]
],
[
[
"sb = StyleBuilder(schema_otm)\ngraph_style = sb.render()",
"_____no_output_____"
],
[
"#import matplotlib.pyplot as plt\n#%matplotlib inline",
"_____no_output_____"
],
[
"cytoscapeobj = GraphViewer(graph_model).render('dagre', graph_style, 'LR')",
"_____no_output_____"
],
[
"display(cytoscapeobj)",
"_____no_output_____"
]
],
[
[
"## Exports the graph to GraphML\n\nThe cytoscape widget is uneasy to export as a file. \n\nYou may want to export the graph as GraphML.\n\nThis section exports a PyYed GrapmML file.\n\nHow the show the graph in Yed \n- Open the generated file in Yed\n- Use options Tools / Fit Node to Layer to size the nodes\n- Use options Layout / Hierachical or some other layout to place the nodes ",
"_____no_output_____"
]
],
[
[
"dirname = path.join('output')\nmakedirs(dirname, exist_ok=True)\nfilename = path.join(dirname, 'fanout-yed.graphml')\nGraphMLConverter(graph_model, graph_style, schema_otm).save(filename)",
"_____no_output_____"
],
[
"print('Done')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecf65e857ee7bd8fe5c0dc7a4d84a2ea660ea6d8 | 13,725 | ipynb | Jupyter Notebook | examples/example_data/Dev.ipynb | lcopey/node_editor | 04d56ae4c7f2149e46903d5dd2e46f3906ef69e6 | [
"MIT"
] | 1 | 2021-04-30T11:28:42.000Z | 2021-04-30T11:28:42.000Z | examples/example_data/Dev.ipynb | lcopey/node_editor | 04d56ae4c7f2149e46903d5dd2e46f3906ef69e6 | [
"MIT"
] | null | null | null | examples/example_data/Dev.ipynb | lcopey/node_editor | 04d56ae4c7f2149e46903d5dd2e46f3906ef69e6 | [
"MIT"
] | null | null | null | 24.72973 | 137 | 0.498142 | [
[
[
"%config Completer.use_jedi = False\nimport csv\nimport io\nimport pandas as pd\nimport numpy as np\nfrom typing import Union",
"_____no_output_____"
],
[
"sniff = csv.Sniffer()",
"_____no_output_____"
],
[
"# file_path = '../../../Standard_Comparaison/Silane.csv'\nfile_path = '../../../Standard_Comparaison/Export/Test/Raw_datas/Composition.csv'\nwith open(file_path, 'r') as f:\n print(f.encoding)\n dialect = sniff.sniff(f.read(4096), delimiters=';, \\t')\n print(dialect)\ndata = pd.read_csv(file_path, dialect=dialect, encoding=f.encoding, index_col=[0, 1], header=[0, 1])\n# data = pd.read_csv(file_path, dialect=dialect, encoding=f.encoding, )\n# data['MatCode'] = data.MatCode.astype('Float64')\n# data.loc[6, 'MatCode'] = '*'",
"cp1252\n<class 'csv.Sniffer.sniff.<locals>.dialect'>\n"
],
[
"from pandasgui import show",
"_____no_output_____"
],
[
"show(data)",
"_____no_output_____"
],
[
"columns = pd.MultiIndex.from_tuples([(i, f'level_1_{j}', f'level_2_{k}') for i in range(2) for j in range(3) for k in range(5)])\n# level_values = [list(value) for value in zip(*columns)]",
"_____no_output_____"
],
[
"test = columns[0]",
"_____no_output_____"
],
[
"array = [bytes(value) if type(value) != str else bytes(value, 'utf-8') for value in test]",
"_____no_output_____"
],
[
"bytearray()",
"_____no_output_____"
],
[
"level_values = np.array([list(value) for value in zip(*columns)], dtype='O')",
"_____no_output_____"
],
[
"level_values.T.tolist()",
"_____no_output_____"
],
[
"outer_level = np.array(level_values[0])",
"_____no_output_____"
],
[
"outer_level",
"_____no_output_____"
],
[
"set([list(value) for value in zip(*columns)][0])",
"_____no_output_____"
],
[
"outer_level = np.array([list(value) for value in zip(*columns)][0])",
"_____no_output_____"
],
[
"outer_level == 0",
"_____no_output_____"
],
[
"columns = columns[np.random.randint(0, len(columns), size=len(columns))]",
"_____no_output_____"
],
[
"df = pd.DataFrame(np.random.rand(3, len(columns)),)",
"_____no_output_____"
],
[
"df.columns = df.columns.astype(str)",
"_____no_output_____"
],
[
"df = pd.DataFrame(np.random.rand(3, len(columns)), columns=columns)\n\n# get levels \n# nlevels = df.columns.nlevels\ndef _get_spans(index: Union[pd.Index, pd.MultiIndex]):\n if isinstance(index, pd.MultiIndex):\n levels = np.stack([np.array(value) for value in index]).T\n elif isinstance(index, pd.Index):\n levels = np.array([df.columns])\n\n for nlevel, level in enumerate(levels):\n # detect where level are discontinuous\n spans = list(np.where(level[1:] != level[:-1])[0])\n # add the first and last cell if necessary\n if 0 not in spans:\n spans.insert(0, -1)\n if len(level) - 1 not in spans:\n spans.append(len(level) - 1)\n # only check if span if larger thant one cell\n for n in np.where(np.diff(list(spans)) > 1)[0]:\n print(spans[n]+1, spans[n+1])\n \n_get_spans(df.columns)",
"_____no_output_____"
],
[
"if isinstance(df.columns, pd.MultiIndex):\n N = len(df.columns[0])\nelse:\n N = 1\n\nfor level in range(N): # Iterates over the levels\n# print(level)\n # Find how many segments the MultiIndex has\n if isinstance(df.columns, pd.MultiIndex):\n arr = [df.columns[i][level] for i in range(len(df.columns))]\n else:\n arr = df.columns\n \n print(arr)\n\n # Holds the starting index of a range of equal values.\n # None means it is not currently in a range of equal values.\n match_start = None\n\n for col in range(1, len(arr)): # Iterates over cells in row\n # Check if cell matches cell to its left\n if arr[col] == arr[col - 1]:\n if match_start is None:\n match_start = col - 1\n # If this is the last cell, need to end it\n if col == len(arr) - 1:\n match_end = col\n span_size = match_end - match_start + 1\n# self.setSpan(level, match_start, 1, span_size)\n print(match_start, match_end)\n else:\n if match_start is not None:\n match_end = col - 1\n span_size = match_end - match_start + 1\n# self.setSpan(level, match_start, 1, span_size)\n print(match_start, match_end) \n match_start = None\n",
"_____no_output_____"
]
],
[
[
"- attempt to cast to out_type\n- if cast to str, just do\n na type for str ?\n- if cast to numeric :\n - pd.to_numeric with coerce to detect potential issue and store them\n - cast to out_type",
"_____no_output_____"
]
],
[
[
"new_data.MatCode == data.MatCode.astype(int)",
"_____no_output_____"
],
[
"new_data",
"_____no_output_____"
]
],
[
[
"# Import ?",
"_____no_output_____"
]
],
[
[
"import os, glob\nfrom os import walk\nfrom os.path import dirname, join, isfile, basename, isdir, exists, normpath",
"_____no_output_____"
],
[
"basefolder = dirname('./nodes/__init__.py')",
"_____no_output_____"
],
[
"modules = []\nfor path in glob.glob(join(basefolder, '*')):\n if isfile(path) and not path.endswith('__init__.py'):\n modules.append(basename(path)[:-3])\n elif isdir(path) and exists(join(path, '__init__.py')):\n modules.append(normpath(basename(path)))",
"_____no_output_____"
],
[
"modules",
"_____no_output_____"
],
[
"import xml.etree.ElementTree as ET\nimport re\nimport pandas as pd",
"_____no_output_____"
],
[
"pd.read_csv('./Stats/Prop_dyn_generation/Models/Latent_description.csv', index_col=[0]).loc['80%']*pd.DataFrame([[0.5]*6])",
"_____no_output_____"
],
[
"def parse_gpx(file):\n tree = ET.parse(file)\n root = tree.getroot()\n try:\n ns = re.match('\\{.*\\}', root.tag).group(0)\n except:\n ns = ''\n\n coords = {'lat': [], 'lon': [], 'ele': [], 'time': []}\n # Itère sur chaque point du trajet\n for element in tree.iter(ns+'trkpt'):\n coords['lat'].append(element.attrib['lat'])\n coords['lon'].append(element.attrib['lon'])\n child = element.getchildren()\n for value in child:\n if value.tag == ns+'ele': coords['ele'].append(value.text)\n if value.tag == ns+'time': coords['time'].append(value.text)\n \n return pd.DataFrame(coords).apply(pd.to_numeric, errors='ignore')",
"_____no_output_____"
],
[
"file = 'F:/Downloads/Randonnée/iti0584_crete_du_sancy.gpx'\ncoords = parse_gpx(file)",
"_____no_output_____"
],
[
"from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\nfrom plotly.graph_objs import *\ninit_notebook_mode(connected=True)\nplotly_config = {'showLink': False, 'displaylogo': False, 'modeBarButtonsToRemove': ['sendDataToCloud']}",
"_____no_output_____"
],
[
"data = [dict(\n type = 'scattergeo',\n locationmode = 'USA-states',\n lon = coords['lon'],\n lat = coords['lat'],\n mode = 'lines',\n line = dict(\n width = 1,\n color = 'red',\n )\n )]\nlayout = dict(\n title = file,\n showlegend = False, \n geo = dict(\n scope='europe',\n projection=dict( type='azimuthal equal area' ),\n showland = True,\n landcolor = 'rgb(243, 243, 243)',\n countrycolor = 'rgb(204, 204, 204)',\n ),\n )\n \nfig = dict( data=data, layout=layout )\niplot(fig, filename='d3-flight-paths', config=plotly_config)",
"_____no_output_____"
],
[
"import pilllow",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf664b70c0e2dc3115436b54ce38315e44c1a94 | 12,228 | ipynb | Jupyter Notebook | Untitled.ipynb | Rakshithadevi/Ex-01_DS_Data_Cleansing | 087c890ae02df06aaf413aea35f6413bef0ad754 | [
"MIT"
] | null | null | null | Untitled.ipynb | Rakshithadevi/Ex-01_DS_Data_Cleansing | 087c890ae02df06aaf413aea35f6413bef0ad754 | [
"MIT"
] | null | null | null | Untitled.ipynb | Rakshithadevi/Ex-01_DS_Data_Cleansing | 087c890ae02df06aaf413aea35f6413bef0ad754 | [
"MIT"
] | null | null | null | 36.831325 | 104 | 0.352143 | [
[
[
"import pandas as pd\ndf = pd.read_csv(\"Data_set.csv\")\ndf.head(10)\ndf.info()\ndf.isnull().sum()\ndf['show_name']=df['show_name'].fillna(df['show_name'].mode()[0])\ndf['aired_on']=df['aired_on'].fillna(df['aired_on'].mode()[0])\ndf['original_network']=df['original_network'].fillna(df['original_network'].mode()[0])\ndf.head()\ndf['rating']=df['rating'].fillna(df['rating'].mean())\ndf['current_overall_rank']=df['current_overall_rank'].fillna(df['current_overall_rank'].mean())\ndf.head()\ndf['watchers']=df['watchers'].fillna(df['watchers'].median())\ndf.head()\ndf.info()\ndf.isnull().sum()\ndf.to_csv('Data_set.csv')\ndf\n\n",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 100 entries, 0 to 99\nData columns (total 10 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 100 non-null int64 \n 1 show_name 100 non-null object \n 2 country 100 non-null object \n 3 num_episodes 100 non-null int64 \n 4 aired_on 100 non-null object \n 5 original_network 100 non-null object \n 6 rating 100 non-null float64\n 7 current_overall_rank 100 non-null float64\n 8 lifetime_popularity_rank 100 non-null int64 \n 9 watchers 100 non-null float64\ndtypes: float64(3), int64(3), object(4)\nmemory usage: 7.9+ KB\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 100 entries, 0 to 99\nData columns (total 10 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 100 non-null int64 \n 1 show_name 100 non-null object \n 2 country 100 non-null object \n 3 num_episodes 100 non-null int64 \n 4 aired_on 100 non-null object \n 5 original_network 100 non-null object \n 6 rating 100 non-null float64\n 7 current_overall_rank 100 non-null float64\n 8 lifetime_popularity_rank 100 non-null int64 \n 9 watchers 100 non-null float64\ndtypes: float64(3), int64(3), object(4)\nmemory usage: 7.9+ KB\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ecf665bed4c1ba402e4d45ada8d0b50ff637958d | 16,189 | ipynb | Jupyter Notebook | Documentation/Trial_Codes/.ipynb_checkpoints/operator_evolution-checkpoint.ipynb | IT-Department-Projects/DAA | e11f88965416d6c2128b72941b36a6bc2fb56c44 | [
"MIT"
] | null | null | null | Documentation/Trial_Codes/.ipynb_checkpoints/operator_evolution-checkpoint.ipynb | IT-Department-Projects/DAA | e11f88965416d6c2128b72941b36a6bc2fb56c44 | [
"MIT"
] | null | null | null | Documentation/Trial_Codes/.ipynb_checkpoints/operator_evolution-checkpoint.ipynb | IT-Department-Projects/DAA | e11f88965416d6c2128b72941b36a6bc2fb56c44 | [
"MIT"
] | 2 | 2017-05-07T04:23:06.000Z | 2019-12-05T04:14:13.000Z | 34.815054 | 1,160 | 0.5492 | [
[
[
"from operator import itemgetter, attrgetter\nimport random\nimport sys\nimport os\nimport math\nimport re",
"_____no_output_____"
],
[
"genetic_code = {\n '0000':'0',\n '0001':'1',\n '0010':'2',\n '0011':'3',\n '0100':'4',\n '0101':'5',\n '0110':'6',\n '0111':'7',\n '1000':'8',\n '1001':'9',\n '1010':'+',\n '1011':'-',\n '1100':'*',\n '1101':'/'\n }",
"_____no_output_____"
],
[
"solution_found = False\npopN = 100 # n number of chromos per population\ngenesPerCh = 75\nmax_iterations = 1000\ntarget = 1111.0\ncrossover_rate = 0.7\nmutation_rate = 0.05",
"_____no_output_____"
],
[
"\"\"\"Generates random population of chromos\"\"\"\ndef generatePop ():\n chromos, chromo = [], []\n for eachChromo in range(popN):\n chromo = []\n for bit in range(genesPerCh * 4):\n chromo.append(random.randint(0,1))\n chromos.append(chromo)\n return chromos",
"_____no_output_____"
],
[
"\"\"\"Takes a binary list (chromo) and returns a protein (mathematical expression in string)\"\"\"\ndef translate (chromo):\n protein, chromo_string = '',''\n need_int = True\n a, b = 0, 4 # ie from point a to point b (start to stop point in string)\n for bit in chromo:\n chromo_string += str(bit)\n for gene in range(genesPerCh):\n if chromo_string[a:b] == '1111' or chromo_string[a:b] == '1110': \n continue\n elif chromo_string[a:b] != '1010' and chromo_string[a:b] != '1011' and chromo_string[a:b] != '1100' and chromo_string[a:b] != '1101':\n if need_int == True:\n protein += genetic_code[chromo_string[a:b]]\n need_int = False\n a += 4\n b += 4\n continue\n else:\n a += 4\n b += 4\n continue\n else:\n if need_int == False:\n protein += genetic_code[chromo_string[a:b]]\n need_int = True\n a += 4\n b += 4\n continue\n else:\n a += 4\n b += 4\n continue\n if len(protein) %2 == 0:\n protein = protein[:-1]\n return protein",
"_____no_output_____"
],
[
"\"\"\"Evaluates the mathematical expressions in number + operator blocks of two\"\"\"\ndef evaluate(protein):\n a = 3\n b = 5\n output = -1\n lenprotein = len(protein) # i imagine this is quicker than calling len everytime?\n if lenprotein == 0:\n output = 0\n if lenprotein == 1:\n output = int(protein)\n if lenprotein >= 3:\n try :\n output = eval(protein[0:3])\n except ZeroDivisionError:\n output = 0\n if lenprotein > 4:\n while b != lenprotein+2:\n try :\n output = eval(str(output)+protein[a:b])\n except ZeroDivisionError:\n output = 0\n a+=2\n b+=2 \n return output",
"_____no_output_____"
],
[
"\"\"\"Calulates fitness as a fraction of the total fitness\"\"\"\ndef calcFitness (errors):\n fitnessScores = []\n totalError = sum(errors)\n i = 0\n # fitness scores are a fraction of the total error\n for error in errors:\n fitnessScores.append (float(errors[i])/float(totalError))\n i += 1\n return fitnessScores",
"_____no_output_____"
],
[
"def displayFit (error):\n bestFitDisplay = 100\n dashesN = int(error * bestFitDisplay)\n dashes = ''\n for j in range(bestFitDisplay-dashesN):\n dashes+=' '\n for i in range(dashesN):\n dashes+='+'\n return dashes",
"_____no_output_____"
],
[
"\"\"\"Takes a population of chromosomes and returns a list of tuples where each chromo is paired to its fitness scores and ranked accroding to its fitness\"\"\"\ndef rankPop (chromos):\n proteins, outputs, errors = [], [], []\n i = 1\n # translate each chromo into mathematical expression (protein), evaluate the output of the expression,\n # calculate the inverse error of the output\n print('%s: %s\\t=%s \\t%s %s') %('n'.rjust(5), 'PROTEIN'.rjust(30), 'OUTPUT'.rjust(10), 'INVERSE ERROR'.rjust(17), 'GRAPHICAL INVERSE ERROR'.rjust(105))\n for chromo in chromos: \n protein = translate(chromo)\n proteins.append(protein)\n \n output = evaluate(protein)\n outputs.append(output)\n \n try:\n error = 1/math.fabs(target-output)\n except ZeroDivisionError:\n global solution_found\n solution_found = True\n error = 0\n print('\\nSOLUTION FOUND') \n print('%s: %s \\t=%s %s') %(str(i).rjust(5), protein.rjust(30), str(output).rjust(10), displayFit(1.3).rjust(130))\n break\n else:\n #error = 1/math.fabs(target-output)\n errors.append(error)\n print('%s: %s \\t=%s \\t%s %s') %(str(i).rjust(5), protein.rjust(30), str(output).rjust(10), str(error).rjust(17), displayFit(error).rjust(105))\n i+=1 \n fitnessScores = calcFitness (errors) # calc fitness scores from the erros calculated\n pairedPop = zip ( chromos, proteins, outputs, fitnessScores) # pair each chromo with its protein, ouput and fitness score\n rankedPop = sorted ( pairedPop,key = itemgetter(-1), reverse = True ) # sort the paired pop by ascending fitness score\n return rankedPop",
"_____no_output_____"
],
[
"\"\"\" taking a ranked population selects two of the fittest members using roulette method\"\"\"\ndef selectFittest (fitnessScores, rankedChromos):\n while 1 == 1: # ensure that the chromosomes selected for breeding are have different indexes in the population\n index1 = roulette (fitnessScores)\n index2 = roulette (fitnessScores)\n if index1 == index2:\n continue\n else:\n break\n\n \n ch1 = rankedChromos[index1] # select and return chromosomes for breeding \n ch2 = rankedChromos[index2]\n return ch1, ch2",
"_____no_output_____"
],
[
"\"\"\"Fitness scores are fractions, their sum = 1. Fitter chromosomes have a larger fraction. \"\"\"\ndef roulette (fitnessScores):\n index = 0\n cumalativeFitness = 0.0\n r = random.random()\n \n for i in range(len(fitnessScores)): # for each chromosome's fitness score\n cumalativeFitness += fitnessScores[i] # add each chromosome's fitness score to cumalative fitness\n\n if cumalativeFitness > r: # in the event of cumalative fitness becoming greater than r, return index of that chromo\n return i",
"_____no_output_____"
],
[
"def crossover (ch1, ch2):\n # at a random chiasma\n r = random.randint(0,genesPerCh*4)\n return ch1[:r]+ch2[r:], ch2[:r]+ch1[r:]",
"_____no_output_____"
],
[
"def mutate (ch):\n mutatedCh = []\n for i in ch:\n if random.random() < mutation_rate:\n if i == 1:\n mutatedCh.append(0)\n else:\n mutatedCh.append(1)\n else:\n mutatedCh.append(i)\n #assert mutatedCh != ch\n return mutatedCh",
"_____no_output_____"
],
[
"\"\"\"Using breed and mutate it generates two new chromos from the selected pair\"\"\"\ndef breed (ch1, ch2):\n \n newCh1, newCh2 = [], []\n if random.random() < crossover_rate: # rate dependent crossover of selected chromosomes\n newCh1, newCh2 = crossover(ch1, ch2)\n else:\n newCh1, newCh2 = ch1, ch2\n newnewCh1 = mutate (newCh1) # mutate crossovered chromos\n newnewCh2 = mutate (newCh2)\n \n return newnewCh1, newnewCh2",
"_____no_output_____"
],
[
"\"\"\" Taking a ranked population return a new population by breeding the ranked one\"\"\"\ndef iteratePop (rankedPop):\n fitnessScores = [ item[-1] for item in rankedPop ] # extract fitness scores from ranked population\n rankedChromos = [ item[0] for item in rankedPop ] # extract chromosomes from ranked population\n\n newpop = []\n newpop.extend(rankedChromos[:popN/15]) # known as elitism, conserve the best solutions to new population\n \n while len(newpop) != popN:\n ch1, ch2 = [], []\n ch1, ch2 = selectFittest (fitnessScores, rankedChromos) # select two of the fittest chromos\n \n ch1, ch2 = breed (ch1, ch2) # breed them to create two new chromosomes \n newpop.append(ch1) # and append to new population\n newpop.append(ch2)\n return newpop\n \n \ndef configureSettings ():\n configure = input ('T - Enter Target Number \\tD - Default settings: ')\n match1 = re.search( 't',configure, re.IGNORECASE )\n if match1:\n global target\n target = input('Target int: ' )\n",
"_____no_output_____"
],
[
"def main(): \n configureSettings ()\n chromos = generatePop() #generate new population of random chromosomes\n iterations = 0\n\n while iterations != max_iterations and solution_found != True:\n # take the pop of random chromos and rank them based on their fitness score/proximity to target output\n rankedPop = rankPop(chromos) \n \n print('\\nCurrent iterations:', iterations)\n \n if solution_found != True:\n # if solution is not found iterate a new population from previous ranked population\n chromos = []\n chromos = iteratePop(rankedPop)\n \n iterations += 1\n else:\n break",
"_____no_output_____"
],
[
"if __name__ == \"__main__\":\n main()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf66c0750ec0de66cb8d95ddf3ea83083f29ab9 | 20,171 | ipynb | Jupyter Notebook | .ipynb_checkpoints/borrador-checkpoint.ipynb | pipesalas/resultados_plebiscito | 56af8f728c41db6290005df0a58f81b6f3b30a01 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/borrador-checkpoint.ipynb | pipesalas/resultados_plebiscito | 56af8f728c41db6290005df0a58f81b6f3b30a01 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/borrador-checkpoint.ipynb | pipesalas/resultados_plebiscito | 56af8f728c41db6290005df0a58f81b6f3b30a01 | [
"MIT"
] | null | null | null | 36.019643 | 178 | 0.36959 | [
[
[
"import geopandas as gpd\nimport pandas as pd",
"_____no_output_____"
],
[
"!ls /Users/pipe/Documents/Spike/CR2/datos/mapas_censo/Comunas/comunas.shp",
"\u001b[31mcomunas.CPG\u001b[m\u001b[m \u001b[31mcomunas.prj\u001b[m\u001b[m \u001b[31mcomunas.sbx\u001b[m\u001b[m \u001b[31mcomunas.shp.xml\u001b[m\u001b[m\n\u001b[31mcomunas.dbf\u001b[m\u001b[m \u001b[31mcomunas.sbn\u001b[m\u001b[m \u001b[31mcomunas.shp\u001b[m\u001b[m \u001b[31mcomunas.shx\u001b[m\u001b[m\n"
],
[
"gdf = gpd.read_file('/Users/pipe/Documents/Spike/CR2/datos/mapas_censo/Comunas/comunas.shp')\ngdf.rename(columns= {'cod_comuna': 'cod_com'}, inplace=True)\ngdf.head(1)",
"_____no_output_____"
],
[
"df = pd.read_csv('BBDD Plebiscito 2020 - CPE UDLA en base a SERVEL, 2020 - 2020.csv')\n",
"_____no_output_____"
],
[
"df_consolidado = df.merge(gdf, on=['cod_com'])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecf676026a1728e6b8c2a11a17c552ac42f53832 | 53,303 | ipynb | Jupyter Notebook | nonlinear_optimal_control.ipynb | sunericd/optimal_control_of_aging_in_complex_systems | 86621a41ea9610d918857af9a12a38b2ab85c61f | [
"MIT"
] | null | null | null | nonlinear_optimal_control.ipynb | sunericd/optimal_control_of_aging_in_complex_systems | 86621a41ea9610d918857af9a12a38b2ab85c61f | [
"MIT"
] | null | null | null | nonlinear_optimal_control.ipynb | sunericd/optimal_control_of_aging_in_complex_systems | 86621a41ea9610d918857af9a12a38b2ab85c61f | [
"MIT"
] | null | null | null | 168.148265 | 39,680 | 0.851228 | [
[
[
"# Nonlinear Optimal Control\n\nThis notebook contains code for reading in TOMLAB optimized control data along with cost-minimized data obtained from the model simulations (run on Harvad Odyssey2 cluster; please refer to Miscellaneous code files for more information on how to run the simulations on a SLURM-type cluster) and making the Figure 2E plot in the main text.",
"_____no_output_____"
]
],
[
[
"from model import *",
"_____no_output_____"
]
],
[
[
"## Figure 3: Optimal repair protocol for an interdependent network",
"_____no_output_____"
]
],
[
[
"from scipy.signal import savgol_filter\n\ncolors = ['#000080', '#FFA500', 'm', 'k', '#FFC0CB']\nmarkers = ['o', '^', 's', 'D', '*']\ntransparencies = [0.6, 0.6, 0.4, 0.5, 1.0]\n\nfilelist2=['Gilbert/0.1/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfilelist3=['Gilbert/0.15/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfilelist1=['Gilbert/0.0/ParamCurvesData/varya_f0.025_r0.01_a0.196_T100_step1_d0_depoff_N1000']\nfilelist=['Gilbert/0.2/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfilelist4=['Gilbert/0.25/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfilelist5=['Gilbert/0.05/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\n\n# new files\nfilelista=['Gilbert/0.025/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfilelistb=['Gilbert/0.075/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfilelistc=['Gilbert/0.125/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfilelistd=['Gilbert/0.175/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfileliste=['Gilbert/0.225/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\nfilelistf=['Gilbert/0.275/ParamCurvesData/varya_f0.025_r0.01_a19.6_T100_step1_d0_depoff_N1000']\n\nf = 0.025\nalpha=10\nT = 100\nparameter_list = np.arange(0,20,0.4)\n\n\ndef bin_data (x, y, n):\n '''\n x = array of x-value lists\n y = array of y-value lists\n n = number of points that each binned average will contain\n '''\n k = 0\n new_x = []\n new_y = []\n \n running_avg_x = 0\n running_avg_y = 0\n \n while k < len(x):\n if k%n == 0 and k>0:\n new_x.append(running_avg_x)\n new_y.append(running_avg_y)\n running_avg_x = 0\n running_avg_y = 0\n running_avg_x += x[k]/n\n running_avg_y += y[k]/n\n k+= 1\n \n return (new_x, new_y)\n\n\n#plt.figure(figsize=(6,3))\nplt.figure()\nfrom matplotlib.ticker import FormatStrFormatter\nfig, ax = plt.subplots()\nax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))\n\n\n \nfor n, filename in enumerate(filelist3):\n # open and read file data \n input_file_path = './Nonlinear/' + filename + '.csv'\n with open(input_file_path, 'rt') as tsvin:\n tsvin = csv.reader(tsvin, delimiter=',')\n row_list = list(tsvin)\n T1_list = [float(i) for i in row_list[1]]\n T2_list = [float(i) for i in row_list[2]]\n # Select the T curve to fit over\n #T_list_f = T2_list\nT1_list, new_prm_list = bin_data(T1_list, parameter_list, 3)\nplt.scatter(T1_list, new_prm_list, color=colors[3], marker=markers[3], alpha=transparencies[3], s=30, edgecolors='none', label='$I=0.15$') # 0.15\nT2_list, new_prm_list = bin_data(T2_list, parameter_list, 3)\nplt.scatter(T2_list, new_prm_list, color=colors[3], marker=markers[3], alpha=transparencies[3], s=30, edgecolors='none')#, label='$I=0.15$') # 0.15\n\n\nfor n, filename in enumerate(filelist2):\n # open and read file data \n input_file_path = './Nonlinear/' + filename + '.csv'\n with open(input_file_path, 'rt') as tsvin:\n tsvin = csv.reader(tsvin, delimiter=',')\n row_list = list(tsvin)\n T1_list = [float(i) for i in row_list[1]]\n T2_list = [float(i) for i in row_list[2]]\n # Select the T curve to fit over\n #T_list_e = T2_list\nT1_list, new_prm_list = bin_data(T1_list, parameter_list, 3)\nplt.scatter(T1_list, new_prm_list, color=colors[2], marker=markers[2], alpha=transparencies[2], s=30, edgecolors='none', label='$I=0.10$') # 0.1\nT2_list, new_prm_list = bin_data(T2_list, parameter_list, 
3)\nplt.scatter(T2_list, new_prm_list, color=colors[2], marker=markers[2], alpha=transparencies[2], s=30, edgecolors='none')#, label='$I=0.10$') # 0.1\n\n\nfor n, filename in enumerate(filelist5):\n # open and read file data \n input_file_path = './Nonlinear/' + filename + '.csv'\n with open(input_file_path, 'rt') as tsvin:\n tsvin = csv.reader(tsvin, delimiter=',')\n row_list = list(tsvin)\n T1_list = [float(i) for i in row_list[1]]\n T2_list = [float(i) for i in row_list[2]]\n # Select the T curve to fit over\n #T_list_h = T2_list\nT1_list, new_prm_list = bin_data(T1_list, parameter_list, 3)\nplt.scatter(T1_list, new_prm_list, color=colors[1], marker=markers[1], alpha=transparencies[1], s=30, edgecolors='none', label='$I=0.05$') # 0.05\nT2_list, new_prm_list = bin_data(T2_list, parameter_list, 3)\nplt.scatter(T2_list, new_prm_list, color=colors[1], marker=markers[1], alpha=transparencies[1], s=30, edgecolors='none')#, label='$I=0.05$') # 0.05\n\n\n\nfor n, filename in enumerate(filelist1):\n # open and read file data \n input_file_path = './Nonlinear/' + filename + '.csv'\n with open(input_file_path, 'rt') as tsvin:\n tsvin = csv.reader(tsvin, delimiter=',')\n row_list = list(tsvin)\n T1_list = [float(i) for i in row_list[1]]\n T2_list = [float(i) for i in row_list[2]]\n # Select the T curve to fit over\n #T_list_d = T2_list\nT1_list, new_prm_list = bin_data(T1_list, parameter_list, 3)\nplt.scatter(T1_list, new_prm_list, color=colors[0], marker=markers[0], alpha=transparencies[0], s=30, edgecolors='none', label='$I=0.00$') # 0\nT2_list, new_prm_list = bin_data(T2_list, parameter_list, 3)\nplt.scatter(T2_list, new_prm_list, color=colors[0], marker=markers[0], alpha=transparencies[0], s=30, edgecolors='none')#, label='$I=0.00$') # 0\n\n\n# Read in numerical results and plot\nI_thresh = 0.2\n\ndef extract(raw_string, start_marker, end_marker):\n start = raw_string.index(start_marker) + len(start_marker)\n end = raw_string.index(end_marker, start)\n return (raw_string[start:end])\n\nalpha_list = []\nT1_dict= {}\nT2_dict = {}\n\ndirs = [x[0] for x in os.walk('./TOMLAB_data/alpha/')]\ndirs = dirs[1:]\nfor d_idx, d in enumerate(dirs):\n files = [f for f in os.listdir(d)]\n for f_idx, f in enumerate(files):\n if 'nonlin_alpha_' in f:\n # Extract alpha value\n alpha = float(extract(d+'/'+f,'alpha_','.csv'))\n alpha_list.append(alpha)\n # Read I, T1, T2\n results_mat = np.genfromtxt(d+'/'+f,delimiter=',')\n I_vals = results_mat[0,:]\n T1_list = results_mat[1,:]\n T2_list = results_mat[2,:]\n \n for i, I in enumerate(I_vals):\n if I < I_thresh:\n if str(I) not in T1_dict:\n T1_dict[str(I)] = []\n T2_dict[str(I)] = []\n if d_idx == 0:\n T1_dict[str(I)].append(T1_list[i])\n T2_dict[str(I)].append(T2_list[i])\n else:\n T1_dict[str(I)][f_idx] += T1_list[i]\n T2_dict[str(I)][f_idx] += T2_list[i]\n\nk = 0\nfor i, I in enumerate(I_vals[::-1]):\n norm = 1/len(dirs)\n if I in [0., 0.05, 0.1, 0.15, 0.2]:\n if I < I_thresh:\n sorted_T1_lists = [list(x) for x in zip(*sorted(zip(alpha_list, T1_dict[str(I)]), key=lambda pair: pair[0]))]\n sorted_alpha_list = sorted_T1_lists[0]\n sorted_T1_list = sorted_T1_lists[1]\n sorted_T2_lists = [list(x) for x in zip(*sorted(zip(alpha_list, T2_dict[str(I)]), key=lambda pair: pair[0]))]\n sorted_T2_list = sorted_T2_lists[1]\n\n norm_T1 = norm*np.array(sorted_T1_list)\n norm_T2 = norm*np.array(sorted_T2_list)\n \n # Smoothen with SG filter\n norm_T1 = savgol_filter(norm_T1, 11, 2)\n norm_T2 = savgol_filter(norm_T2, 11, 2)\n \n # Prepend trivial case to extend analytic 
shading\n sorted_alpha_list = [0]+sorted_alpha_list\n norm_T2 = np.concatenate(([100],norm_T2))\n norm_T1 = np.concatenate(([0],norm_T1))\n \n if colors[::-1][k] == 'k':\n plt.plot(norm_T2, sorted_alpha_list, color=colors[::-1][k], alpha=0.65, linewidth=2.5)#, linestyle='--')\n plt.plot(norm_T1, sorted_alpha_list, color=colors[::-1][k], alpha=0.65, linewidth=2.5)#, linestyle='--')\n elif colors[::-1][k] == '#000080':\n plt.plot(norm_T2, sorted_alpha_list, color=colors[::-1][k], alpha=0.65, linewidth=2.5)\n plt.plot(norm_T1, sorted_alpha_list, color=colors[::-1][k], alpha=0.65, linewidth=2.5) \n else:\n plt.plot(norm_T2, sorted_alpha_list, color=colors[::-1][k], linewidth=2.5)#, linestyle='--')\n plt.plot(norm_T1, sorted_alpha_list, color=colors[::-1][k], linewidth=2.5)#, linestyle='--')\n \n # shading\n if I == 0:\n plt.fill_betweenx(sorted_alpha_list, norm_T1, norm_T2, color='#000080', alpha=0.05)\n k+=1\n \nplt.xlabel('Switching Times, $t$', fontsize=14)\nplt.ylabel('Cost of repair, '+r'$\\alpha$', fontsize=14)\nplt.tick_params(axis='both', which='major', labelsize=12)\nplt.tick_params(axis='both', which='minor', labelsize=12)\nplt.xlim(0,100)\nplt.ylim(0,18)\nplt.yticks([round(x,2) for x in np.arange(0,18,5.0)])\nplt.legend(loc='lower center', borderaxespad=0.5, fontsize=14)\nplt.tight_layout()\nplt.savefig('Fig3.png', dpi=800)\nplt.show()",
"C:\\Users\\edsun\\Anaconda3\\lib\\site-packages\\scipy\\signal\\_arraytools.py:45: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n b = a[a_slice]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf6768d138cff7211acb5dac5068c506ff83f23 | 25,550 | ipynb | Jupyter Notebook | Python Absolute Beginner/Module_3.1_Practice.ipynb | smcreset/pythonteachingcode | a4b71167c53dc07ae61a931b58d7313f9d0f6a9d | [
"MIT"
] | null | null | null | Python Absolute Beginner/Module_3.1_Practice.ipynb | smcreset/pythonteachingcode | a4b71167c53dc07ae61a931b58d7313f9d0f6a9d | [
"MIT"
] | null | null | null | Python Absolute Beginner/Module_3.1_Practice.ipynb | smcreset/pythonteachingcode | a4b71167c53dc07ae61a931b58d7313f9d0f6a9d | [
"MIT"
] | null | null | null | 31.897628 | 166 | 0.49816 | [
[
[
"# Module 3 Practice 1\n## Conditionals \n<font size=\"5\" color=\"#00A0B2\" face=\"verdana\"> <B>Student will be able to</B></font> \n- **control code flow with `if`... `else` conditional logic** \n - using Boolean string methods (`.isupper(), .isalpha(), .startswith()...`) \n - using comparision (`>, <, >=, <=, ==, !=`) \n - using Strings in comparisons \n\n## `if else`\n",
"_____no_output_____"
]
],
[
[
"# [ ] input avariable: age as digit and cast to int\n# if age greater than or equal to 12 then print message on age in 10 years \n# or else print message \"It is good to be\" age\n\nuser_age = int(input(\"How old are you? \"))\nif user_age >= 12:\n print(\"In ten years, you'll be\", user_age+10)\nelse:\n print(\"It is good to be\", user_age)\n \n\n",
"How old are you? 23\nIn ten years, you'll be 33\n"
],
[
"# [ ] input a number \n# if number IS a digit string then cast to int\n# print number \"greater than 100 is\" True/False\n# if number is NOT a digit string then message the user that \"only int is accepted\"\n\nrando = input(\"Give me a number: \")\n\nif rando.isdigit():\n rando_int = int(rando)\nelse:\n print(\"Only int is accepted.\")\n \nprint(\"The number you gave is greater than 100 is\", rando_int>100)\n",
"Give me a number: asdf\nOnly int is accepted.\nThe number you gave is greater than 100 is True\n"
]
],
[
[
"### Guessing a letter A-Z \n**check_guess()** takes 2 string arguments: **letter and guess** (both expect single alphabetical character) \n - if guess is not an alpha character print invalid and return False\n - test and print if guess is \"high\" or \"low\" and return False\n - test and print if guess is \"correct\" and return True",
"_____no_output_____"
]
],
[
[
"# [ ] create check_guess()\n# call with test\n\ndef check_guess():\n correct_answer = input(\"Give me a letter: \")\n if correct_answer.isalpha() is False:\n return \"False\"\n user_answer = input(\"Give me one letter: \")\n if user_answer.isalpha() is False:\n return \"False\"\n elif len(user_answer)>1:\n print(\"Sorry. I just asked for one letter.\")\n elif user_answer>correct_answer:\n print(\"Sorry. That's too high.\")\n elif user_answer<correct_answer:\n print(\"Sorry. That's too low.\")\n else:\n print(\"Well done! Good Guess!\")\n\ncheck_guess()\n",
"Give me a letter: 8\n"
],
[
"# [ ] call check_guess with user input\n\n\ndef check_guess():\n correct_answer = input(\"Give me a letter: \")\n if correct_answer.isalpha() is False:\n return \"False\"\n user_answer = input(\"Give me one letter: \")\n if user_answer.isalpha() is False:\n return \"False\"\n elif len(user_answer)>1:\n print(\"Sorry. I just asked for one letter.\")\n elif user_answer>correct_answer:\n print(\"Sorry. That's too high.\")\n elif user_answer<correct_answer:\n print(\"Sorry. That's too low.\")\n else:\n print(\"Well done! Good Guess!\")\n\ncheck_guess()\n\n",
"_____no_output_____"
]
],
[
[
"### Letter Guess\n**create letter_guess() function that gives user 3 guesses**\n- takes a letter character argument for the answer letter\n- gets user input for letter guess \n- calls check_guess() with answer and guess\n- End letter_guess if \n - check_guess() equals True, return True \n - or after 3 failed attempts, return False",
"_____no_output_____"
]
],
[
[
"# [ ] create letter_guess() function, call the function to test\n\ndef letter_guess():\n right_answer = \"c\"\n usr_answer = input(\"Give me a letter: \")\n if usr_answer != right_answer:\n usr_answer = input(\"Try again: \")\n if usr_answer != right_answer:\n usr_answer = input(\"Try one last time: \")\n if usr_answer != right_answer:\n print(\"I'm sorry. You're out of guesses.\")\n else:\n print(\"Well done!\")\n else:\n print(\"Well done!\")\n else:\n print(\"Well done!\")\n\nletter_guess()",
"Give me a letter: a\nTry again: a\nTry one last time: a\nI'm sorry. You're out of chances.\n"
]
],
[
[
"### Pet Conversation\n**ask the user for a sentence about a pet and then reply** \n- get user input in variable: about_pet\n- using a series of **if** statements respond with appropriate conversation\n - check if \"dog\" is in the string about_pet (sample reply \"Ah, a dog\")\n - check if \"cat\" is in the string about_pet\n - check if 1 or more animal is in string about_pet\n- no need for **else**'s\n- finish with thanking for the story",
"_____no_output_____"
]
],
[
[
"# [ ] complete pet conversation\n\nabout_pet = input(\"Tell me about your pet: \")\n\nif \"dog\" in about_pet.lower():\n print(\"Oh, my gosh, I love dogs!\")\nelif \"cat\" in about_pet.lower():\n print(\"Cats are the worst, and so are cat people.\")\nelif \"goat\" in about_pet.lower():\n print(\"OK. I did not expect that answer.\")f\n\nprint(\"Thank you for telling me about your pet.\")",
"Tell me about your pet: Goat\nOK. I did not expect that answer.\nThank you for telling me about your pet.\n"
]
],
[
[
"# Module 3 Practice 2\n## conditionals, type, and mathematics extended \n \n<font size=\"5\" color=\"#00A0B2\" face=\"verdana\"> <B>Student will be able to</B></font> \n- code more than two choices using **`elif`** \n- gather numeric input using type casting \n- perform subtraction, multiplication and division operations in code \n",
"_____no_output_____"
],
[
"# \n<font size=\"6\" color=\"#B24C00\" face=\"verdana\"> <B>Tasks</B></font>",
"_____no_output_____"
],
[
"### Rainbow colors\nask for input of a favorite rainbow color first letter: ROYGBIV \n\nUsing `if`, `elif`, and `else`: \n- print the color matching the letter \n - R = Red \n - O = Orange \n - Y = Yellow \n - G = Green\n - B = Blue\n - I = Indigo\n - V = Violet\n - else print \"no match\"\n",
"_____no_output_____"
]
],
[
[
"# [ ] complete rainbow colors\nR = \"Red\"\nO = \"Orange\"\nY = \"Yellow\"\nG = \"Green\"\nB = \"Blue\"\nI = \"Indigo\"\nV = \"Violet\"\n",
"_____no_output_____"
],
[
"# [ ] make the code above into a function rainbow_color() that has a string parameter, \n# get input and call the function and return the matching color as a string or \"no match\" message.\n# Call the function and print the return string.\n\ndef rainbow_color():\n favorite_color = input(\"Tell me the first letter of your favorite rainbow color: \")\n if len(favorite_color)>1:\n print(\"Start over. That's too many letters.\")\n elif favorite_color.lower().startswith(\"r\"):\n print(\"Red\")\n elif favorite_color.lower().startswith(\"o\"):\n print(\"Orange\")\n elif favorite_color.lower().startswith(\"y\"):\n print(\"Yellow\")\n elif favorite_color.lower().startswith(\"b\"):\n print(\"Blue\")\n elif favorite_color.lower().startswith(\"g\"):\n print(\"Green\")\n elif favorite_color.lower().startswith(\"i\"):\n print(\"Indigo\")\n elif favorite_color.lower().startswith(\"v\"):\n print(\"Violet\")\n else:\n print(\"No match\")\n \nrainbow_color()",
"Tell me the first letter of your favorite rainbow color: R\nRed\n"
]
],
[
[
"# \n**Create function age_20() that adds or subtracts 20 from your age for a return value based on current age** (use `if`) \n- call the funtion with user input and then use the return value in a sentence \nexample `age_20(25)` returns **5**: \n> \"5 years old, 20 years difference from now\"",
"_____no_output_____"
]
],
[
[
"# [ ] complete age_20()\n\ndef age_20():\n u_age = input(\"How old are you?: \")\n plus_20 = int(u_age) + 20\n print(\"Because you're\", u_age+\", you'll be\", plus_20, \"in 20 years.\")\n \nage_20()",
"How old are you?: 20\nBecause you're 20, you'll be 40 in 20 years.\n"
]
],
[
[
"**create a function rainbow_or_age that takes a string argument**\n- if argument is a digit return the value of calling age_20() with the str value cast as **`int`** \n- if argument is an alphabetical character return the value of calling rainbow_color() with the str\n- if neither return FALSE",
"_____no_output_____"
]
],
[
[
"# [ ] create rainbow_or_age()\ndef age_20(msg):\n u_age = msg\n plus_20 = int(u_age) + 20\n print(\"Because you're\", u_age+\", you'll be\", plus_20, \"in 20 years.\")\n \ndef rainbow_color(msg):\n favorite_color = msg\n if len(favorite_color)>1:\n print(\"Start over. That's too many letters.\")\n elif favorite_color.lower().startswith(\"r\"):\n print(\"Red\")\n elif favorite_color.lower().startswith(\"o\"):\n print(\"Orange\")\n elif favorite_color.lower().startswith(\"y\"):\n print(\"Yellow\")\n elif favorite_color.lower().startswith(\"b\"):\n print(\"Blue\")\n elif favorite_color.lower().startswith(\"g\"):\n print(\"Green\")\n elif favorite_color.lower().startswith(\"i\"):\n print(\"Indigo\")\n elif favorite_color.lower().startswith(\"v\"):\n print(\"Violet\")\n else:\n return False\n\ndef rainbow_or_age():\n u_input = input(\"Give me a letter or a number: \")\n if u_input.isnumeric():\n age_20(u_input)\n elif u_input.isalpha():\n rainbow_color(u_input)\n else:\n return False\n \nrainbow_or_age()",
"Give me a letter or a number: 3\nBecause you're 3, you'll be 23 in 20 years.\n"
],
[
"# [ ] add 2 numbers from input using a cast to integer and display the answer \n\ndef age_20(msg):\n u_age = msg\n plus_20 = int(u_age) + 20\n print(\"Because you're\", u_age+\", you'll be\", plus_20, \"in 20 years.\")\n \ndef rainbow_color(msg):\n favorite_color = msg\n if len(favorite_color)>1:\n print(\"Start over. That's too many letters.\")\n elif favorite_color.lower().startswith(\"r\"):\n print(\"Red\")\n elif favorite_color.lower().startswith(\"o\"):\n print(\"Orange\")\n elif favorite_color.lower().startswith(\"y\"):\n print(\"Yellow\")\n elif favorite_color.lower().startswith(\"b\"):\n print(\"Blue\")\n elif favorite_color.lower().startswith(\"g\"):\n print(\"Green\")\n elif favorite_color.lower().startswith(\"i\"):\n print(\"Indigo\")\n elif favorite_color.lower().startswith(\"v\"):\n print(\"Violet\")\n else:\n print(\"No color in the rainbow starts with that.\")\n\ndef rainbow_or_age():\n u_input = input(\"Give me a letter or a number: \")\n if u_input.isnumeric():\n u_input2 = input(\"Give me another number: \")\n print(int(u_input) + int(u_input2)) \n elif u_input.isalpha():\n rainbow_color(u_input)\n else:\n return False\n \nrainbow_or_age()",
"Give me a letter or a number: 3\nGive me another number: 44\n132\n"
],
[
"# [ ] Multiply 2 numbers from input using cast and save the answer as part of a string \"the answer is...\"\n# display the string using print\n\ndef age_20(msg):\n u_age = msg\n plus_20 = int(u_age) + 20\n print(\"Because you're\", u_age+\", you'll be\", plus_20, \"in 20 years.\")\n \ndef rainbow_color(msg):\n favorite_color = msg\n if len(favorite_color)>1:\n print(\"Start over. That's too many letters.\")\n elif favorite_color.lower().startswith(\"r\"):\n print(\"Red\")\n elif favorite_color.lower().startswith(\"o\"):\n print(\"Orange\")\n elif favorite_color.lower().startswith(\"y\"):\n print(\"Yellow\")\n elif favorite_color.lower().startswith(\"b\"):\n print(\"Blue\")\n elif favorite_color.lower().startswith(\"g\"):\n print(\"Green\")\n elif favorite_color.lower().startswith(\"i\"):\n print(\"Indigo\")\n elif favorite_color.lower().startswith(\"v\"):\n print(\"Violet\")\n else:\n print(\"No color in the rainbow starts with that.\")\n\ndef rainbow_or_age():\n u_input = input(\"Give me a letter or a number: \")\n if u_input.isnumeric():\n u_input2 = input(\"Give me another number: \")\n u_product = int(u_input) * int(u_input2)\n print(\"The answer is\", u_product)\n elif u_input.isalpha():\n rainbow_color(u_input)\n else:\n return False\n \nrainbow_or_age()",
"Give me a letter or a number: 3\nGive me another number: 4\nThe answer is 12\n"
],
[
"# [ ] get input of 2 numbers and display the average: (num1 + num2) divided by 2\n\ndef age_20(msg):\n u_age = msg\n plus_20 = int(u_age) + 20\n print(\"Because you're\", u_age+\", you'll be\", plus_20, \"in 20 years.\")\n \ndef rainbow_color(msg):\n favorite_color = msg\n if len(favorite_color)>1:\n print(\"Start over. That's too many letters.\")\n elif favorite_color.lower().startswith(\"r\"):\n print(\"Red\")\n elif favorite_color.lower().startswith(\"o\"):\n print(\"Orange\")\n elif favorite_color.lower().startswith(\"y\"):\n print(\"Yellow\")\n elif favorite_color.lower().startswith(\"b\"):\n print(\"Blue\")\n elif favorite_color.lower().startswith(\"g\"):\n print(\"Green\")\n elif favorite_color.lower().startswith(\"i\"):\n print(\"Indigo\")\n elif favorite_color.lower().startswith(\"v\"):\n print(\"Violet\")\n else:\n print(\"No color in the rainbow starts with that.\")\n\ndef rainbow_or_age():\n u_input = input(\"Give me a letter or a number: \")\n if u_input.isnumeric():\n u_input2 = input(\"Give me another number: \")\n u_avg = int(int(u_input) + int(u_input2))/2\n print(\"The average is\", int(u_avg))\n elif u_input.isalpha():\n rainbow_color(u_input)\n else:\n return False\n \nrainbow_or_age()",
"Give me a letter or a number: 4\nGive me another number: 6\nThe average is 5\n"
],
[
"# [ ] get input of 2 numbers and subtract the largest from the smallest (use an if statement to see which is larger)\n# show the answer\n\ndef age_20(msg):\n u_age = msg\n plus_20 = int(u_age) + 20\n print(\"Because you're\", u_age+\", you'll be\", plus_20, \"in 20 years.\")\n \ndef rainbow_color(msg):\n favorite_color = msg\n if len(favorite_color)>1:\n print(\"Start over. That's too many letters.\")\n elif favorite_color.lower().startswith(\"r\"):\n print(\"Red\")\n elif favorite_color.lower().startswith(\"o\"):\n print(\"Orange\")\n elif favorite_color.lower().startswith(\"y\"):\n print(\"Yellow\")\n elif favorite_color.lower().startswith(\"b\"):\n print(\"Blue\")\n elif favorite_color.lower().startswith(\"g\"):\n print(\"Green\")\n elif favorite_color.lower().startswith(\"i\"):\n print(\"Indigo\")\n elif favorite_color.lower().startswith(\"v\"):\n print(\"Violet\")\n else:\n print(\"No color in the rainbow starts with that.\")\n\ndef rainbow_or_age():\n u_input = input(\"Give me a letter or a number: \")\n if u_input.isnumeric():\n u_input2 = input(\"Give me another number: \")\n if u_input>u_input2: \n u_output = int(u_input) - int(u_input2)\n print(\"The answer is\", u_output)\n elif u_input<u_input2:\n u_output = int(u_input2) - int(u_input)\n print(\"The answer is\", u_output)\n elif u_input.isalpha():\n rainbow_color(u_input)\n else:\n return False\n \nrainbow_or_age()",
"Give me a letter or a number: 2\nGive me another number: 9\nThe answer is 7\n"
],
[
"# [ ] Divide a larger number by a smaller number and print the integer part of the result\n# don't divide by zero! if a zero is input make the result zero\n# [ ] cast the answer to an integer to cut off the decimals and print the result\n\ndef age_20(msg):\n u_age = msg\n plus_20 = int(u_age) + 20\n print(\"Because you're\", u_age+\", you'll be\", plus_20, \"in 20 years.\")\n \ndef rainbow_color(msg):\n favorite_color = msg\n if len(favorite_color)>1:\n print(\"Start over. That's too many letters.\")\n elif favorite_color.lower().startswith(\"r\"):\n print(\"Red\")\n elif favorite_color.lower().startswith(\"o\"):\n print(\"Orange\")\n elif favorite_color.lower().startswith(\"y\"):\n print(\"Yellow\")\n elif favorite_color.lower().startswith(\"b\"):\n print(\"Blue\")\n elif favorite_color.lower().startswith(\"g\"):\n print(\"Green\")\n elif favorite_color.lower().startswith(\"i\"):\n print(\"Indigo\")\n elif favorite_color.lower().startswith(\"v\"):\n print(\"Violet\")\n else:\n print(\"No color in the rainbow starts with that.\")\n\ndef rainbow_or_age():\n u_input = input(\"Give me a letter or a number: \")\n if u_input.isnumeric():\n u_input2 = input(\"Give me another number: \")\n if u_input>u_input2: \n if u_input2 == 0:\n print(0)\n else:\n u_output = int(u_input) / int(u_input2)\n print(\"The answer is\", int(u_output))\n elif u_input<u_input2:\n if u_input == 0:\n print(0)\n else:\n u_output = int(u_input2) / int(u_input)\n print(\"The answer is\", int(u_output))\n elif u_input.isalpha():\n rainbow_color(u_input)\n else:\n return False\n \nrainbow_or_age()",
"Give me a letter or a number: 4\nGive me another number: 5\nThe answer is 1\n"
]
],
[
[
"[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecf68ff67a7389de9772ed298a88814df4798b60 | 44,013 | ipynb | Jupyter Notebook | Kmeans.ipynb | bkoyuncu/notes | 0e660f46b7d17fdfddc2cad1bb60dcf847f5d1e4 | [
"MIT"
] | null | null | null | Kmeans.ipynb | bkoyuncu/notes | 0e660f46b7d17fdfddc2cad1bb60dcf847f5d1e4 | [
"MIT"
] | null | null | null | Kmeans.ipynb | bkoyuncu/notes | 0e660f46b7d17fdfddc2cad1bb60dcf847f5d1e4 | [
"MIT"
] | 1 | 2019-09-11T11:46:36.000Z | 2019-09-11T11:46:36.000Z | 330.924812 | 20,464 | 0.919069 | [
[
[
"## K-means clustering\n\nUnsupervised method\n\n- Initialize $k$ cluster centers\n\n- Associations: Find the points closest to each cluster center and form groups\n- Recalculate means: Set the cluster center to the mean of each group",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport pandas as pd\nfrom IPython import display\nimport matplotlib.pylab as plt\n\nimport time\nfrom IPython import display\n\nN = 100\nD = 2\n\n# Generate a data set\nX1 = np.random.randn(D,N/2)\nX2 = 0.8*np.random.randn(D,N/2) + 3*np.ones((D, N/2))\nX = np.hstack((np.mat(X1), np.mat(X2)))\n\nK = 5;\n\nmu = X[:,0:K]\n#plt.plot(mu[0,:],mu[1,:],'ro')\n\n# Number of epochs\nEP = 20\n\nfig = plt.figure(figsize=(8,10))\nplt.plot(X[0,:], X[1,:],'kx')\n\nax = fig.gca()\nln = plt.Line2D(xdata=mu[0,:], ydata=mu[1,:], marker='o', color='r',linestyle=None,linewidth=0)\nax.add_line(ln)\n\nfor e in range(EP):\n \n dist = np.zeros((N,K))\n for i in range(N):\n for c in range(K):\n err = X[:,i]-mu[:,c]\n dist[i,c] = float(err.T*err)\n \n # Assignments\n a = np.argmin(dist, axis=1)\n \n mu = np.mat(np.zeros((D,K)))\n count = np.zeros((K))\n for i,c in enumerate(a):\n count[c] += 1\n mu[:,c] = (count[c]-1)/count[c]*mu[:,c] + 1./count[c]*X[:,i]\n \n ln.set_xdata(mu[0,:])\n ln.set_ydata(mu[1,:])\n \n #plt.subplot(EP,1,e+1)\n display.clear_output(wait=True)\n display.display(plt.gcf())\n time.sleep(0.1)\n plt.plot(X[0,:], X[1,:],'kx')\n #plt.plot(mu[0,:],mu[1,:],'ro')\n\nplt.show()\n \n ",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
ecf6a028b94d9bf2b2744aae392111739159339a | 71,962 | ipynb | Jupyter Notebook | restaurant/openrice_allrestaurant.ipynb | youonf/location_intelligence | d454912de149cac0eec6020d6beed410e1d76914 | [
"MIT"
] | null | null | null | restaurant/openrice_allrestaurant.ipynb | youonf/location_intelligence | d454912de149cac0eec6020d6beed410e1d76914 | [
"MIT"
] | null | null | null | restaurant/openrice_allrestaurant.ipynb | youonf/location_intelligence | d454912de149cac0eec6020d6beed410e1d76914 | [
"MIT"
] | 1 | 2020-12-11T02:16:27.000Z | 2020-12-11T02:16:27.000Z | 67.824694 | 2,102 | 0.592493 | [
[
[
"import pandas as pd\nimport json\nfrom random import randint\nfrom time import sleep\nimport requests\nfrom tqdm import tqdm",
"_____no_output_____"
]
],
[
[
"### Clan the district, cuisine, dish columns",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('restaurant_clean.csv', index_col=0)",
"_____no_output_____"
],
[
"def stripper(text):\n return text.strip()",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df['district'] = df['district'].apply(lambda x: stripper(x))\ndf['cuisine'] = df['cuisine'].apply(lambda x: stripper(x))\ndf['dish'] = df['dish'].apply(lambda x: stripper(x))",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.to_csv('restaurant_clean.csv')",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 27615 entries, 0 to 27614\nData columns (total 9 columns):\nname 27615 non-null object\nreview 27615 non-null int64\nbookmark 27615 non-null int64\nsmile 27615 non-null int64\nsad 27615 non-null int64\naddress 27615 non-null object\ndistrict 27615 non-null object\ncuisine 27615 non-null object\ndish 27615 non-null object\ndtypes: int64(4), object(5)\nmemory usage: 2.1+ MB\n"
],
[
"import requests\n\nurl = \"https://www.als.ogcio.gov.hk/lookup\"\naddress = \"612-618 Nathan Rd\"\nparams = {\n \"q\":address\n}\n\nrequests.get(url, params=params) #200 means success",
"_____no_output_____"
],
[
"addresses = df['address'][:10]",
"_____no_output_____"
],
[
"add_dict = {i:v for i, v in enumerate(list(addresses))}",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"type(addresses.values)",
"_____no_output_____"
],
[
"def ogcioparser(address):\n data = {\"q\":address, \"n\":1} #n means only send 1 \n headers ={\"Accept\": \"application/json\"}\n api_url = \"https://www.als.ogcio.gov.hk/lookup\"\n res = requests.post(api_url, data=data, headers=headers, timeout=(5, 14))\n add = json.loads(res.text)\n \n return add['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Northing'],\\\n add['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Easting'],\\\n add['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Longitude'],\\\n add['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Latitude']",
"_____no_output_____"
],
[
"re_dict={}",
"_____no_output_____"
],
[
"def ogcioparser_dict(address):\n data = {\"q\":address, \"n\":1} #n means only send 1 \n headers ={\"Accept\": \"application/json\"}\n api_url = \"https://www.als.ogcio.gov.hk/lookup\"\n res = requests.post(api_url, data=data, headers=headers, timeout=(5, 14))\n add = json.loads(res.text)\n add_dict = {}\n add_dict['Northing'] = add['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Northing']\n add_dict['Easting'] = add['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Easting']\n add_dict['Longitude'] = add['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Longitude']\n add_dict['Latitude'] = add['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Latitude']\n return add_dict",
"_____no_output_____"
],
[
"addresses = df['address']",
"_____no_output_____"
],
[
"all_dict = {}",
"_____no_output_____"
],
[
"sleep_counter = 0",
"_____no_output_____"
],
[
"for i, a in enumerate(tqdm(addresses)):\n all_dict[i] = ogcioparser_dict(a)\n sleep_counter +=1\n if sleep_counter % 500 == 0:\n sleep(randint(5,10))",
" 0%| | 42/27615 [00:31<6:00:20, 1.28it/s]"
],
[
"df_parse = pd.DataFrame.from_dict(all_dict, orient='index')",
"_____no_output_____"
],
[
"df_parse",
"_____no_output_____"
],
[
"df_parse.to_csv('parse.csv')",
"_____no_output_____"
],
[
"a,b,c,d = ogcioparser(df['address'][0])\nprint(a,b,c,d)",
"817834 835831 114.1726 22.2994\n"
],
[
"json.loads(res.text)",
"_____no_output_____"
],
[
"add_1 = json.loads(res.text)",
"_____no_output_____"
],
[
"add_1['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Northing']",
"_____no_output_____"
],
[
"add_1['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Easting']",
"_____no_output_____"
],
[
"add_1['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Longitude']",
"_____no_output_____"
],
[
"add_1['SuggestedAddress'][0]['Address']['PremisesAddress']['GeospatialInformation'][0]['Latitude']",
"_____no_output_____"
],
[
"sleep(randint(10,100))",
"_____no_output_____"
],
[
"len(df['address'])",
"_____no_output_____"
],
[
"sleep_counter = 0",
"_____no_output_____"
],
[
"df_parse = pd.DataFrame(columns=['Longtitude','Latitude','Northing','Easting'])",
"_____no_output_____"
],
[
"df_parse['Longtitude'], df_parse['Latitude'], df_parse['Northing'], df_parse['Easting'] = df['address'].apply(lambda x: ogcioparser(x))",
"_____no_output_____"
],
[
"for i in tqdm(range(len(df['address']))):\n df_parse['Longtitude'], df_parse['Latitude'], df_parse['Northing'], df_parse['Easting'] = df['address'].apply(lambda x: ogcioparser(x))\n sleep_counter +=1\n if sleep_counter % 500 == 0:\n sleep(randint(5,10))",
"\n 0%| | 0/27615 [00:00<?, ?it/s]\u001b[A"
],
[
"!ipython nbconvert --to=python openrice_allrestaurant.ipynb",
"[TerminalIPythonApp] WARNING | Subcommand `ipython nbconvert` is deprecated and will be removed in future versions.\n[TerminalIPythonApp] WARNING | You likely want to use `jupyter nbconvert` in the future\n[NbConvertApp] Converting notebook openrice_allrestaurant.ipynb to python\n[NbConvertApp] Writing 4311 bytes to openrice_allrestaurant.py\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf6a8f4560a1bc75ba7b70389c8859fc3c66c96 | 76,363 | ipynb | Jupyter Notebook | Tax_Min_Method.ipynb | jjwalther08/tax-min-method | 39406fd1a72b49dfb390fbfbf916ada63299627b | [
"MIT"
] | null | null | null | Tax_Min_Method.ipynb | jjwalther08/tax-min-method | 39406fd1a72b49dfb390fbfbf916ada63299627b | [
"MIT"
] | 1 | 2021-08-25T07:19:55.000Z | 2021-08-25T07:19:55.000Z | Tax_Min_Method.ipynb | jjwalther08/tax-min-method | 39406fd1a72b49dfb390fbfbf916ada63299627b | [
"MIT"
] | null | null | null | 33.158055 | 426 | 0.381887 | [
[
[
"# Tax-Min-Method",
"_____no_output_____"
]
],
[
[
"pip install pandas-datareader",
"Requirement already satisfied: pandas-datareader in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (0.10.0)\nRequirement already satisfied: requests>=2.19.0 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pandas-datareader) (2.25.1)\nRequirement already satisfied: pandas>=0.23 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pandas-datareader) (1.2.0)\nRequirement already satisfied: lxml in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pandas-datareader) (4.6.3)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from requests>=2.19.0->pandas-datareader) (1.26.2)\nRequirement already satisfied: idna<3,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from requests>=2.19.0->pandas-datareader) (2.10)\nRequirement already satisfied: chardet<5,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from requests>=2.19.0->pandas-datareader) (4.0.0)\nRequirement already satisfied: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from requests>=2.19.0->pandas-datareader) (2020.12.5)\nRequirement already satisfied: python-dateutil>=2.7.3 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pandas>=0.23->pandas-datareader) (2.8.1)\nRequirement already satisfied: numpy>=1.16.5 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pandas>=0.23->pandas-datareader) (1.19.5)\nRequirement already satisfied: pytz>=2017.3 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pandas>=0.23->pandas-datareader) (2020.5)\nRequirement already satisfied: six>=1.5 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from python-dateutil>=2.7.3->pandas>=0.23->pandas-datareader) (1.15.0)\n\u001b[33mWARNING: You are using pip version 20.2.3; however, version 21.2.4 is available.\nYou should consider upgrading via the '/Library/Frameworks/Python.framework/Versions/3.9/bin/python3.9 -m pip install --upgrade pip' command.\u001b[0m\nNote: you may need to restart the kernel to use updated packages.\n"
],
[
"#Import required libraries\nimport pandas as pd\nimport numpy as np\nfrom datetime import date\nimport decimal\nfrom pandas_datareader import data as pdr\nfrom datetime import datetime",
"_____no_output_____"
],
[
"import os\nworking_directory = os.getcwd()",
"_____no_output_____"
]
],
[
[
"## Import CSV file and convert \"Date Acquired\" column to datetime ",
"_____no_output_____"
]
],
[
[
"path = working_directory + '/data/Min_Tax_Port.csv'\nd_parser = lambda x: datetime.strptime(x, '%m/%d/%Y')\ntax_min_port = pd.read_csv(path, parse_dates=[\"Date Acquired\"], date_parser=d_parser)\ntax_min_port",
"_____no_output_____"
]
],
[
[
"# Create variables that gives us the current date and date one year prior.\nThis logic is to determine the current date and then the date one year prior which would indicate any lots purchased before then (the date one year prior from current date) are definitely long term tax lots.\nSome brokerage firms platforms will actually allow you to define which tax lots are long or short, however, that is too easy and defining a function or method to determine this will be helpful for any data that does not identify the tax lots beforehand.",
"_____no_output_____"
]
],
[
[
"import datetime\n#date_now = datetime.datetime.now()\ndate_now = datetime.date.today()\nyear_ago = date_now.year - 1\n\ncurrent_date = date_now.strftime('%Y-%m-%d')\none_year_ago = date_now.replace(year=year_ago).strftime('%Y-%m-%d')\n\nprint(current_date)\nprint(one_year_ago)",
"2021-08-15\n2020-08-15\n"
]
],
[
[
"## Filter the dataframe to create two new datasets that specify all Long-Term lots and all Short-Term Lots\nThese are not really relevant for this project but just wanted to filter for good measure.",
"_____no_output_____"
]
],
[
[
"# Filter dataframe to show all tax lots that are considered long term gains/losses\nlong_term_lots = (tax_min_port['Date Acquired'] <= one_year_ago)\nlong_term = tax_min_port.loc[long_term_lots]\nlong_term",
"_____no_output_____"
],
[
"# Filter dateframe to include tax lots that are considered short term gains/losses\nshort_term_lots = (tax_min_port['Date Acquired'] > one_year_ago)\nshort_term = tax_min_port.loc[short_term_lots]\nshort_term",
"_____no_output_____"
]
],
[
[
"## Now filter the original dataframe to display four separate categories.\n 1.) Short-Term losses 2.) Long-Term losses 3.) Long-Term gains 4.) Short-Term gains.\n\nThese dataframes will also be filtered to display the lots with the highest cost bases (Cost/Share) in descending order.",
"_____no_output_____"
]
],
[
[
"#Filter dataframe to display short term losses and highest cost basis in descending order\nshort = (tax_min_port['Total Gain'] < 0) & (tax_min_port['Date Acquired'] > one_year_ago)\nshort_term_loss = tax_min_port.loc[short].sort_values(by='Cost/Share', ascending=False)\nshort_term_loss",
"_____no_output_____"
],
[
"#Filter dataframe to display long term losses and highest cost basis in descending order\nlong = (tax_min_port['Total Gain'] < 0) & (tax_min_port['Date Acquired'] <= one_year_ago)\nlong_term_loss = tax_min_port.loc[long].sort_values(by='Cost/Share', ascending=False)\nlong_term_loss",
"_____no_output_____"
],
[
"#Filter dataframe to display long term gains and highest cost basis in descending order\nlong = (tax_min_port['Total Gain'] > 0) & (tax_min_port['Date Acquired'] <= one_year_ago)\nlong_term_gain = tax_min_port.loc[long].sort_values(by='Cost/Share', ascending=False)\nlong_term_gain",
"_____no_output_____"
],
[
"#Filter dataframe to display short term gain\nshort = (tax_min_port['Total Gain'] > 0) & (tax_min_port['Date Acquired'] > one_year_ago)\nshort_term_gain = tax_min_port.loc[short].sort_values(by='Cost/Share', ascending=False)\nshort_term_gain",
"_____no_output_____"
]
],
[
[
"## Identify the total gain or loss for each new category. No practical use for this excercise, just helpful information to see.",
"_____no_output_____"
]
],
[
[
"short_term_loss['Total Gain'].sum()",
"_____no_output_____"
],
[
"long_term_loss['Total Gain'].sum()",
"_____no_output_____"
],
[
"long_term_gain['Total Gain'].sum()",
"_____no_output_____"
],
[
"short_term_gain['Total Gain'].sum()",
"_____no_output_____"
]
],
[
[
"## Exhaust each category before moving to the next, but within each category, lots with the highest cost basis are sold first.\nWill also append another column to each category to include the total proceeds from each lot. (Last Price * Quanity = Total Value). Although for this section only the Short-Term Loss category is being displayed.",
"_____no_output_____"
]
],
[
[
"short_term_loss[\"Total Value\"] = (short_term_loss[\"Last Price\"] * short_term_loss[\"Quanity\"])",
"_____no_output_____"
],
[
"short_term_loss",
"_____no_output_____"
]
],
[
[
"This is where it gets tricky. Consider a one-time $10,000 withdrawal. Looking at the previous dataframe I can tell that we will only need to iterate through the first category (Short Term Loss) to get to this value.\n\nBelow will display in essence a new proposal of all the tax lots that should be sold to realize short term losses from the highest cost basis first. However, I need to figure out a way for the proposal to consider if a partial lot need be sold to achieve the total withdrawal amount. Additonally, I need to write a function or loop that will be able to iterate through the remaining categories in order if applicable.",
"_____no_output_____"
]
],
[
[
"# Consider a total withdrawal amount and write a function that \nwithdrawal_amount = 10000",
"_____no_output_____"
],
[
"recommended_tax_lots = short_term_loss.loc[short_term_loss['Total Value'].cumsum().le(withdrawal_amount)]",
"_____no_output_____"
],
[
"recommended_tax_lots",
"_____no_output_____"
],
[
"recommended_tax_lots['Total Value'].sum()",
"_____no_output_____"
],
[
"proceeds_still_needed = withdrawal_amount - recommended_tax_lots['Total Value'].sum()\nproceeds_still_needed",
"_____no_output_____"
]
],
[
[
"In this instance above, the proposal still requires $827.27 to meet the $10,000 withdrawal amount. When viewing the next lot in the category, 36.09 shares of PLTR from the 2021-01-12 tax lot would need to be sold to achieve this.\n\nHow do we account for partial lots in the proposal???",
"_____no_output_____"
],
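[
"# A sketch of the approach described earlier (exhaust each category in order, highest cost basis\n# first), extended to answer the question above: the boundary lot is split by computing the\n# fractional number of shares still needed. build_proposal is a hypothetical helper, not part of\n# the original workflow; column names follow the ones already used in this notebook.\ndef build_proposal(target):\n    remaining = target\n    rows = []\n    for category in [short_term_loss, long_term_loss, long_term_gain, short_term_gain]:\n        for idx, lot in category.sort_values('Cost/Share', ascending=False).iterrows():\n            if remaining < 0.01:  # within a cent of the target, so stop\n                break\n            whole_lot_value = lot['Last Price'] * lot['Quanity']\n            if whole_lot_value <= remaining:\n                shares = lot['Quanity']  # sell the whole lot\n            else:\n                shares = remaining / lot['Last Price']  # partial lot to land on the target\n            rows.append({'Lot': idx,\n                         'Date Acquired': lot['Date Acquired'],\n                         'Shares to Sell': round(shares, 2),\n                         'Proceeds': round(shares * lot['Last Price'], 2)})\n            remaining -= shares * lot['Last Price']\n    return pd.DataFrame(rows)\n\nbuild_proposal(withdrawal_amount)",
"_____no_output_____"
],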
[
"The functions below are just for informational purposes. acumsum reflects the total value in descending order from each lot. The second just determines the first index where the value is greater than 10k.",
"_____no_output_____"
]
],
[
[
"# test = short_term_loss.loc[short_term_loss.sort_values('Total Proceeds',ascending=False,ignore_index=True)['Total Proceeds'].cumsum().le(10000)]",
"_____no_output_____"
],
[
"acumsum = np.cumsum(short_term_loss[\"Total Value\"])",
"_____no_output_____"
],
[
"acumsum",
"_____no_output_____"
],
[
"np.argmax(acumsum > withdrawal_amount)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecf6ac8725fa99676854bc10ae15a3566a838064 | 25,125 | ipynb | Jupyter Notebook | codesearchnet_distrib.ipynb | mandubian/codenets | 63be72b706d57dbfb2ecec94adc203fc7bdfa3cf | [
"Apache-2.0"
] | 19 | 2020-02-12T22:01:13.000Z | 2021-06-13T21:50:51.000Z | codesearchnet_distrib.ipynb | Tubbz-alt/codenets | 63be72b706d57dbfb2ecec94adc203fc7bdfa3cf | [
"Apache-2.0"
] | 1 | 2020-06-28T06:50:19.000Z | 2020-08-12T14:03:25.000Z | codesearchnet_distrib.ipynb | Tubbz-alt/codenets | 63be72b706d57dbfb2ecec94adc203fc7bdfa3cf | [
"Apache-2.0"
] | 1 | 2020-12-15T16:52:42.000Z | 2020-12-15T16:52:42.000Z | 28.421946 | 379 | 0.363741 | [
[
[
"> Simply copied from original notebook https://github.com/github/CodeSearchNet/blob/master/notebooks/ExploreData.ipynb",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append('../../..')",
"_____no_output_____"
],
[
"import os\nfrom pathlib import Path\nimport pandas as pd\n\nfrom codenets.codesearchnet.copied_code.utils import read_file_samples",
"_____no_output_____"
]
],
[
[
"## Exploring The Full Dataset\n\nYou will need to complete the setup steps in the README.md file located in the root of this repository before proceeding.",
"_____no_output_____"
],
[
"The training data is located in `/resources/data`, which contains approximately 3.2 Million code, comment pairs across the train, validation, and test partitions. You can learn more about the directory structure and associated files by viewing `/resources/README.md`.\n\nThe preprocessed data re stored in [json lines](http://jsonlines.org/) format. First, we can get a list of all these files for further inspection:",
"_____no_output_____"
]
],
[
[
"root_path = Path(\"/home/mandubian/workspaces/tools/CodeSearchNet/\")\npython_files = sorted((root_path / \"resources/data/python/\").glob('**/*.gz'))\njava_files = sorted((root_path / \"resources/data/java/\").glob('**/*.gz'))\ngo_files = sorted((root_path / \"resources/data/go/\").glob('**/*.gz'))\nphp_files = sorted((root_path / \"resources/data/php\").glob('**/*.gz'))\njavascript_files = sorted((root_path / \"resources/data/javascript\").glob('**/*.gz'))\nruby_files = sorted((root_path / \"resources/data/ruby\").glob('**/*.gz'))\nall_files = python_files + go_files + java_files + php_files + javascript_files + ruby_files",
"_____no_output_____"
],
[
"print(f'Total number of files: {len(all_files):,}')",
"Total number of files: 77\n"
]
],
[
[
"To make analysis of this dataset easier, we can load all of the data into a pandas dataframe: ",
"_____no_output_____"
]
],
[
[
"columns_long_list = ['repo', 'path', 'url', 'code', \n 'code_tokens', 'docstring', 'docstring_tokens', \n 'language', 'partition']\n\ncolumns_short_list = ['code_tokens', 'docstring_tokens', \n 'language', 'partition']\n\ndef jsonl_list_to_dataframe(file_list, columns=columns_long_list):\n \"\"\"Load a list of jsonl.gz files into a pandas DataFrame.\"\"\"\n return pd.concat([pd.read_json(f, \n orient='records', \n compression='gzip',\n lines=True)[columns] \n for f in file_list], sort=False)",
"_____no_output_____"
]
],
[
[
"Two columns that will be heavily used in this dataset are `code_tokens` and `docstring_tokens`, which represent a parallel corpus that can be used for interesting tasks like information retrieval (for example trying to retrieve a codesnippet using the docstring.). You can find more information regarding the definition of the above columns in the README of this repo. \n\nNext, we will read in all of the data for a limited subset of these columns into memory so we can compute summary statistics. **Warning:** This step takes ~ 20 minutes.",
"_____no_output_____"
]
],
[
[
"all_df = jsonl_list_to_dataframe(all_files, columns=columns_short_list)",
"_____no_output_____"
],
[
"all_df.head(3)",
"_____no_output_____"
]
],
[
[
"## Summary Statistics",
"_____no_output_____"
],
[
"### Row Counts\n\n#### By Partition",
"_____no_output_____"
]
],
[
[
"all_df.partition.value_counts()",
"_____no_output_____"
]
],
[
[
"#### By Language",
"_____no_output_____"
]
],
[
[
"all_df.language.value_counts()",
"_____no_output_____"
]
],
[
[
"#### By Partition & Language",
"_____no_output_____"
]
],
[
[
"all_df.groupby(['partition', 'language'])['code_tokens'].count()",
"_____no_output_____"
]
],
[
[
"### Token Lengths By Language",
"_____no_output_____"
]
],
[
[
"all_df['code_len'] = all_df.code_tokens.apply(lambda x: len(x))\nall_df['query_len'] = all_df.docstring_tokens.apply(lambda x: len(x))",
"_____no_output_____"
]
],
[
[
"#### Code Length Percentile By Language\n\nFor example, the 80th percentile length for python tokens is 72",
"_____no_output_____"
]
],
[
[
"code_len_summary = all_df.groupby('language')['code_len'].quantile([.5, .7, .8, .9, .95])\ndisplay(pd.DataFrame(code_len_summary))",
"_____no_output_____"
]
],
[
[
"#### Query Length Percentile By Language\n\nFor example, the 80th percentile length for python tokens is 19",
"_____no_output_____"
]
],
[
[
"query_len_summary = all_df.groupby('language')['query_len'].quantile([.5, .7, .8, .9, .95])\ndisplay(pd.DataFrame(query_len_summary))",
"_____no_output_____"
]
],
[
[
"#### Query Length All Languages",
"_____no_output_____"
]
],
[
[
"query_len_summary = all_df['query_len'].quantile([.5, .7, .8, .9, .95])\ndisplay(pd.DataFrame(query_len_summary))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf6ae91a1ff09d235b5a18dd206b8a8967b8b0e | 20,242 | ipynb | Jupyter Notebook | p2_continuous-control/Actor_Critic_sol.ipynb | Kryptonbond/deepRL | f4ec4d8a7d3e3e36d6186018171c9d59a543e4d1 | [
"MIT"
] | null | null | null | p2_continuous-control/Actor_Critic_sol.ipynb | Kryptonbond/deepRL | f4ec4d8a7d3e3e36d6186018171c9d59a543e4d1 | [
"MIT"
] | null | null | null | p2_continuous-control/Actor_Critic_sol.ipynb | Kryptonbond/deepRL | f4ec4d8a7d3e3e36d6186018171c9d59a543e4d1 | [
"MIT"
] | null | null | null | 40.72837 | 1,387 | 0.558591 | [
[
[
"import math\nimport random\n\nimport gym\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nfrom torch.distributions import Normal\nfrom IPython.display import clear_output\nimport matplotlib.pyplot as plt\n%matplotlib inline\n",
"_____no_output_____"
],
[
"use_cuda = torch.cuda.is_available()\ndevice = torch.device(\"cuda\" if use_cuda else \"cpu\")\nprint(device)",
"cpu\n"
]
],
[
[
"# Replay buffer",
"_____no_output_____"
]
],
[
[
"class ReplayBuffer:\n def __init__(self, capacity):\n self.capacity = capacity\n self.buffer = []\n self.position = 0\n \n def push(self, state, action, reward, next_state, done):\n if len(self.buffer) < self.capacity:\n self.buffer.append(None)\n self.buffer[self.position] = (state, action, reward, next_state, done)\n self.position = (self.position + 1) % self.capacity\n \n def sample(self, batch_size):\n batch = random.sample(self.buffer, batch_size)\n state, action, reward, next_state, done = map(np.stack, zip(*batch))\n return state, action, reward, next_state, done\n \n def __len__(self):\n return len(self.buffer)",
"_____no_output_____"
],
[
"class NormalizedActions(gym.ActionWrapper):\n def _action(self, action):\n low = self.action_space.low\n high = self.action_space.high\n \n action = low + (action + 1.0) * 0.5 * (high - low)\n action = np.clip(action, low, high)\n \n return action\n\n def _reverse_action(self, action):\n low = self.action_space.low\n high = self.action_space.high\n \n action = 2 * (action - low) / (high - low) - 1\n action = np.clip(action, low, high)\n \n return actions",
"_____no_output_____"
],
[
"def plot(frame_idx, rewards):\n clear_output(True)\n plt.figure(figsize=(20,5))\n plt.subplot(131)\n plt.title('frame %s. reward: %s' % (frame_idx, rewards[-1]))\n plt.plot(rewards)\n plt.show()",
"_____no_output_____"
]
],
[
[
"## Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor",
"_____no_output_____"
]
],
[
[
"class ValueNetwork(nn.Module):\n def __init__(self, state_dim, hidden_dim, init_w=3e-3):\n super(ValueNetwork, self).__init__()\n \n self.linear1 = nn.Linear(state_dim, hidden_dim)\n self.linear2 = nn.Linear(hidden_dim, hidden_dim)\n self.linear3 = nn.Linear(hidden_dim, 1)\n \n self.linear3.weight.data.uniform_(-init_w, init_w)\n self.linear3.bias.data.uniform_(-init_w, init_w)\n \n def forward(self, state):\n x = F.relu(self.linear1(state))\n x = F.relu(self.linear2(x))\n x = self.linear3(x)\n return x\n \n \nclass SoftQNetwork(nn.Module):\n def __init__(self, num_inputs, num_actions, hidden_size, init_w=3e-3):\n super(SoftQNetwork, self).__init__()\n \n self.linear1 = nn.Linear(num_inputs + num_actions, hidden_size)\n self.linear2 = nn.Linear(hidden_size, hidden_size)\n self.linear3 = nn.Linear(hidden_size, 1)\n \n self.linear3.weight.data.uniform_(-init_w, init_w)\n self.linear3.bias.data.uniform_(-init_w, init_w)\n \n def forward(self, state, action):\n x = torch.cat([state, action], 1)\n x = F.relu(self.linear1(x))\n x = F.relu(self.linear2(x))\n x = self.linear3(x)\n return x\n \n \nclass PolicyNetwork(nn.Module):\n def __init__(self, num_inputs, num_actions, hidden_size, init_w=3e-3, log_std_min=-20, log_std_max=2):\n super(PolicyNetwork, self).__init__()\n \n self.log_std_min = log_std_min\n self.log_std_max = log_std_max\n \n self.linear1 = nn.Linear(num_inputs, hidden_size)\n self.linear2 = nn.Linear(hidden_size, hidden_size)\n \n self.mean_linear = nn.Linear(hidden_size, num_actions)\n self.mean_linear.weight.data.uniform_(-init_w, init_w)\n self.mean_linear.bias.data.uniform_(-init_w, init_w)\n \n self.log_std_linear = nn.Linear(hidden_size, num_actions)\n self.log_std_linear.weight.data.uniform_(-init_w, init_w)\n self.log_std_linear.bias.data.uniform_(-init_w, init_w)\n \n def forward(self, state):\n x = F.relu(self.linear1(state))\n x = F.relu(self.linear2(x))\n \n mean = self.mean_linear(x)\n log_std = self.log_std_linear(x)\n log_std = torch.clamp(log_std, self.log_std_min, self.log_std_max)\n \n return mean, log_std\n \n def evaluate(self, state, epsilon=1e-6):\n mean, log_std = self.forward(state)\n std = log_std.exp()\n \n normal = Normal(mean, std)\n z = normal.sample()\n action = torch.tanh(z)\n \n log_prob = normal.log_prob(z) - torch.log(1 - action.pow(2) + epsilon)\n log_prob = log_prob.sum(-1, keepdim=True)\n \n return action, log_prob, z, mean, log_std\n \n \n def get_action(self, state):\n #state = torch.FloatTensor(state).unsqueeze(0).to(device)\n print(\"debug\", np.array(state))\n state = torch.from_numpy(state).float().to(device) # #torch.from_numpy(np.array(state))\n\n mean, log_std = self.forward(state)\n std = log_std.exp()\n \n normal = Normal(mean, std)\n z = normal.sample()\n action = torch.tanh(z)\n \n action = action.detach().cpu().numpy()\n return action[0]",
"_____no_output_____"
],
[
"def soft_q_update(batch_size, \n gamma=0.99,\n mean_lambda=1e-3,\n std_lambda=1e-3,\n z_lambda=0.0,\n soft_tau=1e-2,\n ):\n state, action, reward, next_state, done = replay_buffer.sample(batch_size)\n\n state = torch.FloatTensor(state).to(device)\n next_state = torch.FloatTensor(next_state).to(device)\n action = torch.FloatTensor(action).to(device)\n reward = torch.FloatTensor(reward).unsqueeze(1).to(device)\n done = torch.FloatTensor(np.float32(done)).unsqueeze(1).to(device)\n\n expected_q_value = soft_q_net(state, action)\n expected_value = value_net(state)\n new_action, log_prob, z, mean, log_std = policy_net.evaluate(state)\n\n\n target_value = target_value_net(next_state)\n next_q_value = reward + (1 - done) * gamma * target_value\n q_value_loss = soft_q_criterion(expected_q_value, next_q_value.detach())\n\n expected_new_q_value = soft_q_net(state, new_action)\n next_value = expected_new_q_value - log_prob\n value_loss = value_criterion(expected_value, next_value.detach())\n\n log_prob_target = expected_new_q_value - expected_value\n policy_loss = (log_prob * (log_prob - log_prob_target).detach()).mean()\n \n\n mean_loss = mean_lambda * mean.pow(2).mean()\n std_loss = std_lambda * log_std.pow(2).mean()\n z_loss = z_lambda * z.pow(2).sum(1).mean()\n\n policy_loss += mean_loss + std_loss + z_loss\n\n soft_q_optimizer.zero_grad()\n q_value_loss.backward()\n soft_q_optimizer.step()\n\n value_optimizer.zero_grad()\n value_loss.backward()\n value_optimizer.step()\n\n policy_optimizer.zero_grad()\n policy_loss.backward()\n policy_optimizer.step()\n \n \n for target_param, param in zip(target_value_net.parameters(), value_net.parameters()):\n target_param.data.copy_(\n target_param.data * (1.0 - soft_tau) + param.data * soft_tau\n )",
"_____no_output_____"
],
[
"from unityagents import UnityEnvironment\nenv = UnityEnvironment(file_name='/home/deeplearning/Desktop/RL/deep-reinforcement-learning/p2_continuous-control/Reacher_Linux/Reacher.x86_64')\n#env = NormalizedActions(gym.make(\"Pendulum-v0\"))\n#env = UnityEnvironment(file_name='/data/Reacher_One_Linux_NoVis/Reacher_One_Linux_NoVis.x86_64')\n#env.seed(2)\n\n#agent = Agent(state_size=33, action_size=4, random_seed=2)#\n\n#brain_name = env.brain_names[0]\n#brain = env.brains[brain_name]\n#env_info = env.reset(train_mode=True)[brain_name] # reset the environment \n#states = env_info.vector_observations\n\n#-----------------------------------------#\naction_dim = 4 #env.action_space.shape[0]\nstate_dim = 33 #env.observation_space.shape[0]\nhidden_dim = 256\n\nvalue_net = ValueNetwork(state_dim, hidden_dim).to(device)\ntarget_value_net = ValueNetwork(state_dim, hidden_dim).to(device)\n\nsoft_q_net = SoftQNetwork(state_dim, action_dim, hidden_dim).to(device)\npolicy_net = PolicyNetwork(state_dim, action_dim, hidden_dim).to(device)\n\nfor target_param, param in zip(target_value_net.parameters(), value_net.parameters()):\n target_param.data.copy_(param.data)\n \n\nvalue_criterion = nn.MSELoss()\nsoft_q_criterion = nn.MSELoss()\n\nvalue_lr = 3e-4\nsoft_q_lr = 3e-4\npolicy_lr = 3e-4\n\nvalue_optimizer = optim.Adam(value_net.parameters(), lr=value_lr)\nsoft_q_optimizer = optim.Adam(soft_q_net.parameters(), lr=soft_q_lr)\npolicy_optimizer = optim.Adam(policy_net.parameters(), lr=policy_lr)\n\n\nreplay_buffer_size = 1000000\nreplay_buffer = ReplayBuffer(replay_buffer_size)",
"INFO:unityagents:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of External Brains : 1\n Lesson number : 0\n Reset Parameters :\n\t\tgoal_size -> 5.0\n\t\tgoal_speed -> 1.0\nUnity brain name: ReacherBrain\n Number of Visual Observations (per agent): 0\n Vector Observation space type: continuous\n Vector Observation space size (per agent): 33\n Number of stacked Vector Observation: 1\n Vector Action space type: continuous\n Vector Action space size (per agent): 4\n Vector Action descriptions: , , , \n"
],
[
"max_frames = 40000\nmax_steps = 500\nframe_idx = 0\nrewards = []\nbatch_size = 128",
"_____no_output_____"
],
[
"while frame_idx < max_frames:\n state = env.reset()\n episode_reward = 0\n \n \n for step in range(max_steps):\n action = policy_net.get_action(state)\n next_state, reward, done, _ = env.step(action)\n \n replay_buffer.push(state, action, reward, next_state, done)\n if len(replay_buffer) > batch_size:\n soft_q_update(batch_size)\n \n state = next_state\n episode_reward += reward\n frame_idx += 1\n \n if frame_idx % 1000 == 0:\n plot(frame_idx, rewards)\n \n if done:\n break\n \n rewards.append(episode_reward)",
"debug {'ReacherBrain': <unityagents.brain.BrainInfo object at 0x7ff86094e048>}\n"
],
[
"v = state.values\n\nprint(np.array(v) )\nprint(\"debug\", torch.from_numpy(np.array(state.values) ))",
"<built-in method values of dict object at 0x7ff85fa26608>\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf6bf6b3e4cc9ce7e105ee93b85fd9a308db209 | 19,695 | ipynb | Jupyter Notebook | ErrNExceptions.ipynb | swap-10/LetsPython | ee8012be88ec9d4f3eb343791de832c082032ffb | [
"MIT"
] | null | null | null | ErrNExceptions.ipynb | swap-10/LetsPython | ee8012be88ec9d4f3eb343791de832c082032ffb | [
"MIT"
] | null | null | null | ErrNExceptions.ipynb | swap-10/LetsPython | ee8012be88ec9d4f3eb343791de832c082032ffb | [
"MIT"
] | null | null | null | 29.439462 | 158 | 0.54755 | [
[
[
"<h1>Errors and exceptions</h1>",
"_____no_output_____"
],
[
"<div style=\"font-size: 15px\">\r\nThere are (at least) two distinguishable kinds of errors: syntax errors and<br>\r\nexceptions.\r\n</div>",
"_____no_output_____"
],
[
"<h2>Syntax Errors</h2>\r\n<div style=\"font-size: 15px\">\r\nSyntax errors, also known as parsing errors, are perhaps the most common kind<br>\r\nof complaint you get while you are still Learning Python (even after that<br> though!)\r\n</div>",
"_____no_output_____"
]
],
[
[
"while True print(\"Hello World\")",
"_____no_output_____"
]
],
[
[
"<div style=\"font-size: 15px\">\r\nThe parser repeats the offending line and displays a little 'arrow' pointing<br>\r\nat the earliest point in the line where the error was detected. The error is<br>\r\ncaused by (or at least detected at) the token preceding the arrow: in the<br>\r\nexample, the error is detected at the function <code>print()</code> since<br>\r\na colon (':') is missing before it. File name and line nmber are printed so<br>\r\nyou know where to look in case the input came from a script.\r\n</div>",
"_____no_output_____"
],
[
"<h2>Exceptions</h2>\r\n<div style=\"font-size: 15px\">\r\nEven if a statement or expression is syntactically correct, it may cause an<br>\r\nerror when an attempt is made to execute it. Errors detected during execution<br>\r\nare called exceptions and are not unconditionally fatal. Most exceptions are not handled by programs and result in error messages.\r\n</div>",
"_____no_output_____"
]
],
[
[
"10*(1/0)",
"_____no_output_____"
],
[
"4 + spam*3",
"_____no_output_____"
],
[
"'2' + 2",
"_____no_output_____"
]
],
[
[
"<div style=\"font-size: 15px\">\r\nThe string printed as exception type is the name of the built-in exception<br>\r\nthat occured, for all built in exceptions. This is a useful, though not<br>\r\nnecessary convention for user-defined exceptions as well\r\n</div>",
"_____no_output_____"
],
[
"<h2>Handling exceptions</h2>",
"_____no_output_____"
]
],
[
[
"while True:\r\n try:\r\n x = int(input(\"Please enter a number: \"))\r\n break\r\n except ValueError:\r\n print(\"Oops! That wasn't a valud number. Try again\")",
"Oops! That wasn't a valud number. Try again\nOops! That wasn't a valud number. Try again\n"
]
],
[
[
"<div style=\"font-size: 15px\">\r\nThe <code>try</code> statement works as follows:\r\n<ul>\r\nFirst the try clause (statements between the try and except keywords) is<br>\r\nexecuted.<br>\r\nIf no exception occurs, the except clause is skipped and execution of the<br>\r\n<code>try</code> statement is finished.<br>\r\nIf an exception occurs during execution of the try clause, the rest of the<br>\r\nclause is skipped. Then if its type matches the exception named after the<br>\r\n<code>except</code> keyword, the except clause is executed, and then<br>\r\nexecution continue after the <code>try</code> statement.<br>\r\nIf an exception occurs which does not match the exception named in the<br>\r\nexcept clause, it is passed on to outer <code>try</code> statements; if no<br>\r\nhandler is found, it is an <i>unhandled exception</i> and execution stops<br>\r\nwith a message.\r\n</ul>\r\nA <code>try</code> statement may have more than one except clause, to specify<br>\r\nhandlers for different exceptions. At most one handler will be executed.<br>\r\nHandlers only handle exceptions that occur in the corresponding try clause,<br>\r\nnot in other handlers of the same <code>try</code> statement. An except<br>\r\nclause may name multiple exceptions as a parenthesized tuple, for example:<br>\r\n<br>\r\n<blockquote>\r\n<code>\r\nexcept (RuntimeError, TypeError, NameError):<br>\r\n pass\r\n</code>\r\n</blockquote>\r\n<br>\r\n(Used Alt + 2 + 5 + 5 for indent. Clear space and re-indent if copying code<br>\r\nsnippet)<br>\r\n<br>\r\nA class in an except clause is compatible with an exception if it is the<br>\r\nsame class or a base class thereof (but not the other way around — an<br>\r\nexcept clause listing a derived class is not compatible with a base class).<br>\r\nFor example, the following code will print B, C, D in that order:<br>\r\n</div>",
"_____no_output_____"
]
],
[
[
"class B(Exception):\r\n pass\r\n\r\nclass C(B):\r\n pass\r\n\r\nclass D(C):\r\n pass\r\n\r\nfor cls in [B, C, D]:\r\n try:\r\n raise cls()\r\n except D:\r\n print(\"D\")\r\n except C:\r\n print(\"C\")\r\n except B:\r\n print(\"B\")\r\n\r\nprint(\"With order reversed: \")\r\nclass B(Exception):\r\n pass\r\n\r\nclass C(B):\r\n pass\r\n\r\nclass D(C):\r\n pass\r\n\r\nfor cls in [B, C, D]:\r\n try:\r\n raise cls()\r\n except B:\r\n print(\"B\")\r\n except D:\r\n print(\"D\")\r\n except C:\r\n print(\"C\")\r\n\r\n# The first matching clause is triggered\r\n\r\n# Blanket except clause (use with caution):\r\n\r\nimport sys\r\n\r\ntry:\r\n f = open('myfile.txt')\r\n s = f.readline()\r\n i = int(s.strip())\r\nexcept OSError as err:\r\n print(\"OS error: {0}\".format(err))\r\nexcept ValueError:\r\n print(\"Could not convert data to an integer.\")\r\nexcept:\r\n print(\"Unexpected error:\", sys.exc_info()[0])\r\n raise",
"B\nC\nD\nWith order reversed: \nB\nB\nB\nOS error: [Errno 2] No such file or directory: 'myfile.txt'\n"
],
[
"# Optional else clause, when present must follow all except clauses\r\n# Can be used when (e.g.:) a piece of code that must be executed when the try\r\n# clause does not raise an exception\r\n\r\nfor arg in sys.argv[1:]:\r\n try:\r\n f = open(arg, 'r')\r\n except OSError:\r\n print('cannot open', arg)\r\n else:\r\n print(arg, 'has', len(f.readlines()), 'lines') # Opened successfully\r\n f.close()",
"_____no_output_____"
],
[
"# The except clause may specify a variable after the exception name.\r\n# The variable is bound to an exception instance with the arguments stored in\r\n# instance.args.\r\n# One may also instantiate an exception first before raising it and add any\r\n# attributes to it as desired\r\n\r\ntry:\r\n raise Exception('spam', 'eggs')\r\nexcept Exception as inst:\r\n print(type(inst))\r\n print(inst.args)\r\n print(inst)\r\n\r\n# The exception instance arguments stored in .args\r\n# __str__ allows args to be printed directly, but may be overridden in\r\n# exception subclasses unpack args\r\n\r\n x, y = inst.args\r\n print('x = ', x)\r\n print('y = ', y)\r\n\r\nprint('\\n\\n')\r\n# Exceptions occuring in functions inside try clause\r\ndef this_fails():\r\n \"\"\"What a surprise\"\"\"\r\n x = 1/0\r\n\r\ntry:\r\n this_fails()\r\nexcept ZeroDivisionError as err:\r\n print(\"handling run-time error:\", err)\r\n",
"<class 'Exception'>\n('spam', 'eggs')\n('spam', 'eggs')\nx = spam\ny = eggs\n\n\n\nhandling run-time error: division by zero\n"
]
],
[
[
"<h2>Raising Exceptions</h2>",
"_____no_output_____"
]
],
[
[
"# The raise exception allows the programmer to force a specified exception to\r\n# occur.\r\n\r\n# raise NameError('HiThere')\r\n\r\n# The lone argument to raise is the exception to be raised. If an exception\r\n# class is passed, it will be implicitly instantiated by calling its constructor\r\n# with no arguments:\r\n\r\nraise ValueError",
"_____no_output_____"
],
[
"# If you need to determine whether an exception was raised but don't intend to\r\n# handle it, a simpler form of the raise statement allows you to re-raise the\r\n# exception:\r\n\r\ntry:\r\n raise NameError('Hola')\r\nexcept NameError:\r\n print('An exception flew by')\r\n raise",
"_____no_output_____"
]
],
[
[
"<h2>Exception Chaining</h2>",
"_____no_output_____"
]
],
[
[
"# The raise statements allows an optional 'from' which enables chaining\r\n# exceptions\r\n\r\ndef func():\r\n raise IOError\r\n\r\ntry:\r\n func()\r\nexcept IOError as blahblah:\r\n raise RuntimeError('Failed to open database') from blahblah\r\n",
"_____no_output_____"
],
[
"# Exception chaining happens automatically when an exception is raised\r\n# inside an except or finally section\r\n# Exception chaining can be disabled by using 'from None'\r\n\r\ntry:\r\n open('nonexistentfile.txt')\r\nexcept IOError:\r\n raise RuntimeError from None",
"_____no_output_____"
]
],
[
[
"<h2>User defined Exceptions</h2>\r\n<div style=\"font-size: 15px\">\r\nPrograms may name their own exceptions by creating a new exception class<br> Exceptions should typically be derived from the <code>Exception</code>\r\nclass, either directly or indirectly.<br>\r\n<br>\r\nException classes can be defined which do anything any other class can<br>\r\ndo, but are usually kept simple, often only offering a number of<br>\r\nattributes that allow information about the error to be extracted by<br>\r\nhandlers for the exception. When creating a module that can raise several<br>\r\ndistinct errors, a common practice is to create a base class for<br>\r\nexceptions defined by that module, and subclass that to create specific<br>\r\nexception classes for different error conditions:\r\n</div>",
"_____no_output_____"
]
],
[
[
"class Error(Exception):\r\n \"\"\"Base class for exceptions in this module\"\"\"\r\n pass\r\n\r\nclass InputError(Error):\r\n \"\"\"Exception raised for errors in the input:\r\n \r\n Attributes:\r\n expression -- input expression in which the error occured\r\n message -- explanation of the error\r\n \"\"\"\r\n\r\n def __init__(self, expression, message):\r\n self.expression = expression\r\n self.message = message\r\n\r\nclass TransitionError(Error):\r\n \"\"\"Raised when an operation attempts a state transition that's not allowed\r\n \r\n Attributes:\r\n previous -- state at beginning of transition\r\n next -- attempted new state\r\n message -- explanation of why the specific transition is not allowed\r\n \"\"\"\r\n\r\n def __init__(self, previous, next, message):\r\n self.previous = previous\r\n self.next = next\r\n self.message = message",
"_____no_output_____"
]
],
[
[
"<h2>Clean up actions</h2>\r\n<div style=\"font-size: 15px\">\r\nThe <code>try</code> statement has another optional clause which is intended<br>\r\nto define clean-up actions that must be executed under all circumstances.\r\n</div>",
"_____no_output_____"
]
],
[
[
"try:\r\n raise KeyboardInterrupt\r\nfinally:\r\n print('See ya later, World!')\r\n",
"_____no_output_____"
]
],
[
[
"<div style=\"font-size: 15px\">\r\nIf a <code>finally</code> clause is present, the <code>finally</code> clause<br>\r\nwill execute as the last task before the <code>try</code> statement<br>\r\ncompletes. The <code>finally</code> clause runs whether or not the<br>\r\n<code>try</code> statement produces an exception. The following points<br>\r\ndiscuss more complex cases when an exception occurs:<br>\r\n<ul>\r\n<li>If an exception occurs during execution of the try clause, the exception<br>\r\nmay be handled by an except clause. If the exception is not handled by an<br>\r\nexcept clause, the exception is re-raised after the finally clause has<br>\r\nbeen executed.\r\n</li>\r\n<li>An exception could occur during execution of an except or else clause.<br>\r\nAgain, the exception is re-raised after the finally clause has been executed.\r\n</li>\r\n<li>If the try statement reaches a break, continue or return statement, the<br>\r\nfinally clause will execute just prior to the break, continue or return<br>\r\nstatement’s execution.\r\n</li>\r\n<li>If a finally clause includes a return statement, the returned value will<br>\r\nbe the one from the finally clause’s return statement, not the value from<br>\r\nthe try clause’s return statement.\r\n</li>\r\n</ul>\r\n</div>",
"_____no_output_____"
]
],
[
[
"def bool_return():\r\n try:\r\n return True\r\n finally:\r\n return False\r\nbool_return()",
"_____no_output_____"
],
[
"def divido(x, y):\r\n try:\r\n result = x/y\r\n except ZeroDivisionError:\r\n print(\"division by zero!\")\r\n else:\r\n print(\"result is\", result)\r\n finally:\r\n print(\"This is the finally clause\")\r\n\r\ndivido(2, 1)",
"result is 2.0\nThis is the finally clause\n"
],
[
"divido(2, 0)",
"division by zero!\nThis is the finally clause\n"
],
[
"divido('2', '1')",
"_____no_output_____"
]
],
[
[
"<div style=\"font-size: 15px\">\r\nObserve that the finally clause is executed in any event. The TypeError<br>\r\nraised by dividing two strings is not handled by the except clause and<br>\r\ntherefore re-raised after the finally clause has been executed.\r\n</div>",
"_____no_output_____"
],
[
"<h2>Pre-defined Clean-up Actions</h2>",
"_____no_output_____"
]
],
[
[
"import os\r\npathtofile = os.path.abspath('./IOandfiles/workfile.txt')\r\n\r\n# For this we will take into consideration a case we see in th IOandfiles\r\n# tutorial for handling files\r\n\r\nwith open(pathtofile) as f:\r\n for line in f:\r\n print(line, end='')\r\n\r\n# After the statement is executed, the file object f is always closed, even\r\n# even if a problem was encountered while processing the lines. Objects which,\r\n# like files, provide predefined clean-up actions will indicate this in their\r\n# documentation",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ecf6ca08e344d2069d77e359a6db018bd2755b1c | 60,018 | ipynb | Jupyter Notebook | underthecovers/assembly/UnixAssemblyProgramming.ipynb | vasia/UndertheCovers | bf8d1528da1ab7615dc0c73bbde8d920006ca5e9 | [
"MIT"
] | null | null | null | underthecovers/assembly/UnixAssemblyProgramming.ipynb | vasia/UndertheCovers | bf8d1528da1ab7615dc0c73bbde8d920006ca5e9 | [
"MIT"
] | null | null | null | underthecovers/assembly/UnixAssemblyProgramming.ipynb | vasia/UndertheCovers | bf8d1528da1ab7615dc0c73bbde8d920006ca5e9 | [
"MIT"
] | null | null | null | 48.795122 | 1,058 | 0.650188 | [
[
[
"%run -i ../python/common.py\nsetupExamples(\"unixasm\", \"../src/Makefile ../src/setup.gdb ../src/empty.s ../src/empty.gdb ../src/empty256.s\")",
"_____no_output_____"
]
],
[
[
"# Executables and Processes\n\nIn this chapter we will explore what \"native\" binary programs are and begin our journey to learning how to create them though assembly programming.\n\nThis chapter follows two approaches to this material. In the first part we take an self-guided discovery approach. Here we use our knowledge and access to UNIX to follow our noses and poke around an executable to see what we can learn. In the second part of the chapter we take a more traditional textbook approach and present the conceptual model for how executables and processes relate to each other.\n\n\n**The following chapter includes several manual page entries. A reader is not expected to read these completely. They are mainly hear to illustrate how we can learn about the detail and document the precise way we can look them up later when we need too. In general you should skip the first few paragraphs. If there details that you should pickup on now the text will point you to them**",
"_____no_output_____"
],
[
"## \"Running\" Executables\n\nPerhaps the most basic thing we do on a computer is run programs. As we have seen, on UNIX, one of the main purposes of the shell is to let us start and manage running programs -- Processes. As a recap remember that when we type a command like `ls` into a shell, it is not a built-in command. The shell will look to see if a file, with a matching name, exists in the list of directories specified by the `PATH` environment variable. If one is found (eg `/bin/ls`), and it's meta data marks it as \"executable\", the shell process will make calls to the UNIX kernel to create a new child process and try and \"run\" the file within the new process. ",
"_____no_output_____"
]
],
[
[
"display(HTML(htmlFig(\n [\n [\n# {'src':\"/files/work/UndertheCovers/underthecovers/images/Processes/Processes.003.png\",\n# 'caption':'A: Press Enter'\n# 'border': '1px solid black',\n# 'padding':'1px',\n# 'cellwidth':'33.33%'\n# },\n {'src':\"../images/Processes/Processes.004.png\",\n 'caption':'A: Bash calls kernel functions.', \n# 'border': '1px solid black',\n# 'padding':'1px',\n 'cellwidth':'50%'\n },\n {'src':\"../images/Processes/Processes.005.png\",\n 'caption':'B: Kernel runs the program in new process.',\n# 'border': '1px solid black',\n# 'padding':'1px',\n 'cellwidth':'50%'\n },\n ]\n ],\n id=\"fig:shell-blankline\",\n caption=\"<center> Figure: Shell calls kernel functions, fork and exec, to create a new process and 'runs' the 'executable' withing it</center>\"\n)))",
"_____no_output_____"
]
],
[
[
"As the figures state, there are two basic kinds of files that the kernel knows how to \"execute\" within a process. One is an ASCII file that has a special string at its beginning -- `#!<path of interpreter>` and the other is an **executable**. The former is just a convenient way to allow programs like the shell to automatically be started with the contents of the file passed to it as a script to interpret. This makes it easy to write \"scripts\" that behave as if they where programs of their own. When in reality they are being interpreted as commands to the \"real\" program specified on the first line of the file. But the question, of course, is what exactly are real programs or **executables**.",
"_____no_output_____"
],
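[
"To make this a bit more concrete, here is a small Python sketch of our own (it is not how the shell is implemented, just an illustration) that mimics the two checks described above: it searches the `PATH` for a command and peeks at its first bytes to guess whether it is a `#!` interpreter script or a native executable. The helper name `classify` is ours.\n\n```python\nimport shutil\n\ndef classify(cmd):\n    # Perform the same PATH search the shell does.\n    path = shutil.which(cmd)\n    if path is None:\n        return cmd + ': not found on PATH'\n    with open(path, 'rb') as f:\n        start = f.read(4)          # peek at the first few bytes\n    if start[:2] == b'#!':\n        return path + ': interpreter script'\n    if start[0] == 0x7F and start[1:4] == b'ELF':\n        return path + ': native (ELF) executable'\n    return path + ': something else'\n\nfor cmd in ['ls', 'bash']:\n    print(classify(cmd))\n```",
"_____no_output_____"
],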
[
"## What's inside an executable\n\nLets explore the `/bin/ls` file using our UNIX skills to see what we can figure out.\n\n### What does ls tell us about ls ;-)",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"ls -l /bin/ls\", noposttext=True, stripnl=True, markdown=False)",
"_____no_output_____"
]
],
[
[
"Running using ls to list the meta data of the file `/bin/ls` we see that it contains a sizable number of bytes. We also see that the permissions clearly mark it as being executable by all users of the system `-rwxr-x-rx` (if you don't remember how to read this output see `man ls`).\n\n### Can we display its contents to the Terminal with `cat`?\n\nWe encourage you to open a terminal and give this a shot. What happened? Well remember that all bytes that are sent to terminal are interpreted by the terminal as ASCII encoded information. It should be quickly apparent to you that whatever `/bin/ls` is it is NOT predominantly ASCII encoded information! Rather the bytes in it must be of some other kind of binary representation. \n\nBelow we pass the `-v` flag to `cat` so that it converts the non-ASCII values (things the terminal will not be able to print) in `/bin/ls` into a sequence of printable characters that represent their value, Specifically it using '^' and 'M-' as prefixes followed by another character. You can see the man page of `cat` and the [ASCII Table](../unix/terminal.ipynb#ASCII_sec) for more information.",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"cat -v /bin/ls\", prompt='', pretext='$ cat -v /bin/ls', height='20em', wait=False, markdown=False, noposttext=True)",
"_____no_output_____"
]
],
[
[
"### Lets look at the byte values of `/bin/ls` using `xxd`\n\nSo while the data in `/bin/ls` does not seem to be encoded in ASCII we can use other UNIX tools to translate the individual bytes of the file into a numeric ASCII value so that we can at least see what the values of the bytes of the file are. There are several such tools we could use. Examples include: `od` (octal dump), `hexdump`, and `xxd`. We will use `xxd`",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"man xxd\", prompt='', pretext='$ man xxd', height='20em', wait=False, markdown=False, noposttext=True)",
"_____no_output_____"
]
],
[
[
"`xxd` conveniently lets us look at the value of a file represented in base 2 binary digits or base 16 hexadecimal digits. We will use the following command to display the first 256 bytes of the file in binary: `xxd -l 256 -g 1 -c 8 -b /bin/ls`\n\nWhere: \n - `-l 256` is used to restrict ourselves to the first 80 bytes\n - `-g 1` is used to tell xxd to work on units/groups of single bytes\n - `-c 8` is used to print 8 units/groups per line\n - `-b` means display the values in base 2 (binary) notation\n\nThis causes `xxd` to open `/bin/ls` and read the first 256 bytes. It examines the value of each byte read and translates it so that it produces a string of eight ASCII characters of either `0` or `1` depending on the value of the bits of the byte. In this way we can use `xxd` to display the byte values of a file. The left hand column of the output encodes the byte position in the file that the line of data corresponds too. These position values start at zero are in hexadecimal notation (eg. `00000010` is 16 in decimal). On the far right of each line `xxd` prints an ASCII interpretation for any byte values that correspond to printable ASCII characters (otherwise it prints a `.`).",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"xxd -l 256 -g 1 -c 8 -b /bin/ls\", wait=True, height='20em', markdown=False, noposttext=True)",
"_____no_output_____"
]
],
[
[
"Using hexadecimal notation we get more concise visual representation",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"xxd -l 256 -g 1 -c 8 /bin/ls\", wait=True, height='20em', markdown=False, noposttext=True)",
"_____no_output_____"
]
],
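[
[
"If you would rather stay inside Python, here is a small sketch of our own (it is not part of `xxd`) that reproduces the gist of the `xxd` output: it reads the first few bytes of `/bin/ls` and prints each line's offset, the byte values in hex, and the printable ASCII interpretation on the right.\n\n```python\ndef tiny_hexdump(path, count=16, per_line=8):\n    # Read the first 'count' bytes and print them xxd-style.\n    with open(path, 'rb') as f:\n        data = f.read(count)\n    for off in range(0, len(data), per_line):\n        chunk = data[off:off + per_line]\n        hexpart = ' '.join('%02x' % b for b in chunk)\n        text = ''.join(chr(b) if 32 <= b < 127 else '.' for b in chunk)\n        print('%08x: %-23s %s' % (off, hexpart, text))\n\ntiny_hexdump('/bin/ls')\n```",
"_____no_output_____"
]
],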
[
[
"So while it might look cool, without knowing how to interpret the byte values it really does not provide us much insight as to what makes this file a program that lists the contents of directories. ",
"_____no_output_____"
],
[
"### Using the UNIX `file` command on `/bin/ls`\n\nWhile there are no explicit file types in UNIX that tell use what kind of information is in a file (we are expected to know) there is a command that is very good at examining a file and guessing what kind of information is encoded in the file based on a large database of test. This command is called `file`. Here is its manual page.",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"man file\", prompt='', pretext='$ man file', wait=False, height='20em', markdown=False, noposttext=True, tmout=2)",
"_____no_output_____"
]
],
[
[
"Well let's see what `file` has to say about `/bin/ls`.",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"file /bin/ls\", markdown=False, stripnl=True, noposttext=True)",
"_____no_output_____"
]
],
[
[
"Ok cool! File tells us `/bin/ls` is and **ELF** file. You might have noticed that the xxd output showed the ASCII characters `ELF` near the beginning of the file. This is due to the fact that this is part of the `ELF` standard format to make recognition of them easier. \n\n\n\n### ELF Files - Executable and Linking Format Files\n\nSo what exactly is an ELF file? Lets see what the manuals have to say. P.S. You are not expected to understand what it is saying at this point.",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"man elf\", prompt='', pretext='$ man elf', wait=False, height='20em', markdown=False, noposttext=True)",
"_____no_output_____"
]
],
[
[
"Wow that's a lot of information that does not make much sense at this point. However, it is nice to see that it seems to be a format for encoding \"executable\" files ;-)\n\nNow as it turns out there several tools such as `readelf` and `objdump` that we could read about that are designed for decoding with `elf` files. But it is not clear that this is going to help that much unless we get a more conceptual understanding of what it means to encode a program for execution in a process. \n\nFor your interest here is the output for `readelf --all /bin/ls` and `objdump --all /bin/ls` which dump summary information about the `/bin/ls` executable. ",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"readelf --all /bin/ls\", wait=False, height='20em', markdown=False, noposttext=True, tmout=2)",
"_____no_output_____"
],
[
"TermShellCmd(\"objdump --all /bin/ls\", wait=False, height='20em', markdown=False, noposttext=True, tmout=2)",
"_____no_output_____"
]
],
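[
[
"As a small illustration of what `readelf` is decoding, the sketch below (our own, based only on the 64-bit little-endian header layout documented in `man elf`) uses Python's `struct` module to read the fixed-size ELF header of `/bin/ls` and print a few of its fields, including `e_entry`, the entry point address.\n\n```python\nimport struct\n\ndef elf_header(path):\n    with open(path, 'rb') as f:\n        ident = f.read(16)    # e_ident: magic, class, data encoding, ...\n        rest = f.read(48)     # the remaining ELF64 header fields\n    assert ident[1:4] == b'ELF', 'not an ELF file'\n    (e_type, e_machine, e_version, e_entry,\n     e_phoff, e_shoff, e_flags, e_ehsize,\n     e_phentsize, e_phnum, e_shentsize,\n     e_shnum, e_shstrndx) = struct.unpack('<HHIQQQIHHHHHH', rest)\n    print('class  :', {1: 'ELF32', 2: 'ELF64'}.get(ident[4], ident[4]))\n    print('type   :', e_type, '(2 = EXEC, 3 = DYN/PIE)')\n    print('machine:', e_machine, '(62 = x86-64)')\n    print('entry  : 0x%x' % e_entry)\n\nelf_header('/bin/ls')\n```",
"_____no_output_____"
]
],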
[
[
"As a teaser here is some actual \"content\" that objdump can extract and decode from `/bin/ls`. Specifically this command, `objdump -d /bin/ls` 'disassembles' the binary.",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"objdump -d /bin/ls\", wait=False, height='20em', markdown=False, noposttext=True, tmout=2)",
"_____no_output_____"
]
],
[
[
"## Executing an Executable in a Process\n\nLets try this from the other direction. We know that there is a call to the OS to run an executable. Lets see what we can find out by examining the OS documentation. \n\nLets start by looking up the manpage for the operating system call `exec`. At this point we are going to ignore the programming syntax and mechanics and rather focus on what we can learn in broad strokes from the manual page.\n",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"man 3 exec | cat -n\", wait=False, height='40em', markdown=False, noposttext=True, tmout=2)",
"_____no_output_____"
]
],
[
[
 <img style=">
"> <img style=\"margin: 1px 5px 0px 0px;\" align=\"left\" width=\"40\" src=\"../images/fyi.svg\"> <p style=\"background-color:powderblue;\"> Notice in the above output we see line numbers for the man page. The `man` command itself does not support line numbers but the `cat` program does if you pass it the `-n` flag. So instead of just using the command `man exec` on its own we have sent its output to `cat -n` using the pipe syntax of the shell: `|`. So our combined shell command is: `man exec | cat -n`. Remember to notice these things as UNIX can teach many good programming habits like the value of breaking our software down into small reusable parts and having a standard way for combining those parts (eg a pipe). ",
"_____no_output_____"
],
[
"We want to focus on the first two paragraphs of the description (lines 27 - 33). These sentences imply that running a program loads a new \"process image\" over the current one. Remember in the Introduction we used the term memory image it is not a random coincidence that we are seeing the same terminology here. Further reading between the lines the \"file\" to be executed contains or is the base of the new process image. Given that this man page tells us that `exec` is really just a front end of `execve` lets look at that man page and see if we can learn a little more.",
"_____no_output_____"
]
],
[
[
"TermShellCmd(\"man 2 execve | cat -n\", wait=False, height='20em', markdown=False, noposttext=True, tmout=2)",
"_____no_output_____"
]
],
[
[
"Let's focus on lines 13-21 and 41-43. Again we see that the wording is all about replacing the contents of an existing process with the value from the executable file. Further we that some parts of the new process will be `newly initialized`: *stack*, *heap* and *data* *segments*. In lines 41-43 we are told that `execve`, assuming success, will overwrite certain parts of the *text*, *data* and *stack* of the process with the contents of the executable file (newly loaded program). So vaguely we are getting the picture that an executable encodes values that get \"loaded\" into a process to initialize the execution of the program contained within it. \n\nOur task now is to start putting the pieces together. We need to get a better idea of processes are, their relationship to binary executable files and how we go about encoding a program into an executable.",
"_____no_output_____"
],
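[
"Before moving on, here is a rough sketch in Python of the fork-then-exec pattern the manual pages describe (the shell does the same thing in C, so this is only an illustration, and it assumes a Linux/UNIX system with `/bin/echo`): the child asks the kernel to replace its process image with one loaded from `/bin/echo`, while the parent waits for it to finish.\n\n```python\nimport os\n\npid = os.fork()                  # create a child process\nif pid == 0:\n    # Child: replace this process image with the one built from /bin/echo.\n    os.execv('/bin/echo', ['echo', 'hello from a brand new process image'])\n    # If execv succeeds nothing after it ever runs in the child.\nelse:\n    # Parent: wait for the child to finish, just like the shell does.\n    os.waitpid(pid, 0)\n    print('parent: child', pid, 'has exited')\n```",
"_____no_output_____"
],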
[
"## Binary Executables as Process Images.",
"_____no_output_____"
]
],
[
[
"display(Markdown(htmlFig(\"../images/SLS_TheMachine.png\", \n width=\"80%\", id=\"fig:vnm\", \n caption=\"<center>Figure: Our illustration of a von Neumann computer. Our view is slightly updated to put the model interms of todays computers.</center>\")))",
"_____no_output_____"
]
],
[
[
"Given our basic view of how a [von Neumann](../assembly/vonNeumannArchitecture.ipynb) computer operates we can see that what we load into memory drives the CPU's execution. But how does this relate to the what is going on when we are running an operating system like UNIX. Which constructs a world of running software down into an OS kernel and a set of user processes?",
"_____no_output_____"
]
],
[
[
"display(Markdown(htmlFig(\"../images/UnixRunning.png\",\n caption=\"Figure: Running Unix system.\", \n width=\"80%\", align=\"center\", \n margin=\"auto auto auto auto\")))",
"_____no_output_____"
]
],
[
[
"From our perspective as users of an operating system we navigate the file system to find programs, in the form of executable files, to launch. In UNIX we use a shell like bash to do this. Bash on our behalf calls down into the UNIX Kernel requesting that it creates a new process from the specified executable file. In UNIX bash does this with two UNIX kernel system calls (fork and exec). ",
"_____no_output_____"
]
],
[
[
"display(HTML(htmlFig(\n [\n [\n# {'src':\"/files/work/UndertheCovers/underthecovers/images/Processes/Processes.003.png\",\n# 'caption':'A: Press Enter', \n# 'border': '1px solid black',\n# 'padding':'1px',\n# 'cellwidth':'33.33%'\n# },\n {'src':\"../images/Processes/Processes.004.png\",\n# 'caption':'B: Shell \"blank line\" input processing' , \n# 'border': '1px solid black',\n# 'padding':'1px',\n# 'cellwidth':'33.33%'\n },\n {'src':\"../images/Processes/Processes.005.png\",\n# 'caption':'C: Shell sends Prompt back', \n# 'border': '1px solid black',\n# 'padding':'1px',\n# 'cellwidth':'33.33%'\n },\n ]\n ],\n# id=\"fig:shell-blankline\",\n# caption=\"<center> Figure: Shell blank line behavior </center>\"\n)))",
"_____no_output_____"
]
],
[
[
"### Process Contexts\n\nTo better understanding of how processes, executables and the von Neumann model of execution relate we need to dig down a little further. ",
"_____no_output_____"
]
],
[
[
"display(Markdown(htmlFig(\"../images/contexts.png\",\n caption=\"Figure: Process contexts.\", \n width=\"80%\", align=\"center\", \n margin=\"auto auto auto auto\")))",
"_____no_output_____"
]
],
[
[
"Each time we ask the OS kernel to launch a binary executable the kernel creates a new process \"Context\". \nEach context is a collection of data structures that the kernel uses to represent the memory for the process and the CPU registers. A process is like a restricted virtual von Neumann computer. \n\n#### Context Switch -- Scheduling Processes\nUsing the [privileged execution](../assembly/vonNeummanArchitecture.ipynb) features of the hardware the kernel multiplexes, the contents on the the real CPU and memory of the computer. \n\nTo let a particular process execute the kernel loads the GPRS of the CPU from the context and sets SPRS of the CPU so that all \"normal\" mode memory accesses are directed to the process context's memory data structures. Once these steps are done the kernel resumes \"normal\" execution via a privileged instruction and the particular process will be executing on the CPU in context of its own memory. \n\nWhen the kernel wants to switch which process is executing it takes over the CPU, saves the values of the GPRS into the process context of the current process and then follows the steps above to set a different context as the currently executing one.\n\nThe act of picking which process to execute on the CPU and switching between them is called scheduling and is a core function of the operating system. Processes are what lets us treat our computers like many little sub-computers each which can execute based on an independent view of the CPU and memory. While it is executing the process has control of the CPU and can conduct memory transactions. \n\nFrom the perspective of the von Neumann architecture the thing that the process is missing is the ability to conduct I/O. This is on purpose to avoid the programs that run in processes needing to deal with the [The Ugly Underbelly](../assembly/vonNeumannArchitecture.ipynb) of I/O devices. Rather as we will see in other chapters to conduct I/O processes make requests to the kernel to control the I/O devices on it's behalf. The OS kernel hides the details of I/O from the process and ensure that I/O is carefully arbitrated between the multiple running processes. In UNIX the core abstract primitive for I/O it provides processes are functions for creating, writing and reading files.\n\nThe exact details of how processes contexts and scheduling are implemented vary between operating systems and the details of the hardware. For the level of detail we have covered is enough for us to continue our exploration of how processes and executables are related. ",
"_____no_output_____"
],
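[
"The toy sketch below is only meant to make the save/restore idea concrete; it is nothing like real kernel code. Each 'context' is just a dictionary of saved register values, and a 'context switch' copies the CPU's registers out to the old context and loads them from the new one.\n\n```python\n# Toy model: the 'CPU' registers and two process 'contexts'.\ncpu = {'rax': 0, 'rbx': 0, 'rip': 0}\nctx_a = {'rax': 11, 'rbx': 22, 'rip': 0x401000}   # saved state of process A\nctx_b = {'rax': 77, 'rbx': 88, 'rip': 0x403000}   # saved state of process B\n\ndef context_switch(cpu, old_ctx, new_ctx):\n    old_ctx.update(cpu)     # save the running process's registers\n    cpu.update(new_ctx)     # restore the next process's registers\n\ncpu.update(ctx_a)           # 'schedule' process A first\ncpu['rax'] += 1             # A runs for a while and changes a register\ncontext_switch(cpu, ctx_a, ctx_b)\nprint('cpu now runs B:', cpu)\nprint('A saved as    :', ctx_a)\n```",
"_____no_output_____"
],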
[
"### Executables: Initial Images for creating Process Contexts",
"_____no_output_____"
],
[
"We can now put things together. In the section above we discussed how the Kernel represents processes with context data structures and uses them to schedule the CPU across processes. But where does the p initial contents of memory for a process come from?\n\nThis is the primary role that binary executables files serve. Binaries are files that describe the initial memory contents for a process; the exact values the process contexts data structures should be initialized with and their addresses. From this perspective an executable is the initial memory \"image\" that the OS \"loads\" into a newly created process context. The reason we call it an image is that it is precise map that describes what the memory of a process should initially look like. While the process executes the memory context will change and evolve diverging from the initial image of the executable. \n\nThe figure below illustrates this relationship with a simple example.",
"_____no_output_____"
]
],
[
[
"display(Markdown(htmlFig(\"../images/contexts2.png\",\n caption=\"Figure: Example illustrating the relationships between executables and processes.\", \n width=\"100%\", align=\"center\", \n margin=\"auto auto auto auto\")))",
"_____no_output_____"
]
],
[
[
"At the bottom of the figure, four executable files are illustrated as colored document icons; 1) `/bin/ls` (orange), 2) `/bin/bash`, 3) `mypgm` (blue) and 4) `/bin/vim` (purple). These files are located on the storage I/O devices. The rest of the diagram illustrates a UNIX kernel that is running four processes. With the kernel, four contexts are shown, one for each process. The memory of each context is colored with the color of the executable they were started with. In this case `Context 0` was initialized with `/bin/ls`, `Context 1` and `Context 2` from `/bin/bash` and finally `Context 3` with an executable file called `mypgm`. Above the kernel is the abstract view of the running processes that correspond to the contexts. The kernel assigns each process a unique identifying number called a Process Identifier (pid). \n\nIn UNIX the `ps` command lets us explore this view of the systems, listing the running process. `ps` can display many attributes of the processes including the pid and the executable it was initialized with. \n\nExecutables are used to initialize the memory for new process contents. Many contents can be created from a single executable each process after created is an independent running instance of the executable. In the figure, we can see that two processes have been started from the `bash` executable. As the processes execute they modify their memory contexts but the original executable is not modified and is preserved as a starting point for any new contexts started from it. This is how we can have many independent \"shell\" processes running all spawned from the same binary. Note that while many independent processes can be running at a given time, started from one or more executables there is always only one OS kernel running that manages these processes. \n\nOnce initialized the process contents are schedule by the OS onto the CPU and memory, as described in the prior section, and continue executing until they terminate. \n\nSo now that we know how executable relate to processes what remains is learning how to create executables. Then we will be able to start processes from images that we constructed. Once we understand this then the goal will be learning how to prepare the contents of an executable so that it encodes our programs!",
"_____no_output_____"
],
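[
"On Linux you can observe this executable-to-process relationship directly through the `/proc` filesystem. The sketch below (ours) asks which executable the current Python process was started from and lists the first few memory regions of its context; the exact paths and addresses will differ from machine to machine.\n\n```python\nimport os\n\n# Which executable file was this process created from?\nprint('pid       :', os.getpid())\nprint('executable:', os.readlink('/proc/self/exe'))\n\n# A peek at the first few entries of this process's memory map.\nwith open('/proc/self/maps') as f:\n    for line in f.readlines()[:5]:\n        print(line.rstrip())\n```",
"_____no_output_____"
],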
[
"<div style=\"background-color:powderblue;\">\n<img style=\"margin: 1px 5px 0px 0px;\" align=\"left\" width=\"40\" src=\"../images/fyi.svg\"> \nWhile we call it loading most modern operating systems use advanced techniques that allow it to avoid:\n 1. Having to have entire initial image in an executable. \n 2. Needing to load the entire context when an executable is launched. \n\nTwo of the techniques used are: 1) [Dynamic loading and linking](https://en.wikipedia.org/wiki/Dynamic_loading) and 2) [Memory Paging](https://en.wikipedia.org/wiki/Memory_paging). \n\nRather than having every executable contain an exact copy of it's initial memory image, dynamic loading and linking allows the executable to only contain the parts of memory that are unique to the program. Any common libraries of routines an executable uses are not in the executable. Rather the executable simply includes a list of the libraries and routines it requires. When creating a process for such a \"dynamically linked\" executable a dynamic loader-linker will validate that the required files can be found. It then ensures that as the process is run the values from the other file are added to the process context. This approach has several nice properties. The sizes of binaries are considerably smaller and it now possible to upgrade libraries and have new process, started from exiting binaries, automatically use them. \n \nThe second technique, Memory Paging, refers to the OS using facilities of the CPU (Memory Management Unit) to avoid having the entire memory of a process context present in the computers physical memory. Specifically, the OS uses hardware \"paging\" features to carve up a process context memory into chunks called pages. At any given time the only pages that need to be present are the ones that the executing process is currently conducting memory transactions too. The other pages, of currently schedule process and, non-schedule processes can be stored on I/O devices. The OS then shuffles the pages in and out of main memory as needed. Treating the main memory as a form a \"cache\" for the active pages of the running processes. This allows us to run many programs who's \"virtual\" memory is larger than the physical memory of the computer.\n </div>",
"_____no_output_____"
],
[
 <img style=">
"> <img style=\"margin: 1px 5px 0px 0px;\" align=\"left\" width=\"40\" src=\"../images/fyi.svg\"> <p style=\"background-color:powderblue;\"> [Virtual Machines](https://en.wikipedia.org/wiki/System_virtual_machine) are an advanced scheme that uses CPU support to partition the computer so that multiple independent OS Kernels can be started, each believing it is running directly on the hardware and starting and managing its own processes. ",
"_____no_output_____"
],
[
"## Creating executables\n\nGiven the nature of von Neumann execution and what we know now, \"Programming\" a process means preparing an initial memory image in the form of an executable. \nTo do this we will use the two tools that are the foundation for constructing executables: an assembler and a linker. \n\nWhile many high level programming languages exist in the end to get something to execute it must be represent as binary values in memory. An assembler and linker are the base tools that provide programmers the to directly specifying what the contents of an executable should be. \n\nThe rest of this chapter focuses on introducing these tools and how to practically use them to create an executable. We will however defer our discussion of how to encode useful programs within executable to later chapters. Rather we will culminate this chapter with the ability to create an empty executable that will let us create a \"blank\" process. While this process will not be useful on its own we will be able to use it with a debugger to directly explore the internals of the CPU and memory of a process and learn how to use the [debugger as more than a debugger](../assembly/Debuggers.ipynb).",
"_____no_output_____"
]
],
[
[
"\ndisplay(HTML('''\n<i>Figure: We will be using the GNU binary utilities to work with executables. We will focus on the GNU Assembler and Linker.</i>\n<iframe src=\"https://sourceware.org/binutils/\", height=\"400px\" width=\"100%\"></iframe>\n'''))",
"_____no_output_____"
]
],
[
[
"### Assemblers\n\nThe [assembler](https://en.wikipedia.org/wiki/Assembly_language#Assembler) is a program that processes commands in a file that directs it to create fragments of memory contents. The linker, discussed next, will combine these fragments according to a generic layout to create an executable. \n\nThe code we write in the syntax of the assembler is assembly code and is a combination of CPU specific mnemonics and special assembler specific **directives** . We write assembly code files in ASCII with an ASCII editor. Traditionally on UNIX systems we use the suffix `.s` or `.S` for assembly code files. ",
"_____no_output_____"
]
],
[
[
"display(Markdown(htmlFig(\"../images/ASSEMBLY-VNA-SOFTWARE/ASSEMBLY-VNA-SOFTWARE.027.png\",\n caption=\"Figure: Assembler.\", \n width=\"100%\", align=\"center\", \n margin=\"auto auto auto auto\")))",
"_____no_output_____"
]
],
[
[
"As illustrated in the figure authors of the assembler use the CPU manufactures documentation to write the assembler. The assembler is written to be able to translate the CPU mnemonics, written in ASCII into the CPU specific binary values that encode the specified instruction.\n\nThe assembler directives allow us to write arbitrary values that should be placed in memory. The assembler understands various formats (such as hex, binary, signed and unsigned integers, ASCII, etc) and sizes. The assembler will convert the values we write, in the various formats, into the correct raw binary values that should be place in memory. Directives also let us control both the relative and absolute placement of values. \n\nIn addition to mnemonics and directives assemblers allow a programmer to introduce introduce symbolic human readable labels for the addresses of particular values. The symbolic labels, or simply symbols, allow us within our assembly code to refer to the values at the location of the symbol. It will be the linker's job to both set the address for a label and replace the reference to the symbols address to the address it assigns to the symbol.\n\nFinally and by no means least the assembler allows us to provide comments that explain what our code is doing. It is particularly important to carefully document assembly code given its cryptic nature. It is not uncommon for the comments in an assembly file to far out number the actual code lines.\n\n\nAssuming that there is no errors in the assembly code the assembler will translate our input file into what is called an **object** file. This file, in general, will encode a sequence of values for both the [text and data](../assembly/vonNeumannArchitecture.ipynb) of our program, however, the exact locations will not be specified. In addition to the values the file can encode various descriptive facts so that the linker can connect up our code with code generated from other object files. It can also contain information that can be saved in the final binary that other tools like the OS and debugger can use to understand what the bytes of the executable mean. For example, it can contain debugger information that will allow the debugger to know what source code file and how the lines within it correspond to particular opcode bytes in memory.\n\nTraditionally assemblers can produce a \"listing\" file which is an ASCII document that describes the actions and output that the assembler took and produced while processing the input assembly source file. The listing file can be very useful in understanding how you code we translate into byte values and what symbols it introduces and those that it references. \n\nAs we progress we will understand more of the features and details of the assembler and its syntax through examples. For the moment we are only concerned with a general understanding of it role and how.",
"_____no_output_____"
],
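[
"To make the translation idea concrete, here is a deliberately tiny 'assembler' written in Python (a toy of ours, not how gas actually works). It only knows three single-byte x86-64 instructions whose encodings are fixed -- `nop` (0x90), `int3` (0xCC) and `ret` (0xC3) -- and turns a list of mnemonics into the raw bytes an object file would carry.\n\n```python\n# Toy lookup table: mnemonic -> opcode bytes (single-byte x86-64 encodings).\nOPCODES = {'nop': bytes([0x90]), 'int3': bytes([0xCC]), 'ret': bytes([0xC3])}\n\ndef toy_assemble(lines):\n    out = bytearray()\n    for line in lines:\n        mnemonic = line.split('#')[0].strip()   # drop comments and whitespace\n        if mnemonic:\n            out += OPCODES[mnemonic]            # translate mnemonic to bytes\n    return bytes(out)\n\ncode = toy_assemble(['nop   # do nothing',\n                     'nop',\n                     'ret   # return to caller'])\nprint(' '.join('%02x' % b for b in code))       # prints: 90 90 c3\n```",
"_____no_output_____"
],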
[
"#### GNU Assembler (gas)\n\nThe particular assembler that we will be using is the GNU Assembler (gas). As stated before we will learn its basic syntax as we work through various examples. \n\nWhen we use gas to process INTEL mnemonics we will be using the mnemonic syntax that is compatible with the INTEL manuals. There is another standard that was introduce by A&TT, for writing INTEL mnemonics but was will avoid this syntax. \n\nThe following is the manual for gas. Section 3.5 describes statements which can be very helpful in learning to read and write gas assembly.",
"_____no_output_____"
]
],
[
[
"display(HTML('''\n<i>Figure: GNU Assembler Manual.</i>\n<iframe src=\"https://sourceware.org/binutils/docs/as/\", height=\"400px\" width=\"100%\"></iframe>\n'''))",
"_____no_output_____"
]
],
[
[
"### Linkers\n\nThe process of constructing executables has purposefully been broken down into two steps to make it easier to construct programs out of a collection of code. Specifically as we read above an assembler produces fragments of memory values called object files. \n\nThe basic idea is that programmers write various reusable chucks of assembly code that introduce data and text stored in object files. The data and text from multiple object files can then be combined to produce a final executable by a [linker](https://en.wikipedia.org/wiki/Linker_(computing)). Where the linker takes object files, libraries of object files and a generic description of how the text and data should be laid out, located in memory, to be compatible with the OS process model. ",
"_____no_output_____"
]
],
[
[
"display(Markdown(htmlFig(\"../images/ASSEMBLY-AddressSpaceandIO/ASSEMBLY-AddressSpaceandIO.004.png\",\n caption=\"Figure: Assembler.\", \n width=\"100%\", align=\"center\", \n margin=\"auto auto auto auto\")))",
"_____no_output_____"
]
],
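[
[
"In the same toy spirit as the assembler sketch above, the Python fragment below mimics what a linker does at its core: it concatenates the byte fragments from two pretend 'object files', decides where each fragment will live in memory, and records the final address chosen for each symbol. A real linker also patches every reference to each symbol; here we only build the combined image and the symbol table. The base address 0x401000 is just an assumption for illustration.\n\n```python\n# Two pretend 'object files': byte fragments plus the symbols they define (as offsets).\nobj1 = {'text': bytes([0x90, 0x90]), 'symbols': {'_start': 0}}\nobj2 = {'text': bytes([0xC3]),       'symbols': {'do_ret': 0}}\n\ndef toy_link(objects, base_address=0x401000):\n    image = bytearray()\n    symbol_table = {}\n    for obj in objects:\n        load_addr = base_address + len(image)        # where this fragment lands\n        for name, offset in obj['symbols'].items():\n            symbol_table[name] = load_addr + offset  # assign final addresses\n        image += obj['text']                         # lay the fragment down\n    return bytes(image), symbol_table\n\nimage, symbols = toy_link([obj1, obj2])\nprint('image bytes :', ' '.join('%02x' % b for b in image))\nprint('symbol table:', {name: hex(addr) for name, addr in symbols.items()})\n```",
"_____no_output_____"
]
],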
[
[
"While this approach might seem unnecessarily complicated at first it make our lives as programmers much easier. First it provides us with the ability to compose our programs out of our own libraries of code and code provided by others. Secondly we don't have to worry about where exactly everything ends up in memory. Rather the linker will be given all the inputs including general directives for how to layout the text and data of an executable for it to be compatible with a particular OS. With all of these this as input it will organize the values of our code into a coherent image that has the text and data correctly laid out and all symbols and their references will be carefully connect. \n\nThe operation of the linker may seem quite vague at this point. In later chapters we will fill in the details when we discuss how memory gets assigned by the OS to a process context when \"loading\" an executable.\n\n#### GNU Linker (ld)\n\nLike the assembler the specific linker we will be using is the default linker used by the Linux Operating system, the GNU Linker (ld). The following is the manual for ld.",
"_____no_output_____"
]
],
[
[
"display(HTML('''\n<i>Figure: GNU Linker Manual.</i>\n<iframe src=\"https://sourceware.org/binutils/docs/ld/Overview.html#Overview\", height=\"400px\" width=\"100%\"></iframe>\n'''))",
"_____no_output_____"
]
],
[
[
"### Creating and using our first executable.\n\nOur goal now is to put all the pieces of this chapter together to create our first executable. While it will be a valid executable that the OS can load it will only contain a small number of bytes that will all be zero. As such when we go to run it we don't expect it to behave in a particularly useful way. \nHowever, we will be able to use it with the debugger to explore the process that is created from the the binary. In some sense it is the simplest process we can create that has at least some initialized memory within it.\n\n#### The source code",
"_____no_output_____"
]
],
[
[
"display(Markdown(FileCodeBox(\n file=exdir + \"/empty.s\", \n lang=\"gas\", \n title=\"<b>CODE: Our very own 'empty' binary</b>\",\n h=\"100%\",\n w=\"70em\"\n)))",
"_____no_output_____"
]
],
[
[
"The above code is the contents of a file we have named `empty.s`. In reality it is not really empty but rather contains four bytes of zero.\n\nThis code has been extensively commented so that we can use it to learn a little about the assembly syntax and what it lets us do.\nIn general, as said before, it is good practice to write lots of \ncomments in your assembly code. The following is an un-commented version:\n\n```gas\n .intel_syntax noprefix\n .text \n .global _start\n_start: \n .byte 0x00, 0x00, 0x00, 0x00\n .byte 0x00, 0x00, 0x00, 0x00\n```\n\nAs described the keywords that start with `.` are directives. The assembler does not attempt to interpret them as CPU mnemonics. Rather\nthey are various command that control how the assembler behaves. \n\nUnsurprisingly, the first statement, `.intel_syntax noprefix`, tells the assembler\nthat we want to use the INTEL syntax for writing mnemonics. The next statement `.text` is a \"section\" directive, more verbosely it could have been written\n`.section .text`. It tells the assembler to let the linker know that any values that follow belong with the text values of the final binary. If we want to put values in other sections, such as data we would then switch sections. The next line `.global _start` tells the assembler to place the symbolic label that is introduced on the following in the list of \"external\" symbols that this object file is making visible to the other object files. The reason we do this is due to a requirement that every executable needs to define a symbol that tells the OS how to initialize the PC when it creates a new process context from the executable. This symbol is call the entry point. On Linux the default symbol the linker looks for as the entry point is `_start`. So in combination the two lines say that file is defining an externally visible symbol `_start` and defines it to be the location of the values that follow. Since we have not inserted any values yet we are still at the beginning of the text memory that this object file defines. \n\nThe next two lines define raw hex byte sized values that we want to place into the memory object. The `.byte` directive expects us to list one more comma separated values that the assembler will translate into binary values and place in the object file. The assembler understands many formats for the values.\nIn our case we are using hex notation and simply specify eight bytes all zero valued. Each byte moves the insertion point to the next offset relative to the current section. As such these two lines tell the assembler that we want this object file to contribute 8 zero value bytes to the text values of the final linked executable. ",
"_____no_output_____"
],
[
"#### Assembling the source into an object file\n\nThe following invocation of the assembler will translate the `empty.s` into `empty.o`. The `-g` flag tells the assembler to keep information in the object file that the debugger can use. The `-o empty` tells the assembler what the name of the output object file it should create. ",
"_____no_output_____"
]
],
[
[
"display(TermShellCmd('''[[ -a empty.o ]] && rm empty.o\nmake ASFLAGS='' LDFLAGS='' empty.o\n''', cwd=exdir, prompt='', pretext='', prenl=False, noposttext=True, height=\"100%\"))",
"_____no_output_____"
]
],
[
[
"Using `ls` we see that indeed the assembler created a file called `empty.o`. Note that it is quite a bit bigger than the 8 bytes that we want loaded into memory. That is because there is a lot of extra information that must be put into an object file so that the linker knows how to work with it.",
"_____no_output_____"
]
],
[
[
"display(TermShellCmd('''ls -l empty.o''', \n cwd=exdir, noposttext=True, height=\"100%\"))",
"_____no_output_____"
]
],
[
[
"#### Linking the object file into an executable\n\nWhile we only have one object file that composes our binary, we still need to have the linker process the object file and convert it into an executable object file. The `ld` linker has built into it a file called the linker script that describes where things, like the text and data of the executable file, need to go to be compatible with the OS, Linux in our case. This includes encoding the address where the `_start` symbol gets placed as the entry point for the binary.\n\nThe following is the command to link our single object file into an executable. Again, note that we pass ld the `-g` flag, telling it to generate and keep information for the debugger in the executable.",
"_____no_output_____"
]
],
[
[
"display(TermShellCmd('''[[ -a empty ]] && rm empty\nmake ASFLAGS='' LDFLAGS='' empty\n''', cwd=exdir, prompt='', pretext='', prenl=False, noposttext=True, height=\"100%\"))",
"_____no_output_____"
]
],
[
[
"Using `ls` again, we see that the linker produced the executable `empty`, as directed by the `-o empty` argument.",
"_____no_output_____"
]
],
[
[
"display(TermShellCmd('''ls -l empty''', \n cwd=exdir, noposttext=True, height=\"100%\"))",
"_____no_output_____"
]
],
[
[
"Notice that this time it does have the execute permission bits set.",
"_____no_output_____"
],
[
"#### Running the executable\n\nOK, let's run it.",
"_____no_output_____"
]
],
[
[
"# The seg faulting process produces no output that we can capture here, so we echo\n# the transcript we expect to see when running ./empty from a shell.\ndisplay(TermShellCmd('''echo '$ ./empty\nSegmentation fault'\n''', \n cwd=exdir, prompt='',prenl='', noposttext=True, wait=True, height=\"100%\", tmout=4))",
"_____no_output_____"
]
],
[
[
"#### Segmentation Fault\nThe message `Segmentation fault` indicates that the kernel informed the shell that it terminated the process that the shell started in order to run the `empty` executable. Specifically, the kernel told the shell that empty did an \"Invalid memory reference\" and so the kernel terminated the process. An invalid memory reference means that the process tried to do a memory transaction to an address that did not have valid memory associated with it.\n\nIn later chapters we will discuss why memory is invalid and how to create valid memory. For the moment it suffices to say that any memory that our process will have needs to be defined by the executable. And in our case we only created 8 bytes of memory that we knew were valid.\n\nThe real use of the empty binary is that it allows us to create a very simple process that we can use the debugger with.\n",
"_____no_output_____"
],
[
"#### Using empty with gdb\n\nIn the following session we use gdb to create a process and then explore, modify and execute an instruction within it. Other chapters will go into the details; this is just to give us a flavor of how we can use gdb to explore the relationships between an executable and a process created from it.\n\nThe following is an explanation of what was done:\n\n1. Set the disassembly syntax of gdb to intel\n - `set disassembly-flavor intel`\n2. Open the `empty` executable file\n - `file empty`\n3. We can now print and explore the values within the executable. This includes finding out where a symbol got located, including our entry point\n - `p /x &_start`\n4. Before we start a process we want to have gdb freeze it before any instructions within it execute. So we set a breakpoint at the address of the entry point.\n - `b &_start`\n5. OK, let's start a process from the open executable\n - `run`\n6. At this point a process has been created, but it is frozen at the breakpoint and we can control it from gdb. We can print information about it\n - `info proc`\n7. We can look at its registers\n - `info registers`\n8. We can work with individual registers\n - `p /x $rax`\n - `p /t $rax`\n - `p /d $rax`\n - `p /x $rip`\n9. We can examine memory\n - `x/8xb 0x401000`\n10. We can even ask gdb to disassemble memory\n - `x/2i &_start`\n11. We can also write new values into memory. Let's use this ability to replace some of our zeros with the values that encode an instruction: `popcnt rbx, rax`. Each time we add a new value we will ask gdb to display the memory and to disassemble what it finds there so we can see our progress.\n - `set {unsigned char}&_start = 0xF3`\n - `x/5xb &_start`\n - `x/1i &_start`\n - `set {unsigned char}(&_start+1) = 0x48`\n - `x/5xb &_start`\n - `x/1i &_start`\n - `set {unsigned char}(&_start+2) = 0x0F`\n - `x/5xb &_start`\n - `x/1i &_start`\n - `set {unsigned char}(&_start+3) = 0xB8`\n - `x/5xb &_start`\n - `x/1i &_start`\n - `set {unsigned char}(&_start+4) = 0xD8`\n - `x/5xb &_start`\n - `x/1i &_start`\n12. Cool, those five bytes encode the instruction we want. Let's display those bytes in various other forms\n - In binary notation: `x/5tb _start`\n - As signed decimal numbers: `x/5db _start`\n - As unsigned decimal numbers: `x/5ub _start`\n13. Let's display the current contents of the two registers that our instruction uses as operands\n - `p/x {$rax, $rbx}`\n14. Before we execute the instruction, let's remove the breakpoint to make our life easier\n - `delete`\n15. We can use gdb to execute one instruction at a time\n - `stepi`\n16. Let's look to see if the registers have changed\n - `p/x {$rax, $rbx}`\n17. Let's try again, but let's put a value in rax\n - `set $rax = 0b1011`\n18. Again we print the value of the registers to be sure we know what values are in them before we execute the instruction\n - `p/x {$rax, $rbx}`\n19. Remember the pc tells us where the next instruction is\n - `p /x $pc`\n20. So we need to reset it back to the address where we placed the instruction, which was the address of `_start`\n - `set $pc = &_start`\n - `stepi`\n - `p/x {$rax, $rbx}`\n\nCool, rbx now holds the number of bits that are set to one in the value in rax -- its \"population count\".\n",
"_____no_output_____"
]
],
[
[
"display(gdbFile(\"empty.gdb\", cwd=exdir, height=\"10em\", tmout=4))",
"_____no_output_____"
]
],
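[
[
"This note is not part of the original walkthrough; it is a small, illustrative Python cross-check of the gdb session above. It assumes only two facts taken from that session: the five bytes we wrote into memory (`0xF3 0x48 0x0F 0xB8 0xD8`) and the test value we placed in rax (`0b1011`). Counting the one bits of that value by hand tells us what \"population count\" we should expect to find in rbx after the final `stepi`.\n\n```python\n# Illustrative sanity check only -- not part of the original gdb session.\n# The five instruction bytes we poked into memory with gdb above.\ninstruction_bytes = [0xF3, 0x48, 0x0F, 0xB8, 0xD8]\nassert len(instruction_bytes) == 5  # one byte per `set {unsigned char}` write\n\n# The test value we loaded into rax before single stepping.\nrax = 0b1011\n\n# Population count: the number of bits that are set to one.\npopulation_count = bin(rax).count('1')\nprint(population_count)  # prints 3, the value we expect to see in rbx\n```",
"_____no_output_____"
]
],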
[
[
"### The code to create an executable for a larger empty process\n\nThe following code replaces the two lines of `.byte` directives with a single `.fill` directive that inserts, in this case, 256 zero-valued bytes into memory at `_start`.",
"_____no_output_____"
]
],
[
[
"display(Markdown(FileCodeBox(\n file=exdir + \"/empty256.s\", \n lang=\"gas\", \n title=\"<b>CODE: An empty binary with a little more room.</b>\",\n h=\"100%\",\n w=\"100%\",\n number=True\n)))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf6eebede3d25457ad42bfa81363efa1a21d20d | 13,773 | ipynb | Jupyter Notebook | notebook/depreciated/Lung_Segmentation_apply.ipynb | shaverschuren/SmartDetect_segmentation | 9fea95bbb334e375948823413778db5596f27b0c | [
"MIT"
] | 1 | 2021-05-06T18:00:56.000Z | 2021-05-06T18:00:56.000Z | notebook/depreciated/Lung_Segmentation_apply.ipynb | shaverschuren/SmartDetect_segmentation | 9fea95bbb334e375948823413778db5596f27b0c | [
"MIT"
] | null | null | null | notebook/depreciated/Lung_Segmentation_apply.ipynb | shaverschuren/SmartDetect_segmentation | 9fea95bbb334e375948823413778db5596f27b0c | [
"MIT"
] | null | null | null | 52.368821 | 117 | 0.593044 | [
[
[
"import os\n\nimport numpy as np\nimport pandas as pd\n\nimport pydicom\nimport cv2\nimport matplotlib.pyplot as plt\n\nfrom keras.models import *\nfrom keras.layers import *\nfrom keras.optimizers import *\nfrom keras.losses import binary_crossentropy\nfrom keras.utils import Sequence\nfrom keras import backend as keras\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.callbacks import ModelCheckpoint, LearningRateScheduler\n\nfrom glob import glob\nfrom tqdm import tqdm",
"Using TensorFlow backend.\n"
],
[
"INPUT_DIR = os.path.join(\"/mnt/data/Smart-Detect/\", \"Segmentation\")\n\nSEGMENTATION_DIR = os.path.join(INPUT_DIR, \"segmentation\")\n\nSEGMENTATION_MODEL = os.path.join(INPUT_DIR, \"unet_lung_seg.hdf5\")\n\n#Where segmented images will be saved\nSEGMENTATION_RESULT = os.path.join(SEGMENTATION_DIR, \"result\")\nSEGMENTATION_RESULT_TRAIN = os.path.join(\"/mnt/data/Smart-Detect/Target_Class_2D/regular_seg/\", \"covid\")\nSEGMENTATION_RESULT_TEST = os.path.join(\"/mnt/data/Smart-Detect/Segmentation/segtrain/\", \"No_Findings\")\n\n#Unused?\nSEGMENTATION_TEST_DIR = os.path.join(SEGMENTATION_DIR, \"test\")\nSEGMENTATION_TRAIN_DIR = os.path.join(SEGMENTATION_DIR, \"train/image\")\n\n#The images to segment\nRSNA_TRAIN_DIR = os.path.join(\"/mnt/data/Smart-Detect/Target_Class_2D/regular/\", \"covid\")\nRSNA_TEST_DIR = os.path.join(\"/mnt/data/Smart-Detect/ChestX-ray14/train/\", \"No_Findings\")\n\n#RSNA_LABELS_FILE = os.path.join(RSNA_DIR, \"stage_1_train_labels.csv\")\n#RSNA_CLASS_INFO_FILE = os.path.join(RSNA_DIR, \"stage_1_detailed_class_info.csv\")\n\n",
"_____no_output_____"
],
[
"def dice_coef(y_true, y_pred):\n y_true_f = keras.flatten(y_true)\n y_pred_f = keras.flatten(y_pred)\n intersection = keras.sum(y_true_f * y_pred_f)\n return (2. * intersection + 1) / (keras.sum(y_true_f) + keras.sum(y_pred_f) + 1)\n\ndef dice_coef_loss(y_true, y_pred):\n return -dice_coef(y_true, y_pred)\n\nsegmentation_model = load_model(SEGMENTATION_MODEL, \\\n custom_objects={'dice_coef_loss': dice_coef_loss, \\\n 'dice_coef': dice_coef})\n\nsegmentation_model.summary()",
"Model: \"model_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 512, 512, 1) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 512, 512, 32) 320 input_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 512, 512, 32) 9248 conv2d_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 256, 256, 32) 0 conv2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 256, 256, 64) 18496 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 256, 256, 64) 36928 conv2d_3[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_2 (MaxPooling2D) (None, 128, 128, 64) 0 conv2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 128, 128, 128 73856 max_pooling2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 128, 128, 128 147584 conv2d_5[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_3 (MaxPooling2D) (None, 64, 64, 128) 0 conv2d_6[0][0] \n__________________________________________________________________________________________________\nconv2d_7 (Conv2D) (None, 64, 64, 256) 295168 max_pooling2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_8 (Conv2D) (None, 64, 64, 256) 590080 conv2d_7[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_4 (MaxPooling2D) (None, 32, 32, 256) 0 conv2d_8[0][0] \n__________________________________________________________________________________________________\nconv2d_9 (Conv2D) (None, 32, 32, 512) 1180160 max_pooling2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_10 (Conv2D) (None, 32, 32, 512) 2359808 conv2d_9[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_1 (Conv2DTrans (None, 64, 64, 256) 524544 conv2d_10[0][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 64, 64, 512) 0 conv2d_transpose_1[0][0] \n conv2d_8[0][0] \n__________________________________________________________________________________________________\nconv2d_11 (Conv2D) (None, 64, 64, 256) 1179904 concatenate_1[0][0] \n__________________________________________________________________________________________________\nconv2d_12 (Conv2D) (None, 64, 64, 256) 590080 conv2d_11[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_2 (Conv2DTrans (None, 128, 128, 128 131200 conv2d_12[0][0] 
\n__________________________________________________________________________________________________\nconcatenate_2 (Concatenate) (None, 128, 128, 256 0 conv2d_transpose_2[0][0] \n conv2d_6[0][0] \n__________________________________________________________________________________________________\nconv2d_13 (Conv2D) (None, 128, 128, 128 295040 concatenate_2[0][0] \n__________________________________________________________________________________________________\nconv2d_14 (Conv2D) (None, 128, 128, 128 147584 conv2d_13[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_3 (Conv2DTrans (None, 256, 256, 64) 32832 conv2d_14[0][0] \n__________________________________________________________________________________________________\nconcatenate_3 (Concatenate) (None, 256, 256, 128 0 conv2d_transpose_3[0][0] \n conv2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_15 (Conv2D) (None, 256, 256, 64) 73792 concatenate_3[0][0] \n__________________________________________________________________________________________________\nconv2d_16 (Conv2D) (None, 256, 256, 64) 36928 conv2d_15[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_4 (Conv2DTrans (None, 512, 512, 32) 8224 conv2d_16[0][0] \n__________________________________________________________________________________________________\nconcatenate_4 (Concatenate) (None, 512, 512, 64) 0 conv2d_transpose_4[0][0] \n conv2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_17 (Conv2D) (None, 512, 512, 32) 18464 concatenate_4[0][0] \n__________________________________________________________________________________________________\nconv2d_18 (Conv2D) (None, 512, 512, 32) 9248 conv2d_17[0][0] \n__________________________________________________________________________________________________\nconv2d_19 (Conv2D) (None, 512, 512, 1) 33 conv2d_18[0][0] \n==================================================================================================\nTotal params: 7,759,521\nTrainable params: 7,759,521\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"def image_to_train(img):\n img = (img*(-1))+255\n npy = img /255\n \n npy = np.reshape(npy, npy.shape+(1,))\n npy = np.reshape(npy, (1,) + npy.shape)\n return npy\n\ndef train_to_image(npy):\n img = (npy[0,:,:,0]*255.).astype(np.uint8)\n kernel = np.ones((40,40),np.uint8)\n img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)\n img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)\n return img",
"_____no_output_____"
],
[
"def segment_image(pid, img, save_to):\n img = cv2.resize(img, (512, 512))\n segm_ret = segmentation_model.predict(image_to_train(img), \\\n verbose=0)\n\n #print(segm_ret.shape)\n #mask_pp = cv2.morphologyEx(segm_ret, cv2.MORPH_OPEN, 10)\n img = cv2.bitwise_and(img, img, mask=train_to_image(segm_ret))\n \n cv2.imwrite(os.path.join(save_to, \"%s.png\" % pid), img)\n\nfor filename in tqdm(glob(os.path.join(RSNA_TRAIN_DIR, \"*.png\"))):\n pid, fileext = os.path.splitext(os.path.basename(filename))\n img = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)\n segment_image(pid, img, SEGMENTATION_RESULT_TRAIN)\n\n#for filename in tqdm(glob(os.path.join(RSNA_TEST_DIR, \"*.jpg\"))):\n# pid, fileext = os.path.splitext(os.path.basename(filename))\n# img = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)\n# segment_image(pid, img, SEGMENTATION_RESULT_TEST)\n\n",
"100%|██████████| 6724/6724 [44:38<00:00, 2.51it/s] \n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecf6f0a61e7b80ff0c9dd71daf96dbd2c75e7962 | 2,642 | ipynb | Jupyter Notebook | notebooks/Elasticseach.ipynb | coatk1/test_code | b70f889ca1e84260bac27ae71d5be7824c502da5 | [
"MIT"
] | null | null | null | notebooks/Elasticseach.ipynb | coatk1/test_code | b70f889ca1e84260bac27ae71d5be7824c502da5 | [
"MIT"
] | 98 | 2020-01-11T16:43:18.000Z | 2022-03-10T19:30:33.000Z | notebooks/Elasticseach.ipynb | coatk1/test_code | b70f889ca1e84260bac27ae71d5be7824c502da5 | [
"MIT"
] | 1 | 2020-04-13T22:34:24.000Z | 2020-04-13T22:34:24.000Z | 19.284672 | 73 | 0.476911 | [
[
[
"# Elasticsearch",
"_____no_output_____"
]
],
[
[
"from elasticsearch import Elasticsearch",
"_____no_output_____"
],
[
"es = Elasticsearch([{'host': 'localhost', 'port': 9200}])",
"_____no_output_____"
],
[
"es",
"_____no_output_____"
],
[
"_es = None\n_es = Elasticsearch([{'host': 'localhost', 'port': 9200}])\nif _es.ping():\n print('Yay Connect')\nelse:\n print('Awww it could not connect!')",
"Awww it could not connect!\n"
],
[
"# import json\n# from time import sleep\n\n# import requests\n# import urllib3\n# from bs4 import BeautifulSoup",
"_____no_output_____"
],
[
"# import logging\n\n\n# def connect_elasticsearch():\n# _es = None\n# _es = Elasticsearch([{'host': 'localhost', 'port': 9200}])\n# if _es.ping():\n# print('Yay Connect')\n# else:\n# print('Awww it could not connect!')\n# return _es\n\n\n# if __name__ == '__main__':\n# logging.basicConfig(level=logging.ERROR)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf6ff1c997f411f055fc6e58fa06980bd09fab2 | 268,558 | ipynb | Jupyter Notebook | Tyler_Sheppard_LS_DS_121_Join_and_Reshape_Data.ipynb | ty3117/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling | 6ac46b342d5c156b208643301bb77db2ab3cfbdb | [
"MIT"
] | null | null | null | Tyler_Sheppard_LS_DS_121_Join_and_Reshape_Data.ipynb | ty3117/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling | 6ac46b342d5c156b208643301bb77db2ab3cfbdb | [
"MIT"
] | null | null | null | Tyler_Sheppard_LS_DS_121_Join_and_Reshape_Data.ipynb | ty3117/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling | 6ac46b342d5c156b208643301bb77db2ab3cfbdb | [
"MIT"
] | null | null | null | 53.830026 | 27,908 | 0.580787 | [
[
[
"<a href=\"https://colab.research.google.com/github/ty3117/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/Tyler_Sheppard_LS_DS_121_Join_and_Reshape_Data.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"_Lambda School Data Science_\n\n# Join and Reshape datasets\n\nObjectives\n- concatenate data with pandas\n- merge data with pandas\n- understand tidy data formatting\n- melt and pivot data with pandas\n\nLinks\n- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)\n- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data)\n - Combine Data Sets: Standard Joins\n - Tidy Data\n - Reshaping Data\n- Python Data Science Handbook\n - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append\n - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join\n - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping\n - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables\n \nReference\n- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)\n- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)",
"_____no_output_____"
],
[
"## Download data\n\nWe’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!",
"_____no_output_____"
]
],
[
[
"!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz",
"--2019-05-10 01:08:48-- https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz\nResolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.177.237\nConnecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.177.237|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 205548478 (196M) [application/x-gzip]\nSaving to: ‘instacart_online_grocery_shopping_2017_05_01.tar.gz.1’\n\ninstacart_online_gr 100%[===================>] 196.03M 41.6MB/s in 5.1s \n\n2019-05-10 01:08:53 (38.5 MB/s) - ‘instacart_online_grocery_shopping_2017_05_01.tar.gz.1’ saved [205548478/205548478]\n\n"
],
[
"!ls -1",
"instacart_2017_05_01\ninstacart_online_grocery_shopping_2017_05_01.tar.gz\ninstacart_online_grocery_shopping_2017_05_01.tar.gz.1\nsample_data\n"
],
[
"!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz",
"instacart_2017_05_01/\ninstacart_2017_05_01/._aisles.csv\ninstacart_2017_05_01/aisles.csv\ninstacart_2017_05_01/._departments.csv\ninstacart_2017_05_01/departments.csv\ninstacart_2017_05_01/._order_products__prior.csv\ninstacart_2017_05_01/order_products__prior.csv\ninstacart_2017_05_01/._order_products__train.csv\ninstacart_2017_05_01/order_products__train.csv\ninstacart_2017_05_01/._orders.csv\ninstacart_2017_05_01/orders.csv\ninstacart_2017_05_01/._products.csv\ninstacart_2017_05_01/products.csv\n"
],
[
"%cd instacart_2017_05_01",
"/content/instacart_2017_05_01\n"
],
[
"!ls -lh *.csv",
"-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv\n-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv\n-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv\n-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv\n-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv\n-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv\n"
],
[
"!tail aisles.csv",
"125,trail mix snack mix\n126,feminine care\n127,body lotions soap\n128,tortillas flat bread\n129,frozen appetizers sides\n130,hot cereal pancake mixes\n131,dry pasta\n132,beauty\n133,muscles joints pain relief\n134,specialty wines champagnes\n"
],
[
"!tail departments.csv",
"12,meat seafood\n13,pantry\n14,breakfast\n15,canned goods\n16,dairy eggs\n17,household\n18,babies\n19,snacks\n20,deli\n21,missing\n"
],
[
"!head order_products__prior.csv",
"order_id,product_id,add_to_cart_order,reordered\n2,33120,1,1\n2,28985,2,1\n2,9327,3,0\n2,45918,4,1\n2,30035,5,0\n2,17794,6,1\n2,40141,7,1\n2,1819,8,1\n2,43668,9,0\n"
],
[
"!head order_products__train.csv",
"order_id,product_id,add_to_cart_order,reordered\n1,49302,1,1\n1,11109,2,1\n1,10246,3,0\n1,49683,4,0\n1,43633,5,1\n1,13176,6,0\n1,47209,7,0\n1,22035,8,1\n36,39612,1,0\n"
],
[
"",
"_____no_output_____"
],
[
"!head products.csv",
"product_id,product_name,aisle_id,department_id\n1,Chocolate Sandwich Cookies,61,19\n2,All-Seasons Salt,104,13\n3,Robust Golden Unsweetened Oolong Tea,94,7\n4,Smart Ones Classic Favorites Mini Rigatoni With Vodka Cream Sauce,38,1\n5,Green Chile Anytime Sauce,5,13\n6,Dry Nose Oil,11,11\n7,Pure Coconut Water With Orange,98,7\n8,Cut Russet Potatoes Steam N' Mash,116,1\n9,Light Strawberry Blueberry Yogurt,120,16\n"
]
],
[
[
"# Join Datasets",
"_____no_output_____"
],
[
"## Goal: Reproduce this example\n\nThe first two orders for user id 1:",
"_____no_output_____"
]
],
[
[
"from IPython.display import display, Image\nurl = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png'\nexample = Image(url=url, width=600)\n\ndisplay(example)",
"_____no_output_____"
]
],
[
[
"## Load data\n\nHere's a list of all six CSV filenames",
"_____no_output_____"
]
],
[
[
"!ls -lh *.csv",
"-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv\n-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv\n-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv\n-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv\n-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv\n-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv\n"
]
],
[
[
"For each CSV\n- Load it with pandas\n- Look at the dataframe's shape\n- Look at its head (first rows)\n- `display(example)`\n- Which columns does it have in common with the example we want to reproduce?",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"### aisles",
"_____no_output_____"
]
],
[
[
"aisles = pd.read_csv(\"aisles.csv\")\naisles.head()",
"_____no_output_____"
],
[
"aisles.shape",
"_____no_output_____"
]
],
[
[
"### departments",
"_____no_output_____"
]
],
[
[
"departments = pd.read_csv(\"departments.csv\")\ndepartments.head()",
"_____no_output_____"
],
[
"departments.shape",
"_____no_output_____"
]
],
[
[
"### order_products__prior",
"_____no_output_____"
]
],
[
[
"order_products__prior = pd.read_csv(\"order_products__prior.csv\")\norder_products__prior.head()",
"_____no_output_____"
],
[
"order_products__prior.shape",
"_____no_output_____"
]
],
[
[
"### order_products__train",
"_____no_output_____"
]
],
[
[
"order_products__train = pd.read_csv(\"order_products__train.csv\")\norder_products__train.head()",
"_____no_output_____"
],
[
"order_products__train.shape",
"_____no_output_____"
]
],
[
[
"### orders",
"_____no_output_____"
]
],
[
[
"orders = pd.read_csv(\"orders.csv\")\norders.head()",
"_____no_output_____"
],
[
"orders.tail()",
"_____no_output_____"
]
],
[
[
"### products",
"_____no_output_____"
]
],
[
[
"products = pd.read_csv(\"products.csv\")\nproducts.head()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"## Concatenate order_products__prior and order_products__train",
"_____no_output_____"
]
],
[
[
"order_products = pd.concat([order_products__prior, order_products__train])\norder_products.shape",
"_____no_output_____"
],
[
"(order_products.shape, order_products__prior.shape, order_products__train.shape)",
"_____no_output_____"
],
[
"assert len(order_products__prior) + len(order_products__train) == len(order_products)",
"_____no_output_____"
],
[
"assert order_products__prior.shape[0] + order_products__train.shape[0] == order_products.shape[0]",
"_____no_output_____"
],
[
"assert len(order_products.columns) == len(order_products__prior.columns) == len(order_products__train.columns)",
"_____no_output_____"
],
[
"display(example)",
"_____no_output_____"
]
],
[
[
"## Get a subset of orders — the first two orders for user id 1",
"_____no_output_____"
],
[
"From `orders` dataframe:\n- user_id\n- order_id\n- order_number\n- order_dow\n- order_hour_of_day",
"_____no_output_____"
],
[
"## Merge dataframes",
"_____no_output_____"
],
[
"Merge the subset from `orders` with columns from `order_products`",
"_____no_output_____"
]
],
[
[
"idx_user_id_1 = (orders['user_id'] == 1) & (orders['order_number'] < 3)\ncolumns = ['user_id', 'order_id', 'order_number', 'order_dow', 'order_hour_of_day']\nsubset = orders.loc[idx_user_id_1, columns]\nsubset.head()",
"_____no_output_____"
]
],
[
[
"Merge with columns from `products`",
"_____no_output_____"
]
],
[
[
"columns = ['order_id', 'add_to_cart_order', 'product_id']\nmerge = pd.merge(subset, order_products[columns], on='order_id')",
"_____no_output_____"
],
[
"merge.head()",
"_____no_output_____"
],
[
"products.head()",
"_____no_output_____"
],
[
"final = pd.merge(merge, products[['product_id', 'product_name']], on='product_id')\nfinal",
"_____no_output_____"
],
[
"mapper = {col_name: col_name.replace('_', ' ') for col_name in final.columns}",
"_____no_output_____"
],
[
"final.rename(index=str, columns=mapper, inplace=True)",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"final.columns = [col_name.replace('_', ' ') for col_name in final.columns]\n# alternatively:\n# final.rename(index=str, columns=mapper, inplace=True)",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"final = final.sort_values(by=['order number', 'add to cart order'])\nfinal",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"display(example)",
"_____no_output_____"
],
[
"csv_files = !ls -1 *.csv\ncsv_files",
"_____no_output_____"
],
[
"df_dict = {}\nfor csv_f in csv_files:\n k = csv_f.split('.')[0]\n df_dict[k] = pd.read_csv(csv_f)\ndf_dict.keys()",
"_____no_output_____"
],
[
"df_dict['order_products__train'].head()",
"_____no_output_____"
]
],
[
[
"# Reshape Datasets",
"_____no_output_____"
],
[
"## Why reshape data?\n\n#### Some libraries prefer data in different formats\n\nFor example, the Seaborn data visualization library prefers data in \"Tidy\" format often (but not always).\n\n> \"[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.html#organizing-datasets) This format is alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:\n\n> - Each variable is a column\n- Each observation is a row\n\n> A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot.\"\n\n#### Data science is often about putting square pegs in round holes\n\nHere's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling!",
"_____no_output_____"
],
[
"## Hadley Wickham's Examples\n\nFrom his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\n\ntable1 = pd.DataFrame(\n [[np.nan, 2],\n [16, 11], \n [3, 1]],\n index=['John Smith', 'Jane Doe', 'Mary Johnson'], \n columns=['treatmenta', 'treatmentb'])\n\ntable2 = table1.T",
"_____no_output_____"
]
],
[
[
"\"Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild. \n\nThe table has two columns and three rows, and both rows and columns are labelled.\"",
"_____no_output_____"
]
],
[
[
"table1",
"_____no_output_____"
]
],
[
[
"\"There are many ways to structure the same underlying data. \n\nTable 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different.\"",
"_____no_output_____"
]
],
[
[
"table2",
"_____no_output_____"
]
],
[
[
"\"Table 3 reorganises Table 1 to make the values, variables and observations more clear.\n\nTable 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable.\"\n\n| name | trt | result |\n|--------------|-----|--------|\n| John Smith | a | - |\n| Jane Doe | a | 16 |\n| Mary Johnson | a | 3 |\n| John Smith | b | 2 |\n| Jane Doe | b | 11 |\n| Mary Johnson | b | 1 |",
"_____no_output_____"
],
[
"## Table 1 --> Tidy\n\nWe can use the pandas `melt` function to reshape Table 1 into Tidy format.",
"_____no_output_____"
]
],
[
[
"table1.reset_index().melt(id_vars='index')",
"_____no_output_____"
],
[
"tidy = table1.reset_index().melt(id_vars='index')\ntidy = tidy.rename(columns={'index': 'name', 'variable': 'trt', 'value': 'result'})\ntidy['trt'] = tidy['trt'].str.replace('treatment', '')\n#tidy['trt'] = tidy['trt'].apply(lambda x: x.replace('treatment', ''))",
"_____no_output_____"
],
[
"tidy",
"_____no_output_____"
]
],
[
[
"## Table 2 --> Tidy",
"_____no_output_____"
]
],
[
[
"##### LEAVE BLANK --an assignment exercise #####\ntidy2 = table2.reset_index().melt(id_vars='index')\ntidy2 = tidy2.rename(columns={'index': 'trt', 'variable': 'name', 'value': 'result'})\ntable2.reset_index().melt(id_vars='index')\ntidy2",
"_____no_output_____"
]
],
[
[
"## Tidy --> Table 1\n\nThe `pivot_table` function is the inverse of `melt`.",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"tidy2.pivot_table(index='name', columns='trt', values='result')",
"_____no_output_____"
]
],
[
[
"## Tidy --> Table 2",
"_____no_output_____"
]
],
[
[
"##### LEAVE BLANK --an assignment exercise #####\n# To reproduce Table 2, treatments go on the index and names on the columns\ntidy2.pivot_table(index='trt', columns='name', values='result')\n",
"_____no_output_____"
]
],
[
[
"# Seaborn example\n\nThe rules can be simply stated:\n\n- Each variable is a column\n- Each observation is a row\n\nA helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot.\"",
"_____no_output_____"
]
],
[
[
"sns.catplot(x='trt', y='result', col='name', \n kind='bar', data=tidy, height=2);",
"_____no_output_____"
]
],
[
[
"## Now with Instacart data",
"_____no_output_____"
]
],
[
[
"products = pd.read_csv('products.csv')\n\norder_products = pd.concat([pd.read_csv('order_products__prior.csv'), \n pd.read_csv('order_products__train.csv')])\n\norders = pd.read_csv('orders.csv')",
"_____no_output_____"
]
],
[
[
"## Goal: Reproduce part of this example\n\nInstead of a plot with 50 products, we'll just do two — the first products from each list\n- Half And Half Ultra Pasteurized\n- Half Baked Frozen Yogurt",
"_____no_output_____"
]
],
[
[
"from IPython.display import display, Image\nurl = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'\nexample = Image(url=url, width=600)\n\ndisplay(example)",
"_____no_output_____"
]
],
[
[
"So, given a `product_name` we need to calculate its `order_hour_of_day` pattern.",
"_____no_output_____"
],
[
"## Subset and Merge\n\nOne challenge of performing a merge on this data is that the `products` and `orders` datasets do not have any common columns that we can merge on. Due to this we will have to use the `order_products` dataset to provide the columns that we will use to perform the merge.",
"_____no_output_____"
]
],
[
[
"a = products[['product_id', 'product_name']]\nb = order_products[['order_id', 'product_id']]\nc = orders[['order_id', 'order_hour_of_day']]\n\nmerged1 = pd.merge(a,b)\nmerged2 = pd.merge(merged1,c)",
"_____no_output_____"
],
[
"merged1.head()",
"_____no_output_____"
],
[
"merged2.head()",
"_____no_output_____"
],
[
"merged2.shape",
"_____no_output_____"
],
[
"product_names = [\n 'Half And Half Ultra Pasteurized',\n 'Half Baked Frozen Yogurt'\n]",
"_____no_output_____"
],
[
"idx = (merged2['product_name'] == product_names[0]) | (merged2['product_name'] == product_names[1])",
"_____no_output_____"
],
[
"idx.sum()",
"_____no_output_____"
],
[
"subset = merged2[idx]",
"_____no_output_____"
],
[
"subset.head()",
"_____no_output_____"
]
],
[
[
"## 4 ways to reshape and plot",
"_____no_output_____"
],
[
"### 1. value_counts",
"_____no_output_____"
]
],
[
[
"cream = subset[subset['product_name'] == product_names[0]]\nfroyo = subset[subset['product_name'] == product_names[1]]",
"_____no_output_____"
],
[
"cream.shape, froyo.shape",
"_____no_output_____"
],
[
"cream['order_hour_of_day'].value_counts(normalize=True).sort_index().plot()\nfroyo['order_hour_of_day'].value_counts(normalize=True).sort_index().plot()",
"_____no_output_____"
],
[
"display(example)",
"_____no_output_____"
]
],
[
[
"### 2. crosstab",
"_____no_output_____"
]
],
[
[
"table = pd.crosstab(subset['order_hour_of_day'], subset['product_name'])\ntable.plot()",
"_____no_output_____"
]
],
[
[
"### 3. Pivot Table",
"_____no_output_____"
]
],
[
[
"subset.pivot_table(index='order_hour_of_day', \n columns='product_name', \n values='order_id',\n aggfunc=len).plot()",
"_____no_output_____"
]
],
[
[
"### 4. melt",
"_____no_output_____"
]
],
[
[
"table.head()",
"_____no_output_____"
],
[
"melted = table.reset_index().melt(id_vars='order_hour_of_day')\nmelted.head()",
"_____no_output_____"
],
[
"sns.relplot(x='order_hour_of_day', \n y='value', \n hue='product_name', \n data=melted, \n kind='line');",
"_____no_output_____"
]
],
[
[
"# Assignment\n\n## Join Data Section\n\nThese are the top 10 most frequently ordered products. How many times was each ordered? \n\n1. Banana\n2. Bag of Organic Bananas\n3. Organic Strawberries\n4. Organic Baby Spinach \n5. Organic Hass Avocado\n6. Organic Avocado\n7. Large Lemon \n8. Strawberries\n9. Limes \n10. Organic Whole Milk\n\nFirst, write down which columns you need and which dataframes have them.\n\nNext, merge these into a single dataframe.\n\nThen, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.\n\n## Reshape Data Section\n\n- Replicate the lesson code\n- Complete the code cells we skipped near the beginning of the notebook\n- Table 2 --> Tidy\n- Tidy --> Table 2\n- Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"merged2['product_name'].value_counts().head(10)",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nflights = sns.load_dataset('flights')\n\nflights.head(12)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nsns.set()\n\n# Pivot passengers by year (index) and month (columns), then plot one line per month\nflights.pivot_table(values='passengers', index='year', columns='month').plot()\nplt.ylabel('passengers');",
"_____no_output_____"
],
[
"flights.pivot_table(index='year', columns='month', values='passengers')",
"_____no_output_____"
]
],
[
[
"## Join Data Stretch Challenge\n\nThe [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of \"**Popular products** purchased earliest in the day (green) and latest in the day (red).\" \n\nThe post says,\n\n> \"We can also see the time of day that users purchase specific products.\n\n> Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening.\n\n> **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**\"\n\nYour challenge is to reproduce the list of the top 25 latest ordered popular products.\n\nWe'll define \"popular products\" as products with more than 2,900 orders.\n\n## Reshape Data Stretch Challenge\n\n_Try whatever sounds most interesting to you!_\n\n- Replicate more of Instacart's visualization showing \"Hour of Day Ordered\" vs \"Percent of Orders by Product\"\n- Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing \"Number of Purchases\" vs \"Percent Reorder Purchases\"\n- Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)\n- Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecf6ff27bdc4163e6f6394396e4f4e7ca99e34f2 | 147,209 | ipynb | Jupyter Notebook | session/translation/ms-en/transformer-large.ipynb | AetherPrior/malaya | 45d37b171dff9e92c5d30bd7260b282cd0912a7d | [
"MIT"
] | 88 | 2021-01-06T10:01:31.000Z | 2022-03-30T17:34:09.000Z | session/translation/ms-en/transformer-large.ipynb | AetherPrior/malaya | 45d37b171dff9e92c5d30bd7260b282cd0912a7d | [
"MIT"
] | 43 | 2021-01-14T02:44:41.000Z | 2022-03-31T19:47:42.000Z | session/translation/ms-en/transformer-large.ipynb | AetherPrior/malaya | 45d37b171dff9e92c5d30bd7260b282cd0912a7d | [
"MIT"
] | 38 | 2021-01-06T07:15:03.000Z | 2022-03-19T05:07:50.000Z | 66.852407 | 637 | 0.734724 | [
[
[
"from tensor2tensor import models\nfrom tensor2tensor import problems\nfrom tensor2tensor.layers import common_layers\nfrom tensor2tensor.utils import trainer_lib\nfrom tensor2tensor.utils import t2t_model\nfrom tensor2tensor.utils import registry\nfrom tensor2tensor.utils import metrics\nfrom tensor2tensor.data_generators import problem\nfrom tensor2tensor.data_generators import text_problems\nfrom tensor2tensor.data_generators import translate\nfrom tensor2tensor.utils import registry",
"WARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensor2tensor/utils/optimize.py:187: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensor2tensor/models/research/neural_stack.py:52: The name tf.nn.rnn_cell.RNNCell is deprecated. Please use tf.compat.v1.nn.rnn_cell.RNNCell instead.\n\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensor2tensor/utils/trainer_lib.py:111: The name tf.OptimizerOptions is deprecated. Please use tf.compat.v1.OptimizerOptions instead.\n\n"
],
[
"@registry.register_problem\nclass TRANSLATION32k(translate.TranslateProblem):\n\n @property\n def additional_training_datasets(self):\n \"\"\"Allow subclasses to add training datasets.\"\"\"\n return []",
"_____no_output_____"
],
[
"PROBLEM = 'translatio_n32k'\nproblem = problems.problem(PROBLEM)",
"_____no_output_____"
],
[
"!mkdir t2t/data2\n!cp t2t/data/vocab.translatio_n32k.32768.subwords t2t/data2/",
"mkdir: cannot create directory ‘t2t/data2’: File exists\r\n"
],
[
"import tensorflow as tf\nimport os\n\nvocab_file = \"t2t/data/vocab.translatio_n32k.32768.subwords\"\nckpt_path = tf.train.latest_checkpoint(os.path.join('t2t/train-large'))\nvocab_file, ckpt_path",
"_____no_output_____"
],
[
"from t import text_encoder\n\nencoder = text_encoder.SubwordTextEncoder(vocab_file)",
"_____no_output_____"
],
[
"class Model:\n def __init__(self, HPARAMS = \"transformer_big\", DATA_DIR = 't2t/data2'):\n \n self.X = tf.placeholder(tf.int32, [None, None])\n self.Y = tf.placeholder(tf.int32, [None, None])\n \n self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32)\n maxlen_decode = 50 + tf.reduce_max(self.X_seq_len)\n \n x = tf.expand_dims(tf.expand_dims(self.X, -1), -1)\n y = tf.expand_dims(tf.expand_dims(self.Y, -1), -1)\n \n features = {\n \"inputs\": x,\n \"targets\": y,\n \"target_space_id\": tf.constant(1, dtype=tf.int32),\n }\n print(features)\n \n Modes = tf.estimator.ModeKeys\n hparams = trainer_lib.create_hparams(HPARAMS, data_dir=DATA_DIR, problem_name=PROBLEM)\n translate_model = registry.model('transformer')(hparams, Modes.PREDICT)\n logits, _ = translate_model(features)\n \n with tf.variable_scope(tf.get_variable_scope(), reuse=True):\n self.fast_result = translate_model._greedy_infer(features, maxlen_decode)[\"outputs\"]\n self.beam_result = translate_model._beam_decode_slow(\n features, maxlen_decode, beam_size=5, \n top_beams=1, alpha=1.0)[\"outputs\"]\n \n self.fast_result = tf.identity(self.fast_result, name = 'greedy')\n self.beam_result = tf.identity(self.beam_result, name = 'beam')\n \ntf.reset_default_graph()\nsess = tf.InteractiveSession()\nmodel = Model()\nvar_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)\nsaver = tf.train.Saver(var_list = var_lists)\nsaver.restore(sess, ckpt_path)",
"WARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py:507: calling count_nonzero (from tensorflow.python.ops.math_ops) with axis is deprecated and will be removed in a future version.\nInstructions for updating:\nreduction_indices is deprecated, use axis instead\n"
],
[
"string = 'sedangkan,Xpernah pon kita kacau solat n penampilan orang lain.'\nencoded = encoder.encode(string) + [1]\nf, b = sess.run([model.fast_result, model.beam_result], feed_dict = {model.X: [encoded]})\nencoder.decode(f[0]), encoder.decode(b[0])",
"_____no_output_____"
],
[
"string = 'Agama pada lazimnya bermakna kepercayaan kepada Tuhan, atau sesuatu kuasa yang ghaib dan sakti seperti Dewa, dan juga amalan dan institusi yang berkait dengan kepercayaan tersebut. Agama dan kepercayaan merupakan dua pekara yang sangat berkaitan. Tetapi Agama mempunyai makna yang lebih luas, yakni merujuk kepada satu sistem kepercayaan yang kohensif, dan kepercayaan ini adalah mengenai aspek ketuhanan.'\nencoded = encoder.encode(string) + [1]\nf, b = sess.run([model.fast_result, model.beam_result], feed_dict = {model.X: [encoded]})\nencoder.decode(f[0]), encoder.decode(b[0])",
"_____no_output_____"
],
[
"string = 'KOTA KINABALU: Pengumuman Datuk Seri Shafie Apdal sebagai calon Perdana Menteri Pakatan Harapan Plus oleh Tun Dr Mahathir Mohamad dilihat sebagai satu taktik pecah dan perintah yang sudah menjadi kebiasaan bagi bekas perdana menteri itu. Setiausaha Pemuda Pakatan Harapan Sabah, Razeef Rakimin berkata, Tun Mahathir sudah ketandusan idea dan pencalonan Shafie akan menjadi perangkap kepada Pakatan Harapan.“Pertama, siapa dia untuk menentukan calon Perdana Menteri Pakatan Harapan? Beliau bukan pemimpin Pakatan Harapan bahkan beliau tidak mempunyai parti politik setelah dibuang oleh parti yang diasaskannya sendiri.'\nencoded = encoder.encode(string) + [1]\nf, b = sess.run([model.fast_result, model.beam_result], feed_dict = {model.X: [encoded]})\nencoder.decode(f[0]), encoder.decode(b[0])",
"_____no_output_____"
],
[
"string = 'Pada 9 Disember 1997, CNBC Asia bergabung dengan Asia Business News, sehingga menjadi CNBC Asia Business News, namun setahun kemudian saluran ini di-\"restore\" menjadi CNBC Asia.'\nencoded = encoder.encode(string) + [1]\nf, b = sess.run([model.fast_result, model.beam_result], feed_dict = {model.X: [encoded]})\nencoder.decode(f[0]), encoder.decode(b[0])",
"_____no_output_____"
],
[
"string = 'Puan Wong Shu Qi [Kluang] minta Menteri Kewangan menyatakan sama ada kerajaan merancang untuk membenarkan syarikat ramalan nombor Magnum, Sports Toto dan DaMaCai mengadakan cabutan khas pada setiap hari Selasa bagi tahun 2019 dan masa yang akan datang.'\nencoded = encoder.encode(string) + [1]\nf, b = sess.run([model.fast_result, model.beam_result], feed_dict = {model.X: [encoded]})\nencoder.decode(f[0]), encoder.decode(b[0])",
"_____no_output_____"
],
[
"batch_size = 128\n\npath = 't2t/tmp/test'\n\nwith open(os.path.join(path, 'left.txt')) as fopen:\n left = fopen.read().split('\\n')\n \nwith open(os.path.join(path, 'right.txt')) as fopen:\n right = fopen.read().split('\\n')\n \nlen(left), len(right)",
"_____no_output_____"
],
[
"p = sess.run(model.fast_result, feed_dict = {model.X: [encoder.encode(left[0]) + [1]]}).tolist()\nresults = []\nfor row in p:\n results.append([i for i in row if i not in [0, 1]])\nresults",
"_____no_output_____"
],
[
"from tensor2tensor.utils import bleu_hook",
"_____no_output_____"
],
[
"bleu_hook.compute_bleu(reference_corpus = [encoder.encode(right[0])], \n translation_corpus = results)",
"_____no_output_____"
],
[
"pad_sequences = tf.keras.preprocessing.sequence.pad_sequences",
"_____no_output_____"
],
[
"from tqdm import tqdm\n\nresults = []\nfor i in tqdm(range(0, len(left), batch_size)):\n index = min(i + batch_size, len(left))\n x = left[i: index]\n encoded = [encoder.encode(l) + [1] for l in x]\n batch_x = pad_sequences(encoded, padding='post')\n \n p = sess.run(model.fast_result, feed_dict = {model.X: batch_x}).tolist()\n result = []\n for row in p:\n result.append([i for i in row if i not in [0, 1]])\n results.extend(result)",
"100%|██████████| 782/782 [51:48<00:00, 3.97s/it] \n"
],
[
"len(results)",
"_____no_output_____"
],
[
"rights = [encoder.encode(r) for r in right[:len(results)]]\nbleu_hook.compute_bleu(reference_corpus = rights,\n translation_corpus = results)",
"_____no_output_____"
],
[
"saver = tf.train.Saver(tf.trainable_variables())\nsaver.save(sess, 'transformer-large/model.ckpt')",
"_____no_output_____"
],
[
"strings = ','.join(\n [\n n.name\n for n in tf.get_default_graph().as_graph_def().node\n if ('Variable' in n.op\n or 'Placeholder' in n.name\n or 'greedy' in n.name\n or 'beam' in n.name\n or 'alphas' in n.name\n or 'self/Softmax' in n.name)\n and 'adam' not in n.name\n and 'beta' not in n.name\n and 'global_step' not in n.name\n and 'modality' not in n.name\n and 'Assign' not in n.name\n ]\n)\nstrings.split(',')",
"_____no_output_____"
],
[
"def freeze_graph(model_dir, output_node_names):\n\n if not tf.gfile.Exists(model_dir):\n raise AssertionError(\n \"Export directory doesn't exists. Please specify an export \"\n 'directory: %s' % model_dir\n )\n\n checkpoint = tf.train.get_checkpoint_state(model_dir)\n input_checkpoint = checkpoint.model_checkpoint_path\n\n absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])\n output_graph = absolute_model_dir + '/frozen_model.pb'\n clear_devices = True\n with tf.Session(graph = tf.Graph()) as sess:\n saver = tf.train.import_meta_graph(\n input_checkpoint + '.meta', clear_devices = clear_devices\n )\n saver.restore(sess, input_checkpoint)\n output_graph_def = tf.graph_util.convert_variables_to_constants(\n sess,\n tf.get_default_graph().as_graph_def(),\n output_node_names.split(','),\n )\n with tf.gfile.GFile(output_graph, 'wb') as f:\n f.write(output_graph_def.SerializeToString())\n print('%d ops in the final graph.' % len(output_graph_def.node))",
"_____no_output_____"
],
[
"freeze_graph('transformer-large', strings)",
"INFO:tensorflow:Restoring parameters from transformer-large/model.ckpt\n"
],
[
"def load_graph(frozen_graph_filename):\n with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:\n graph_def = tf.GraphDef()\n graph_def.ParseFromString(f.read())\n with tf.Graph().as_default() as graph:\n tf.import_graph_def(graph_def)\n return graph",
"_____no_output_____"
],
[
"g = load_graph('transformer-large/frozen_model.pb')\nx = g.get_tensor_by_name('import/Placeholder:0')\ngreedy = g.get_tensor_by_name('import/greedy:0')\nbeam = g.get_tensor_by_name('import/beam:0')\ntest_sess = tf.InteractiveSession(graph = g)",
"/home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py:1750: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).\n warnings.warn('An interactive session is already active. This can '\n"
],
[
"string = \"Beliau yang juga saksi pendakwaan kesembilan berkata, ia bagi mengelak daripada wujud isu digunakan terhadap Najib.\"\nencoded = encoder.encode(string) + [1]\ng, b = test_sess.run([greedy, beam], feed_dict = {x:[encoded]})",
"_____no_output_____"
],
[
"encoder.decode(g[0])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf70064be5775a372362cd4de1b63f97444b660 | 35,131 | ipynb | Jupyter Notebook | keychest/keychestenv-play.ipynb | sergeivolodin/causality-disentanglement-rl | 5a41b4a2e3d85fa7e9c8450215fdc6cf954df867 | [
"CC0-1.0"
] | 2 | 2020-12-11T05:26:24.000Z | 2021-04-21T06:12:58.000Z | keychest/keychestenv-play.ipynb | sergeivolodin/causality-disentanglement-rl | 5a41b4a2e3d85fa7e9c8450215fdc6cf954df867 | [
"CC0-1.0"
] | 9 | 2020-04-30T16:29:50.000Z | 2021-03-26T07:32:18.000Z | keychest/keychestenv-play.ipynb | sergeivolodin/causality-disentanglement-rl | 5a41b4a2e3d85fa7e9c8450215fdc6cf954df867 | [
"CC0-1.0"
] | null | null | null | 102.422741 | 11,188 | 0.864792 | [
[
[
"import gin\nfrom keychest.keychestenv import KeyChestEnvironmentRandom, KeyChestGymEnv, KeyChestEnvironment\nfrom keychest.keychestenv_gui import jupyter_gui\nfrom keychest.keychestenv_gofa import features_for_obs, max_reward, hardcoded_policy_step\nfrom matplotlib import pyplot as plt\nfrom helpers import get_env_performance\nimport numpy as np\nfrom tqdm import tqdm\nfrom time import time",
"_____no_output_____"
],
[
"gin.enter_interactive_mode()\n#gin.parse_config_file('./keychest/config/5x5.gin')\ngin.parse_config_file('./keychest/config/10x10.gin')\n#gin.bind_parameter('KeyChestEnvironment.flatten_observation', False)",
"_____no_output_____"
],
[
"env = KeyChestGymEnv()",
"_____no_output_____"
],
[
"jupyter_gui(env)",
"_____no_output_____"
],
[
"#print(\"Steps per second:\", get_env_performance(env, 3))",
"_____no_output_____"
],
[
"env.observation_space.shape, env.action_space.shape, env.reset().shape",
"_____no_output_____"
],
[
"#import gym\n#env1 = gym.make('CartPole-v0')\n#get_env_performance(env1, 3)",
"_____no_output_____"
],
[
"features_for_obs(env.reset())",
"_____no_output_____"
],
[
"def get_policy_stats(env, policy, title=\"Reward stats\"):\n \"\"\"Show reward distribution for a policy.\"\"\"\n mr = max_reward(env)\n \n def reward_on_policy(env, policy):\n done = False\n Rmax = max_reward(env)\n obs = env.reset()\n R = 0\n while not done:\n act = policy(env)\n obs, rew, done, info = env.step(act)\n #plt.imshow(env.render(mode='rgb_array'))\n #plt.show()\n R += rew\n #print(f\"Reward {R} out of {Rmax}\")\n return R\n\n rews = [reward_on_policy(env, policy) for _ in tqdm(range(500))]\n\n print(title)\n print('min/max/mean/std/median', np.min(rews), np.max(rews), np.mean(rews), np.std(rews), np.median(rews))\n print(\"Reward upper bound\", mr)\n\n plt.title(title)\n plt.hist(rews, alpha=0.5, label='Reward')\n plt.axvline(mr, color='red', label='Maximal reward')\n plt.legend()\n plt.show()",
"_____no_output_____"
],
[
"get_policy_stats(env, hardcoded_policy_step, title=\"Rewards by hardcoded policy\")",
"100%|██████████| 500/500 [00:16<00:00, 30.00it/s]\n"
],
[
"get_policy_stats(env, lambda x: env.action_space.sample(), \"Rewards on a random policy\")",
"100%|██████████| 500/500 [00:02<00:00, 188.51it/s]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf71ce3c8e123af9a8bc4fd1725f2fc82bc4e59 | 2,123 | ipynb | Jupyter Notebook | Notebook/Day7.ipynb | huiwenzhang/100-Days-ML-Note | 1503dff1960bf19b05812fd0d7fc3f633874654f | [
"MIT"
] | 1 | 2019-01-08T04:55:57.000Z | 2019-01-08T04:55:57.000Z | Notebook/.ipynb_checkpoints/Day7-checkpoint.ipynb | huiwenzhang/100-Days-ML-Note | 1503dff1960bf19b05812fd0d7fc3f633874654f | [
"MIT"
] | null | null | null | Notebook/.ipynb_checkpoints/Day7-checkpoint.ipynb | huiwenzhang/100-Days-ML-Note | 1503dff1960bf19b05812fd0d7fc3f633874654f | [
"MIT"
] | 2 | 2019-09-09T07:00:19.000Z | 2019-11-17T04:00:21.000Z | 24.686047 | 208 | 0.590674 | [
[
[
"**How successful you are is decided by the people around you instead of youself. Finding the group which shares most similarites to you, you will know what kind of person you will become in the future**",
"_____no_output_____"
],
[
"## What is k-NN?\n- It is a classification method\n- It assigns points to a class by connting the most votes (label) from his companions(top k close samples)\n- Non-parametric: doesn't have a distrution model for the data\n- Instance-based: classify by memorize and counting (also called lazy algorithm)",
"_____no_output_____"
],
[
"## How does it work?\n- Prepare a set of labelled objects (dataset)\n- Sortted top k objects based on some kind of distance measure\n- Predict by counting most votes",
"_____no_output_____"
],
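[
"# Editor's sketch (not part of the original note): a minimal k-NN classifier in plain NumPy,\n# assuming Euclidean distance and majority voting as described above. The names used here\n# (X_train, y_train, knn_predict, k) are illustrative, not taken from the note.\nimport numpy as np\nfrom collections import Counter\n\ndef knn_predict(X_train, y_train, x_query, k=3):\n    # distance from the query point to every labelled object\n    dists = np.linalg.norm(X_train - x_query, axis=1)\n    # indices of the k closest objects\n    nearest = np.argsort(dists)[:k]\n    # predict by counting the most common label among them\n    return Counter(y_train[nearest]).most_common(1)[0][0]\n\nX_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])\ny_train = np.array(['a', 'a', 'b', 'b'])\nprint(knn_predict(X_train, y_train, np.array([0.05, 0.1]), k=3))",
"_____no_output_____"
],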
[
"## How to predict?\n- Predict by counting most votes in top k objects",
"_____no_output_____"
],
[
"## Measure of distance\n- Euclidean distance\n- Hamming distance\n- Manhattan distance\n- KL divergence",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecf71d50be35e8f294e3bf6181cad7faa67e2d6b | 7,402 | ipynb | Jupyter Notebook | Functions and decorators.ipynb | IroniX2/python-exercises | bea6675e8375f391ad5d47b965c2e094584a5058 | [
"MIT"
] | null | null | null | Functions and decorators.ipynb | IroniX2/python-exercises | bea6675e8375f391ad5d47b965c2e094584a5058 | [
"MIT"
] | null | null | null | Functions and decorators.ipynb | IroniX2/python-exercises | bea6675e8375f391ad5d47b965c2e094584a5058 | [
"MIT"
] | null | null | null | 18.930946 | 78 | 0.481762 | [
[
[
"# Functions & Decorators",
"_____no_output_____"
],
[
"## First class functions",
"_____no_output_____"
],
[
"https://www.youtube.com/watch?v=r7Dtus7N4pI",
"_____no_output_____"
]
],
[
[
"def my_first_function(x):\n print(x)",
"_____no_output_____"
],
[
"def greet():\n return 'Hello'",
"_____no_output_____"
],
[
"my_first_function(greet)",
"<function greet at 0x7f7cec7835e0>\n"
]
],
[
[
"## Inner functions",
"_____no_output_____"
]
],
[
[
"def foo():\n def inner():\n return 'inner msg'\n \n #print(inner())\n return inner",
"_____no_output_____"
],
[
"foo()()",
"_____no_output_____"
]
],
[
[
"## Decorators",
"_____no_output_____"
],
[
"### simple (why tho?)",
"_____no_output_____"
]
],
[
[
"def my_decorator(func):\n def wrapper():\n print(\"This is done before function exec\")\n func()\n print(\"This is done after function exec\")\n return wrapper",
"_____no_output_____"
],
[
"def greet():\n print(\"Hello\")",
"_____no_output_____"
],
[
"greet = my_decorator(greet)\ngreet()",
"This is done before function exec\nHello\nThis is done after function exec\n"
],
[
"### syntactic sugar",
"_____no_output_____"
],
[
"@my_decorator\ndef greet2():\n print(\"Hello\")",
"_____no_output_____"
],
[
"greet2() # actually the wrapper()",
"This is done before function exec\nHello\nThis is done after function exec\n"
]
],
[
[
"## Decorating functions with arguments",
"_____no_output_____"
]
],
[
[
"def my_decorator(func):\n def wrapper(*args, **kwargs):\n print(\"This is done before function exec\")\n func(*args, **kwargs)\n print(\"This is done after function exec\")\n return wrapper",
"_____no_output_____"
],
[
"@my_decorator\ndef greet(name, sir):\n print(f'Hello {name} {sir}')",
"_____no_output_____"
],
[
"greet('peter', 'Angel')",
"This is done before function exec\nHello peter Angel\nThis is done after function exec\n"
],
[
"@my_decorator\ndef msg(xxx):\n print(xxx)",
"_____no_output_____"
],
[
"msg(\"text goes here my mans\")",
"This is done before function exec\ntext goes here my mans\nThis is done after function exec\n"
]
],
[
[
"### Returning values from Decorated functions",
"_____no_output_____"
]
],
[
[
"def my_decorator(func):\n def wrapper(*args):\n x = 'From wrapper before execution'\n x += func(*args)\n x += 'From wrapper after execution'\n return x\n return wrapper",
"_____no_output_____"
],
[
"@my_decorator\ndef greet(name):\n return f'Hello {name}'",
"_____no_output_____"
],
[
"greet('Alex')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecf733fac173c3a68ac2708a67ff3ec4c668415f | 63,485 | ipynb | Jupyter Notebook | agent/1.turtle-agent.ipynb | tooyipjee/Stock-Prediction-Models | 9dc3352821bbd1c08d9a85f8f3c66aeb24f2deca | [
"Apache-2.0"
] | null | null | null | agent/1.turtle-agent.ipynb | tooyipjee/Stock-Prediction-Models | 9dc3352821bbd1c08d9a85f8f3c66aeb24f2deca | [
"Apache-2.0"
] | null | null | null | agent/1.turtle-agent.ipynb | tooyipjee/Stock-Prediction-Models | 9dc3352821bbd1c08d9a85f8f3c66aeb24f2deca | [
"Apache-2.0"
] | null | null | null | 128.772819 | 46,168 | 0.822509 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()",
"_____no_output_____"
],
[
"df = pd.read_csv('../dataset/GOOG-year.csv')\ndf.head()",
"_____no_output_____"
],
[
"count = int(np.ceil(len(df) * 0.1))\nsignals = pd.DataFrame(index=df.index)\nsignals['signal'] = 0.0\nsignals['trend'] = df['Close']\nsignals['RollingMax'] = (signals.trend.shift(1).rolling(count).max())\nsignals['RollingMin'] = (signals.trend.shift(1).rolling(count).min())\nsignals.loc[signals['RollingMax'] < signals.trend, 'signal'] = -1\nsignals.loc[signals['RollingMin'] > signals.trend, 'signal'] = 1\nsignals",
"_____no_output_____"
],
[
"def buy_stock(\n real_movement,\n signal,\n initial_money = 10000,\n max_buy = 1,\n max_sell = 1,\n):\n \"\"\"\n real_movement = actual movement in the real world\n delay = how much interval you want to delay to change our decision from buy to sell, vice versa\n initial_state = 1 is buy, 0 is sell\n initial_money = 1000, ignore what kind of currency\n max_buy = max quantity for share to buy\n max_sell = max quantity for share to sell\n \"\"\"\n starting_money = initial_money\n states_sell = []\n states_buy = []\n current_inventory = 0\n\n def buy(i, initial_money, current_inventory):\n shares = initial_money // real_movement[i]\n if shares < 1:\n print(\n 'day %d: total balances %f, not enough money to buy a unit price %f'\n % (i, initial_money, real_movement[i])\n )\n else:\n if shares > max_buy:\n buy_units = max_buy\n else:\n buy_units = shares\n initial_money -= buy_units * real_movement[i]\n current_inventory += buy_units\n print(\n 'day %d: buy %d units at price %f, total balance %f'\n % (i, buy_units, buy_units * real_movement[i], initial_money)\n )\n states_buy.append(0)\n return initial_money, current_inventory\n\n for i in range(real_movement.shape[0] - int(0.025 * len(df))):\n state = signal[i]\n if state == 1:\n initial_money, current_inventory = buy(\n i, initial_money, current_inventory\n )\n states_buy.append(i)\n elif state == -1:\n if current_inventory == 0:\n print('day %d: cannot sell anything, inventory 0' % (i))\n else:\n if current_inventory > max_sell:\n sell_units = max_sell\n else:\n sell_units = current_inventory\n current_inventory -= sell_units\n total_sell = sell_units * real_movement[i]\n initial_money += total_sell\n try:\n invest = (\n (real_movement[i] - real_movement[states_buy[-1]])\n / real_movement[states_buy[-1]]\n ) * 100\n except:\n invest = 0\n print(\n 'day %d, sell %d units at price %f, investment %f %%, total balance %f,'\n % (i, sell_units, total_sell, invest, initial_money)\n )\n states_sell.append(i)\n \n invest = ((initial_money - starting_money) / starting_money) * 100\n total_gains = initial_money - starting_money\n return states_buy, states_sell, total_gains, invest",
"_____no_output_____"
],
[
"states_buy, states_sell, total_gains, invest = buy_stock(df.Close, signals['signal'])",
"day 28: cannot sell anything, inventory 0\nday 29: cannot sell anything, inventory 0\nday 30: cannot sell anything, inventory 0\nday 44: cannot sell anything, inventory 0\nday 45: cannot sell anything, inventory 0\nday 47: cannot sell anything, inventory 0\nday 54: cannot sell anything, inventory 0\nday 55: cannot sell anything, inventory 0\nday 56: cannot sell anything, inventory 0\nday 85: cannot sell anything, inventory 0\nday 86: cannot sell anything, inventory 0\nday 87: cannot sell anything, inventory 0\nday 88: cannot sell anything, inventory 0\nday 89: cannot sell anything, inventory 0\nday 90: cannot sell anything, inventory 0\nday 91: cannot sell anything, inventory 0\nday 92: cannot sell anything, inventory 0\nday 96: buy 1 units at price 817.580017, total balance 9182.419983\nday 97: buy 1 units at price 814.429993, total balance 8367.989990\nday 117, sell 1 units at price 862.760010, investment 5.934214 %, total balance 9230.750000,\nday 118, sell 1 units at price 872.299988, investment 7.105582 %, total balance 10103.049988,\nday 120: cannot sell anything, inventory 0\nday 121: cannot sell anything, inventory 0\nday 122: cannot sell anything, inventory 0\nday 123: cannot sell anything, inventory 0\nday 124: cannot sell anything, inventory 0\nday 125: cannot sell anything, inventory 0\nday 127: cannot sell anything, inventory 0\nday 132: cannot sell anything, inventory 0\nday 133: cannot sell anything, inventory 0\nday 138: cannot sell anything, inventory 0\nday 139: cannot sell anything, inventory 0\nday 140: cannot sell anything, inventory 0\nday 141: cannot sell anything, inventory 0\nday 142: cannot sell anything, inventory 0\nday 146: cannot sell anything, inventory 0\nday 162: buy 1 units at price 927.330017, total balance 9175.719971\nday 164: buy 1 units at price 917.789978, total balance 8257.929993\nday 165: buy 1 units at price 908.729980, total balance 7349.200013\nday 166: buy 1 units at price 898.700012, total balance 6450.500001\nday 177, sell 1 units at price 970.890015, investment 8.032714 %, total balance 7421.390016,\nday 179, sell 1 units at price 972.919983, investment 8.258592 %, total balance 8394.309999,\nday 180, sell 1 units at price 980.340027, investment 9.084234 %, total balance 9374.650026,\nday 200: buy 1 units at price 906.659973, total balance 8467.990053\nday 226, sell 1 units at price 944.489990, investment 4.172459 %, total balance 9412.480043,\nday 227, sell 1 units at price 949.500000, investment 4.725038 %, total balance 10361.980043,\nday 228: cannot sell anything, inventory 0\nday 232: cannot sell anything, inventory 0\nday 233: cannot sell anything, inventory 0\nday 236: cannot sell anything, inventory 0\nday 238: cannot sell anything, inventory 0\nday 239: cannot sell anything, inventory 0\nday 240: cannot sell anything, inventory 0\nday 241: cannot sell anything, inventory 0\n"
],
[
"close = df['Close']\nfig = plt.figure(figsize = (15,5))\nplt.plot(close, color='r', lw=2.)\nplt.plot(close, '^', markersize=10, color='m', label = 'buying signal', markevery = states_buy)\nplt.plot(close, 'v', markersize=10, color='k', label = 'selling signal', markevery = states_sell)\nplt.title('total gains %f, total investment %f%%'%(total_gains, invest))\nplt.legend()\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf734f2fcb86e7e76763cb532361c48fcef3047 | 291,533 | ipynb | Jupyter Notebook | [산학협업과제1] 데이터 설명서.ipynb | redshim/WorkWithAcademy | 9fa41efa6b8ea042a7189cbfdc0222e88bccd76a | [
"CC0-1.0"
] | null | null | null | [산학협업과제1] 데이터 설명서.ipynb | redshim/WorkWithAcademy | 9fa41efa6b8ea042a7189cbfdc0222e88bccd76a | [
"CC0-1.0"
] | null | null | null | [산학협업과제1] 데이터 설명서.ipynb | redshim/WorkWithAcademy | 9fa41efa6b8ea042a7189cbfdc0222e88bccd76a | [
"CC0-1.0"
] | null | null | null | 559.564299 | 274,056 | 0.931929 | [
[
[
"## ★ 산학협엽과제 데이터 설명서\n* 데이터 소유권 현대자동차",
"_____no_output_____"
],
[
"- 변수명 및 데이터 정의\n 총 30개 File 전송\n Out_1.csv ~ Out_30.csv",
"_____no_output_____"
],
[
"\n| 변수명 | 설명 |\n| ------------- |:-------------:|\n|\tTime\t|\t시간\t|\n|\tFLSD\t|\t스프링 변위\t|\n|\tFLSARF\t|\t쇽업소버 로드 힘(Fz)\t|\n|\tFLSBLF\t|\t스탭바 링크 힘(Fa)\t|\n|\tFLFX\t|\tFx\t|\n|\tFLFY\t|\tFy\t|\n|\tFLFZ\t|\tFz\t|\n|\tFLMX\t|\tMx\t|\n|\tFLMY\t|\tMy\t|\n|\tFLMZ\t|\tMz\t|\n|\tFLLCABFX\t|\t로어암 힘(Fx)\t|\n|\tFLLCABFY\t|\t로어암 힘(Fy)\t|\n|\tFLTBF\t|\tFL볼조인트\t|\n|\tFRSD\t|\t스프링 변위\t|\n|\tFRSARF\t|\t쇽업소버 로드 힘(Fz)\t|\n|\tFRSBLF\t|\t스탭바 링크 힘(Fa)\t|\n|\tFRFX\t|\tFx\t|\n|\tFRFY\t|\tFy\t|\n|\tFRFZ\t|\tFz\t|\n|\tFRMX\t|\tMx\t|\n|\tFRMY\t|\tMy\t|\n|\tFRMZ\t|\tMz\t|\n|\tFRLCABFX\t|\t로어암 힘(Fx)\t|\n|\tFRLCABFY\t|\t로어암 힘(Fy)\t|\n|\tFRTBF\t|\tFL볼조인트\t|\n|\tRLSD\t|\t스프링 변위\t|\n|\tRLSARF\t|\t쇽업소버 로드 힘(Fz)\t|\n|\tRLSBLF\t|\t스탭바 링크 힘(Fa)\t|\n|\tRLFX\t|\tFx\t|\n|\tRLFY\t|\tFy\t|\n|\tRLFZ\t|\tFz\t|\n|\tRLMX\t|\tMx\t|\n|\tRLMY\t|\tMy\t|\n|\tRLMZ\t|\tMz\t|\n|\tRLUCAF\t|\t어퍼암 힘(Fa)\t|\n|\tRLAAF\t|\t어시스트암 힘(Fa)\t|\n|\tRLTAF\t|\t트레일링암 힘(Fx)\t|\n|\tRRSD\t|\t스프링 변위\t|\n|\tRRSARF\t|\t쇽업소버 로드 힘(Fz)\t|\n|\tRRSBLF\t|\t스탭바 링크 힘(Fa)\t|\n|\tRRFX\t|\tFx\t|\n|\tRRFY\t|\tFy\t|\n|\tRRFZ\t|\tFz\t|\n|\tRRMX\t|\tMx\t|\n|\tRRMY\t|\tMy\t|\n|\tRRMZ\t|\tMz\t|\n|\tRRUCAF\t|\t어퍼암 힘(Fa)\t|\n|\tRRAAF\t|\t어시스트암 힘(Fa)\t|\n|\tRRTAF\t|\t트레일링암 힘(Fx)\t|\n|\tACCX\t|\t가속도 X\t|\n|\tACCY\t|\t가속도 Y\t|\n|\tACCZ\t|\t가속도 Z\t|\n|\tLatitude\t|\t위도\t|\n|\tLongitude\t|\t경도\t|\n|\tSpeedkph\t|\t차속\t|\n|\tPitching\t|\tPitch\t|\n|\tRolling\t|\tRoll\t|\n|\tVS\t|\t차속\t|\n|\tBRAKE_ACT\t|\t브레이크 작동\t|\n|\tSAS_Angle\t|\t스티어링 각도\t|\n|\tSAS_Speed\t|\t스티어링 속도\t|\n|\tPV_AV_CAN\t|\t악셀 궤도\t|\n|\tWHL_SPD_FL\t|\t휠속\t|\n|\tWHL_SPD_FR\t|\t휠속\t|\n|\tWHL_SPD_RL\t|\t휠속\t|\n|\tWHL_SPD_RR\t|\t휠속\t|\n",
"_____no_output_____"
]
],
[
[
"# Sample Data Loading\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\ndf = pd.read_csv('F:/001. AI Data(19)/20빅데이터분석양성과정/분석과제1-하중예측/Out_1.csv')",
"_____no_output_____"
],
[
"plt.figure (figsize=(30,15))",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df[['FLFX','FLSD','VS','SAS_Angle']].plot(figsize=(20,30), subplots=True, label=True)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecf73d4c2b722105e1778e1b51eaef562d97f2a2 | 2,185 | ipynb | Jupyter Notebook | 0.15/_downloads/plot_shift_evoked.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.15/_downloads/plot_shift_evoked.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.15/_downloads/plot_shift_evoked.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 40.462963 | 1,137 | 0.575286 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Shifting time-scale in evoked data\n\n\n\n",
"_____no_output_____"
]
],
[
[
"# Author: Mainak Jas <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\nimport mne\nfrom mne.viz import tight_layout\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nfname = data_path + '/MEG/sample/sample_audvis-ave.fif'\n\n# Reading evoked data\ncondition = 'Left Auditory'\nevoked = mne.read_evokeds(fname, condition=condition, baseline=(None, 0),\n proj=True)\n\nch_names = evoked.info['ch_names']\npicks = mne.pick_channels(ch_names=ch_names, include=[\"MEG 2332\"])\n\n# Create subplots\nf, (ax1, ax2, ax3) = plt.subplots(3)\nevoked.plot(exclude=[], picks=picks, axes=ax1,\n titles=dict(grad='Before time shifting'))\n\n# Apply relative time-shift of 500 ms\nevoked.shift_time(0.5, relative=True)\n\nevoked.plot(exclude=[], picks=picks, axes=ax2,\n titles=dict(grad='Relative shift: 500 ms'))\n\n# Apply absolute time-shift of 500 ms\nevoked.shift_time(0.5, relative=False)\n\nevoked.plot(exclude=[], picks=picks, axes=ax3,\n titles=dict(grad='Absolute shift: 500 ms'))\n\ntight_layout()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf7459fe5795644b2cf363441625da2b101c3a6 | 14,432 | ipynb | Jupyter Notebook | mlxtend/docs/sources/user_guide/preprocessing/standardize.ipynb | WhiteWolf21/fp-growth | 01e1d853b09f244f14e66d7d0c87f139a0f67c81 | [
"MIT"
] | null | null | null | mlxtend/docs/sources/user_guide/preprocessing/standardize.ipynb | WhiteWolf21/fp-growth | 01e1d853b09f244f14e66d7d0c87f139a0f67c81 | [
"MIT"
] | null | null | null | mlxtend/docs/sources/user_guide/preprocessing/standardize.ipynb | WhiteWolf21/fp-growth | 01e1d853b09f244f14e66d7d0c87f139a0f67c81 | [
"MIT"
] | null | null | null | 29.155556 | 460 | 0.492725 | [
[
[
"# Standardize",
"_____no_output_____"
],
[
"A function that performs column-based standardization on a NumPy array.",
"_____no_output_____"
],
[
"> from mlxtend.preprocessing import standardize",
"_____no_output_____"
],
[
"## Overview",
"_____no_output_____"
],
[
"The result of standardization (or Z-score normalization) is that the features will be rescaled so that they'll have the properties of a standard normal distribution with\n\n$\\mu = 0$ and $\\sigma = 1$.\n\nwhere $\\mu$ is the mean (average) and $\\sigma$ is the standard deviation from the mean; standard scores (also called z scores) of the samples are calculated as\n\n$$z=\\frac{x-\\mu}{\\sigma}.$$\n\nStandardizing the features so that they are centered around 0 with a standard deviation of 1 is not only important if we are comparing measurements that have different units, but it is also a general requirement for the optimal performance of many machine learning algorithms. ",
"_____no_output_____"
],
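[
"# Editor's sketch (not from the original guide): the z-score formula above, z = (x - mu) / sigma,\n# computed column-wise with plain NumPy to make the rescaling concrete. The sample values are\n# illustrative assumptions only; mlxtend's standardize (shown later) does the same thing per column.\nimport numpy as np\n\nX = np.array([[1.0, 10.0], [2.0, 9.0], [3.0, 8.0]])\nz = (X - X.mean(axis=0)) / X.std(axis=0)\nprint(z)\nprint(z.mean(axis=0), z.std(axis=0))  # each column now has mean 0 and standard deviation 1",
"_____no_output_____"
],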
[
"One family of algorithms that is scale-invariant encompasses tree-based learning algorithms. Let's take the general CART decision tree algorithm. Without going into much depth regarding information gain and impurity measures, we can think of the decision as \"is feature x_i >= some_val?\" Intuitively, we can see that it really doesn't matter on which scale this feature is (centimeters, Fahrenheit, a standardized scale -- it really doesn't matter).\n\n\nSome examples of algorithms where feature scaling matters are:\n\n\n- k-nearest neighbors with an Euclidean distance measure if want all features to contribute equally\n- k-means (see k-nearest neighbors)\n- logistic regression, SVMs, perceptrons, neural networks etc. if you are using gradient descent/ascent-based optimization, otherwise some weights will update much faster than others\n- linear discriminant analysis, principal component analysis, kernel principal component analysis since you want to find directions of maximizing the variance (under the constraints that those directions/eigenvectors/principal components are orthogonal); you want to have features on the same scale since you'd emphasize variables on \"larger measurement scales\" more.\n\n\nThere are many more cases than I can possibly list here ... I always recommend you to think about the algorithm and what it's doing, and then it typically becomes obvious whether we want to scale your features or not.\n\n\nIn addition, we'd also want to think about whether we want to \"standardize\" or \"normalize\" (here: scaling to [0, 1] range) our data. Some algorithms assume that our data is centered at 0. For example, if we initialize the weights of a small multi-layer perceptron with tanh activation units to 0 or small random values centered around zero, we want to update the model weights \"equally.\"\nAs a rule of thumb I'd say: When in doubt, just standardize the data, it shouldn't hurt. \n\n\n ",
"_____no_output_____"
],
[
"## Example 1 - Standardize a Pandas DataFrame",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ns1 = pd.Series([1, 2, 3, 4, 5, 6], index=(range(6)))\ns2 = pd.Series([10, 9, 8, 7, 6, 5], index=(range(6)))\ndf = pd.DataFrame(s1, columns=['s1'])\ndf['s2'] = s2\ndf",
"_____no_output_____"
],
[
"from mlxtend.preprocessing import standardize\nstandardize(df, columns=['s1', 's2'])",
"_____no_output_____"
]
],
[
[
"## Example 2 - Standardize a NumPy Array",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nX = np.array([[1, 10], [2, 9], [3, 8], [4, 7], [5, 6], [6, 5]])\nX",
"_____no_output_____"
],
[
"from mlxtend.preprocessing import standardize\nstandardize(X, columns=[0, 1])",
"_____no_output_____"
]
],
[
[
"## Example 3 - Re-using parameters",
"_____no_output_____"
],
[
"In machine learning contexts, it is desired to re-use the parameters that have been obtained from a training set to scale new, future data (including the independent test set). By setting `return_params=True`, the `standardize` function returns a second object, a parameter dictionary containing the column means and standard deviations that can be re-used by feeding it to the `params` parameter upon function call.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom mlxtend.preprocessing import standardize\n\nX_train = np.array([[1, 10], [4, 7], [3, 8]])\nX_test = np.array([[1, 2], [3, 4], [5, 6]])\n\nX_train_std, params = standardize(X_train, \n columns=[0, 1], \n return_params=True)\nX_train_std",
"_____no_output_____"
],
[
"params",
"_____no_output_____"
],
[
"X_test_std = standardize(X_test, \n columns=[0, 1], \n params=params)\nX_test_std",
"_____no_output_____"
]
],
[
[
"## API",
"_____no_output_____"
]
],
[
[
"with open('../../api_modules/mlxtend.preprocessing/standardize.md', 'r') as f:\n print(f.read())",
"## standardize\n\n*standardize(array, columns=None, ddof=0, return_params=False, params=None)*\n\nStandardize columns in pandas DataFrames.\n\n**Parameters**\n\n- `array` : pandas DataFrame or NumPy ndarray, shape = [n_rows, n_columns].\n\n\n- `columns` : array-like, shape = [n_columns] (default: None)\n\n Array-like with column names, e.g., ['col1', 'col2', ...]\n or column indices [0, 2, 4, ...]\n If None, standardizes all columns.\n\n- `ddof` : int (default: 0)\n\n Delta Degrees of Freedom. The divisor used in calculations\n is N - ddof, where N represents the number of elements.\n\n- `return_params` : dict (default: False)\n\n If set to True, a dictionary is returned in addition to the\n standardized array. The parameter dictionary contains the\n column means ('avgs') and standard deviations ('stds') of\n the individual columns.\n\n- `params` : dict (default: None)\n\n A dictionary with column means and standard deviations as\n returned by the `standardize` function if `return_params`\n was set to True. If a `params` dictionary is provided, the\n `standardize` function will use these instead of computing\n them from the current array.\n\n**Notes**\n\nIf all values in a given column are the same, these values are all\n set to `0.0`. The standard deviation in the `parameters` dictionary\n is consequently set to `1.0` to avoid dividing by zero.\n\n**Returns**\n\n- `df_new` : pandas DataFrame object.\n\n Copy of the array or DataFrame with standardized columns.\n\n**Examples**\n\nFor usage examples, please see\n [http://rasbt.github.io/mlxtend/user_guide/preprocessing/standardize/](http://rasbt.github.io/mlxtend/user_guide/preprocessing/standardize/)\n\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf754d5dc9246fffffb922120888c1669d74d61 | 24,797 | ipynb | Jupyter Notebook | AlgebraicSolvers.ipynb | dkesseli/numpde | c2f005b81c2f687dfecdd63d37da3591d61c9598 | [
"BSD-2-Clause"
] | null | null | null | AlgebraicSolvers.ipynb | dkesseli/numpde | c2f005b81c2f687dfecdd63d37da3591d61c9598 | [
"BSD-2-Clause"
] | null | null | null | AlgebraicSolvers.ipynb | dkesseli/numpde | c2f005b81c2f687dfecdd63d37da3591d61c9598 | [
"BSD-2-Clause"
] | null | null | null | 48.90927 | 587 | 0.595516 | [
[
[
"#### Jupyter notebooks\n\nThis is a [Jupyter](http://jupyter.org/) notebook using Python. You can install Jupyter locally to edit and interact with this notebook.\n\n# Algebraic solvers\n\nWhe solving elliptic boundary value problems as well as when using implicit methods for transient PDE, we need to solve algebraic systems of equations. We will write such systems as\n$$ F(u) = 0 $$\nwhere $u$ is a vector of state variables and $F(u)$ is a vector of residuals of the same length.\nWe will primarily be interested in defect correction methods of the form\n\\begin{gather} A \\delta u = - F(u) \\\\\nu \\gets u + \\gamma \\delta u\n\\end{gather}\nwhere $A$ is a matrix and $\\gamma$ is a scalar parameter often found using a line search.\n\n* If $A = I$, this is a Richardson iteration, which is related to gradient descent. Such methods are usually quite slow unless $F(u)$ is especially \"nice\".\n* If $A = \\partial F/\\partial u$, this is a Newton method and $\\gamma=1$ can often be used.\n\n## Newton-Raphson methods for systems\n\nThe **Jacobian** of $F$ is\n$$ J(u) = \\frac{\\partial F}{\\partial u}(u) =\n\\begin{bmatrix} \\frac{\\partial F_0}{\\partial u_0} & \\frac{\\partial F_0}{\\partial u_1} & \\dotsb \\\\\n \\frac{\\partial F_1}{\\partial u_0} & \\frac{\\partial F_1}{\\partial u_1} & \\\\\n \\vdots & & \\ddots\n \\end{bmatrix}(u) . $$\nThe method can be derived by taking the Taylor expansion of $F(u)$ at $u$,\n$$ F(u + \\delta u) = F(u) + \\frac{\\partial F}{\\partial u}(u) (\\delta u) + \\frac{\\partial^2 F}{\\partial u^2}(u) (\\delta u \\otimes \\delta u) / 2 + \\dotsb $$\nNote that each higher term is a higher rank tensor, thus computationally unweildy. If we truncate the series with the linear term and set equal to zero, we have a linear equation for $\\delta u$\n$$ \\frac{\\partial F}{\\partial u}(u) \\delta u = - F(u) $$\nwhich will hopefully make $F(u + \\partial u) \\approx 0$. This is Newton's method.\n\n* Each iteration requires evaluating $F(u)$ -- almost any method will have this property.\n* Each iteration requires evaluating the Jacobian matrix $J(u)$ -- this either requires custom code, algorithmic differentiation, or a finite difference approximation (we'll revisit this later).\n* Each iteration requires solving a linear system with the matrix $J(u)$. This may be expensive.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot\npyplot.style.use('ggplot')\n\ndef fsolve_newton(F, J, u0, rtol=1e-10, maxit=50, verbose=False):\n u = u0.copy()\n Fu = F(u)\n norm0 = numpy.linalg.norm(Fu)\n enorm_last = numpy.linalg.norm(u - numpy.array([1,1]))\n for i in range(maxit):\n du = -numpy.linalg.solve(J(u), Fu)\n u += du\n Fu = F(u)\n norm = numpy.linalg.norm(Fu)\n if verbose:\n enorm = numpy.linalg.norm(u - numpy.array([1,1]))\n print('Newton {:d} anorm {:6.2e} rnorm {:6.2e} eratio {:6.2f}'.\n format(i+1, norm, norm/norm0, enorm/enorm_last**2))\n enorm_last = enorm\n if norm < rtol * norm0:\n break\n return u, i\n\ndef rostest(a,b):\n def F(u):\n x = u[0]; y = u[1]\n return numpy.array([-2*(a-x) + 4*b*x**3 - 4*b*x*y,\n 2*b*(y-x**2)])\n def J(u):\n x = u[0]; y = u[1]\n return numpy.array([[2 + 12*b*x**2 - 4*b*y, -4*b*x],\n [-4*b*x, 2*b]])\n return F, J\n\nF, J = rostest(1,3)\nfsolve_newton(F, J, numpy.array([0, 1.]), verbose=True)",
"Newton 1 anorm 2.51e+00 rnorm 3.96e-01 eratio 1.56\nNewton 2 anorm 9.91e+00 rnorm 1.57e+00 eratio 0.56\nNewton 3 anorm 3.83e-01 rnorm 6.05e-02 eratio 0.22\nNewton 4 anorm 5.11e-01 rnorm 8.08e-02 eratio 0.25\nNewton 5 anorm 5.24e-04 rnorm 8.28e-05 eratio 0.36\nNewton 6 anorm 9.76e-07 rnorm 1.54e-07 eratio 0.21\nNewton 7 anorm 3.61e-15 rnorm 5.72e-16 eratio 0.27\n"
]
],
[
[
"* Can the iteration break down? How?\n* How does the method depend on the initial guess?\n* It turns out that Newton's method has _locally quadratic_ convergence to simple roots, $$\\lim_{i \\to \\infty} |e_{i+1}|/|e_i^2| < \\infty .$$\n* \"The number of correct digits doubles each iteration.\"\n* Now that we know how to make a good guess accurate, the effort lies in getting a good guess.\n\n## Matrix-free Jacobian via finite differencing\n\nIt can be error-prone and complicated to implement the Jacobian function `J(u)`. In such cases, we can use the approximation\n\n$$ J(u) v \\approx \\frac{F(u+\\epsilon v) - F(u)}{\\epsilon} $$\n\nwhere $\\epsilon$ is some \"small\" number. Now can't access individual entries of $J$, but we can apply its action to an arbitrary vector $u$.\n\nWe know that this approximation is first order accurate in $\\epsilon$, \n$$ \\left\\lVert J(u) v - \\frac{F(u+\\epsilon v) - F(u)}{\\epsilon} \\right\\rVert \\in O(\\epsilon) . $$\nBut if $\\epsilon$ is too small, we will lose accuracy due to rounding error. If $F$ has been scaled such that its norm is of order 1, then $\\epsilon = \\sqrt{\\epsilon_{\\text{machine}}}$ is a good default choice.",
"_____no_output_____"
]
],
[
[
"import scipy.sparse.linalg as splinalg\n\ndef fsolve_newtonkrylov(F, u0, epsilon=1e-8, rtol=1e-10, maxit=50, verbose=False):\n u = u0.copy()\n Fu = F(u)\n norm0 = numpy.linalg.norm(Fu)\n for i in range(maxit):\n def Ju_fd(v):\n return (F(u + epsilon*v) - Fu) / epsilon\n Ju = splinalg.LinearOperator((len(Fu),len(u)), matvec=Ju_fd)\n du, info = splinalg.gmres(Ju, Fu, atol=1.e-6)\n if info != 0:\n print(numpy.linalg.norm(Ju @ du - Fu), norm)\n raise RuntimeError('GMRES failed to converge: {:d}'.format(info))\n u -= du\n Fu = F(u)\n norm = numpy.linalg.norm(Fu)\n if verbose:\n print('Newton {:d} anorm {:6.2e} rnorm {:6.2e}'\n .format(i, norm, norm/norm0))\n if norm < rtol * norm0:\n break\n return u, i\n\nfsolve_newtonkrylov(F, numpy.array([0.,1]), rtol=1e-6, verbose=True)",
"Newton 0 anorm 2.51e+00 rnorm 3.96e-01\nNewton 1 anorm 9.91e+00 rnorm 1.57e+00\nNewton 2 anorm 3.83e-01 rnorm 6.05e-02\nNewton 3 anorm 5.11e-01 rnorm 8.08e-02\nNewton 4 anorm 5.24e-04 rnorm 8.28e-05\nNewton 5 anorm 9.76e-07 rnorm 1.54e-07\n"
]
],
[
[
"# Sparse and iterative linear algebra\n\nMany matrices in applications, particularly the study of physical systems and graphs/networks, have entries that are mostly equal to zero. We can more efficiently store such systems by storing only the nonzero elements. We will discuss storage and optimized implementations later. Many of the methods for sparse systems apply to solving systems with matrices $A$ that can be applied to a vector ($y \\gets A x$) in significantly less than $O(n^2)$ complexity, or that are \"well-conditioned\" such that an iterative method converges in significantly less than $n$ iterations.\n\n[PETSc](https://mcs.anl.gov/petsc), the Portable Extensible Toolkit for Scientific computing, is an open source software package for the parallel solution of algebraic and differential-algebraic equations. This includes linear algebra, for which PETSc offers a broad variety of implementations. For general information about PETSc, I refer to [this primer](https://jedbrown.org/files/20150924-PETScPrimer.pdf).\n\nNumPy does not provide sparse matrix support so we will use [SciPy](https://scipy.org) in this course.",
"_____no_output_____"
],
[
"## Direct solves\n\nThe complexity of this solve is potentially dominant, so we should understand its cost in terms of the problem size. The standard method for a direct solve is $LU$ (or Cholesky) factorization. Given a $2\\times 2$ block matrix, the algorithm proceeds as\n\\begin{split}\n \\begin{bmatrix} A & B \\\\ C & D \\end{bmatrix} &=\n \\begin{bmatrix} L_A & \\\\ C U_A^{-1} & 1 \\end{bmatrix}\n \\begin{bmatrix} U_A & L_A^{-1} B \\\\ & S \\end{bmatrix}\n\\end{split}\nwhere $L_A U_A = A$ and $S = D - C A^{-1} B$.\n\nFor a sparse operator, the complexity depends on the ordering of degrees of freedom.\n\n* \"natural\" ordering\n* low-bandwidth ordering\n* nested dissection ordering\n\nFor a structured grid, the \"natural\" ordering is the ordering of the original operator.\n\n\n\nA sparse direct solve of this system will result in fill up to the bandwidth.\n\n\n\nThese plots can be produced in PETSc using `-mat_view draw` and `-pc_type lu -pc_factor_mat_ordering_type natural -mat_factor_view draw` (e.g., with `-draw_pause 2` to wait 2 seconds for the viewer).\n\nThe Reverse Cuthill-McKee (`rcm`) ordering applies a breadth-first search to produce a low-bandwidth ordering.\n\n\n\nThe nested dissection (`nd`) ordering recursively bisects the domain.\n\n\n\nThe asymptotic costs are different for these approaches.",
"_____no_output_____"
],
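[
"# Editor's sketch (not part of the original notes): a small numeric check of the Schur-complement\n# identity behind the 2x2 block factorization above, with S = D - C A^{-1} B. The block sizes and\n# matrix entries are arbitrary illustrative assumptions.\nimport numpy as np\n\nA = np.array([[4.0, 1.0], [1.0, 3.0]])\nB = np.array([[1.0], [2.0]])\nC = np.array([[2.0, 0.0]])\nD = np.array([[5.0]])\nM = np.block([[A, B], [C, D]])\n\nS = D - C @ np.linalg.solve(A, B)                     # Schur complement\nL = np.block([[np.eye(2), np.zeros((2, 1))],\n              [C @ np.linalg.inv(A), np.eye(1)]])\nU = np.block([[A, B], [np.zeros((1, 2)), S]])\nprint(np.allclose(L @ U, M))                          # True: M factors through the Schur complement",
"_____no_output_____"
],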
[
"## Convergence of stationary iterative methods\n\n### Richardson iteration\nThe simplest iterative method is [Richardson's method](https://en.wikipedia.org/wiki/Modified_Richardson_iteration), which solves $A x = b$ by the iteration\n$$ x_{k+1} = x_k + \\omega (b - A x_k) $$\nwhere $\\omega > 0$ is a damping parameter and $x_0$ is an initial guess (possibly the zero vector). If $b = A x_*$, this iteration is equivalent to\n$$ x_{k+1} - x_* = (x_k - x_*) - \\omega A (x_k - x_*) = (I - \\omega A) (x_k - x_*) .$$\nIt is convenient for convergence analysis to identify the \"error\" $e_k = x_k - x_*$, in which this becomes\n$$ e_{k+1} = (I - \\omega A) e_k $$\nor\n$$ e_k = (I - \\omega A)^k e_0 $$\nin terms of the initial error. Evidently powers of the *iteration matrix* $I - \\omega A$ tell the whole story.\nSuppose that the eigendecomposition\n$$ X \\Lambda X^{-1} = I - \\omega A $$\nexists. Then\n$$ (I - \\omega A)^k = (X \\Lambda X^{-1})^k = X \\Lambda^k X^{-1} $$\nand the convergence (or divergence) rate depends only on the largest magnitude eigenvalue.\nThis analysis isn't great for two reasons:\n\n1. Not all matrices are diagonalizable.\n2. The matrix $X$ may be very ill-conditioned.\n\nWe can repair these weaknesses by using the [Schur decomposition](https://en.wikipedia.org/wiki/Schur_decomposition)\n$$ Q R Q^h = I - \\omega A $$\nwhere $R$ is right-triangular and $Q$ is unitary (i.e., orthogonal if real-valued; $Q^h$ is the Hermitian conjugate of $Q$).\nThe Schur decomposition always exists and $Q$ has a condition number of 1.\n\n* Where are the eigenvalues in $R$?\n\nEvidently we must find $\\omega$ to minimize the maximum eigenvalue of $I - \\omega A$. We can do this if $A$ is well conditioned, but not in general.\n\n### Preconditioning\n\nPreconditioning is the act of creating an \"affordable\" operation \"$P^{-1}$\" such that $P^{-1} A$ (or $A P^{-1}$) is is well-conditoned or otherwise has a \"nice\" spectrum. We then solve the system\n\n$$ P^{-1} A x = P^{-1} b \\quad \\text{or}\\quad A P^{-1} \\underbrace{(P x)}_y = b $$\n\nin which case the convergence rate depends on the spectrum of the iteration matrix\n$$ I - \\omega P^{-1} A . $$\n\n* The preconditioner must be applied on each iteration.\n* It is *not* merely about finding a good initial guess.\n\nThere are two complementary techniques necessary for efficient iterative methods:\n\n* \"accelerators\" or Krylov methods, which use orthogonality to adaptively converge faster than Richardson\n* preconditioners that improve the spectrum of the preconditioned operator\n\nAlthough there is ongoing research in Krylov methods and they are immensely useful, I would say preconditioning is 90% of the game for practical applications, particularly as a research area.",
"_____no_output_____"
],
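[
"# Editor's sketch (not part of the original notes): preconditioned Richardson iteration\n# x_{k+1} = x_k + omega * P^{-1} (b - A x_k), here with Jacobi (P = diag(A)) as the preconditioner.\n# The matrix, omega, and tolerances are illustrative assumptions only.\nimport numpy as np\n\ndef richardson(A, b, omega=1.0, P_inv=None, maxit=200, tol=1e-8):\n    n = len(b)\n    x = np.zeros(n)\n    if P_inv is None:\n        P_inv = np.eye(n)                  # unpreconditioned Richardson\n    for k in range(maxit):\n        r = b - A @ x                      # residual\n        if np.linalg.norm(r) < tol * np.linalg.norm(b):\n            break\n        x = x + omega * (P_inv @ r)        # defect correction step\n    return x, k\n\nA = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])\nb = np.array([1.0, 2.0, 3.0])\nP_inv = np.diag(1.0 / np.diag(A))          # Jacobi preconditioner\nx, its = richardson(A, b, omega=1.0, P_inv=P_inv)\nprint(its, np.linalg.norm(A @ x - b))",
"_____no_output_____"
],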
[
"# Krylov subspaces\n\nAll matrix iterations work with approximations in a *Krylov subspace*, which has the form\n\n$$ K_n = \\big[ b \\big| Ab \\big| A^2 b \\big| \\dotsm \\big| A^{n-1} b \\big] . $$\n\nThis matrix is horribly ill-conditioned and cannot stably be computed as written. Instead, we seek an orthogonal basis $Q_n$ that spans the same space as $K_n$. We could write this as a factorization\n\n$$ K_n = Q_n R_n $$\n\nwhere the first column $q_0 = b / \\lVert b \\rVert$. The $R_n$ is unnecessary and hopelessly ill-conditioned, so a slightly different procedure is used.\n",
"_____no_output_____"
],
[
"### Arnoldi iteration\n\nThe Arnoldi iteration applies orthogonal similarity transformations to reduce $A$ to [Hessenberg form](https://en.wikipedia.org/wiki/Hessenberg_matrix), starting from a vector $q_0 = b$,\n\n$$ A = Q H Q^h . $$\n\nLet's multiply on the right by $Q$ and examine the first $n$ columns,\n\n$$ A Q_n = Q_{n+1} H_n $$\nwhere $H_n$ is an $(n+1) \\times n$ Hessenberg matrix.\n\n#### Conditioning\nThis representation is well-conditioned because $Q$ is orthogonal and\n\n$$ \\lVert H_n \\rVert \\le \\lVert Q_{n+1}^h \\rVert \\lVert A \\rVert \\lVert Q_n \\rVert \\le \\lVert A \\rVert $$.\n\nFor a lower bound, we have\n\n$$ \\sigma_{\\min}(A)^2 \\le x^h A^h A x $$\n\nfor all $x$ of norm 1. It must also be true for any $x = Q_n y$ where $\\lVert y\\rVert = 1$, thus\n\n$$ \\sigma_{\\min}(A)^2 \\le y^h Q_n^h A^h A Q_n y = y^h H_n^h Q_{n+1}^h Q_{n+1} H_n y = y^h H_n^h H_n y . $$\n\n#### GMRES\n\nGMRES (Generalized Minimum Residual) minimizes\n$$ \\lVert A x - b \\rVert $$\nover the subspace $Q_n$. I.e., $x = Q_n y$ for some $y$. By the recurrence above, this is equivalent to\n$$ \\lVert Q_{n+1} H_n y - b \\lVert $$\nwhich can be solved by minimizing\n$$ \\lVert H_n y - Q_{n+1}^h b \\rVert . $$\nSince $q_0 = b/\\lVert b \\lVert$, the least squares problem is to minimize\n$$ \\Big\\lVert H_n y - \\lVert b \\rVert e_0 \\Big\\rVert . $$\nThe solution of this least squares problem is achieved by incrementally updating a $QR$ factorization of $H_n$.\n\n**Notes**\n\n* GMRES minimizes the 2-norm of the residual $\\lVert r_n \\rVert$ which is equivalent to the $A^T A$ norm of the error $\\lVert x_n - x_* \\rVert_{A^T A}$.\n* The solution $x_n$ constructed by GMRES at iteration $n$ is not explicitly available. If a solution is needed, it must be constructed by solving the $(n+1)\\times n$ least squares problem and forming the solution as a linear combination of the $n$ vectors $Q_n$. The leading cost is $2mn$ where $m \\gg n$.\n* The residual vector $r_n = A x_n - b$ is not explicitly available in GMRES. To compute it, first build the solution $x_n$ as above.\n* GMRES needs to store the full $Q_n$, which is unaffordable for large $n$ (many iterations). The standard solution is to choose a \"restart\" $k$ and to discard $Q_n$ and start over with an initial guess $x_k$ after each $k$ iterations. This algorithm is called GMRES(k). PETSc's default solver is GMRES(30) and the restart can be controlled using the run-time option `-ksp_gmres_restart`.\n* Most implementations of GMRES use classical Gram-Schmidt because it is much faster in parallel (one reduction per iteration instead of $n$ reductions per iteration). The PETSc option `-ksp_gmres_modifiedgramschmidt` can be used when you suspect that classical Gram-Schmidt may be causing instability.\n* There is a very similar (and older) algorithm called GCR that maintains $x_n$ and $r_n$. This is useful, for example, if a convergence tolerance needs to inspect individual entries. GCR requires $2n$ vectors instead of $n$ vectors, and can tolerate a nonlinear preconditioner. FGMRES is a newer algorithm with similar properties to GCR.\n\n\n### Lanczos iteration: the symmetric case\n\nIf $A$ is symmetric, then $H = Q^T A Q$ is also symmetric. Since $H$ is Hessenberg, this means $H$ is tridiagonal. Instead of storing $Q_n$, it is sufficient to store only the last two columns since the iteration satisfies a 3-term recurrence. 
The analog of GMRES for the symmetric case is called MINRES and is also useful for solving symmetric indefinite problems.\n\n#### Conjugate Gradients\n\nInstead of minimizing the $A^T A$ norm of the error, the Conjugate Gradient method minimizes the $A$ norm of the error. For $A$ to induce a norm, it must be symmetric positive definite. [Jeremy Shewchuck's guide to CG](http://www.cs.cmu.edu/%7Equake-papers/painless-conjugate-gradient.pdf) is an excellent resource.",
"_____no_output_____"
],
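[
"# Editor's sketch (not part of the original notes): restarted GMRES from SciPy on a small sparse\n# system, with a Jacobi preconditioner supplied as a LinearOperator (the M argument). The 1-D\n# Laplacian matrix, restart length, and tolerance are illustrative assumptions only.\nimport numpy as np\nimport scipy.sparse as sp\nimport scipy.sparse.linalg as splinalg\n\nn = 100\nA = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')  # 1-D Laplacian\nb = np.ones(n)\n\nd_inv = 1.0 / A.diagonal()\nM = splinalg.LinearOperator((n, n), matvec=lambda x: d_inv * np.ravel(x))  # apply P^{-1} = D^{-1}\n\nx, info = splinalg.gmres(A, b, M=M, restart=30, atol=1e-10)\nprint(info, np.linalg.norm(A @ x - b))",
"_____no_output_____"
],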
[
"# Preconditioning\n\n## Classical methods\n\nWe have discussed the Jacobi preconditioner\n$$ P_{\\text{Jacobi}}^{-1} = D^{-1} $$\nwhere $D$ is the diagonal of $A$.\nGauss-Seidel is\n$$ P_{GS}^{-1} = (L+D)^{-1} $$\nwhere $L$ is the (strictly) lower triangular part of $A$. The upper triangular part may be used instead, or a symmetric form\n$$ P_{SGS}^{-1} = (L+U)^{-1} A \\Big( I - (L+D)^{-1} \\Big) . $$\n\n## Domain decomposition\n\nGiven a linear operator $A : V \\to V$, suppose we have a collection of prolongation operators $P_i : V_i \\to V$. The columns of $P_i$ are \"basis functions\" for the subspace $V_i$. The Galerkin operator $A_i = P_i^T A P_i$ is the action of the original operator $A$ in the subspace.\n\nDefine the subspace projection\n\n$$ S_i = P_i A_i^{-1} P_i^T A . $$\n\n* $S_i$ is a projection: $S_i^2 = S_i$\n* If $A$ is SPD, $S_i$ is SPD with respect to the $A$ inner product $x^T A y$\n* $I - S_i$ is $A$-orthogonal to the range of $P_i$\n\nThese projections may be applied additively\n\n$$ I - \\sum_{i=0}^n S_i, $$\n\nmultiplicatively\n\n$$ \\prod_{i=0}^n (I - S_i), $$\n\nor in some hybrid manner, such as\n\n$$ (I - S_0) (I - \\sum_{i=1}^n S_i) . $$\nIn each case above, the action is expressed in terms of the error iteration operator.\n\n### Examples\n\n* Jacobi corresponds to the additive preconditioner with $P_i$ as the $i$th column of the identity\n* Gauss-Seidel is the multiplicate preconditioner with $P_i$ as the $i$th column of the identity\n* Block Jacobi corresponds to labeling \"subdomains\" and $P_i$ as the columns of the identity corresponding to non-overlapping subdomains\n* Overlapping Schwarz corresponds to overlapping subdomains\n* $P_i$ are eigenvectors of $A$\n* A domain is partitioned into interior $V_I$ and interface $V_\\Gamma$ degrees of freedom. $P_I$ is embedding of the interior degrees of freedom while $P_\\Gamma$ is \"harmonic extension\" of the interface degrees of freedom. Consider the multiplicative combination $(I - S_\\Gamma)(I - S_I)$.\n\n### Convergence theory\n\nThe formal convergence is beyond the scope of this course, but the following estimates are useful. We let $h$ be the element diameter, $H$ be the subdomain diameter, and $\\delta$ be the overlap, each normalized such that the global domain diameter is 1. We express the convergence in terms of the condition number $\\kappa$ for the preconditioned operator.\n\n* (Block) Jacobi: $\\delta=0$, $\\kappa \\sim H^{-2} H/h = (Hh)^{-1}$\n* Overlapping Schwarz: $\\kappa \\sim H^{-2} H/\\delta = (H \\delta)^{-1}$\n* 2-level overlapping Schwarz: $\\kappa \\sim H/\\delta$\n\n### Hands-on with PETSc: demonstrate these estimates\n\n* Symmetric example: `src/snes/examples/tutorials/ex5.c`\n* Nonsymmetric example: `src/snes/examples/tutorials/ex19.c`\n* Compare preconditioned versus unpreconditioned norms.\n* Compare BiCG versus GMRES\n* Compare domain decomposition and multigrid preconditioning\n * `-pc_type asm` (Additive Schwarz)\n * `-pc_asm_type basic` (symmetric, versus `restrict`)\n * `-pc_asm_overlap 2` (increase overlap)\n * Effect of direct subdomain solver: `-sub_pc_type lu`\n * `-pc_type mg` (Geometric Multigrid)\n* Use monitors:\n * `-ksp_monitor_true_residual`\n * `-ksp_monitor_singular_value`\n * `-ksp_converged_reason`\n* Explain methods: `-snes_view`\n* Performance info: `-log_view`\n\n#### Examples\n```\nmpiexec -n 4 ./ex19 -lidvelocity 2 -snes_monitor -da_refine 5 -ksp_monitor -pc_type asm -sub_pc_type lu\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecf76fe49627e49ea2114a10a131539418e15583 | 576,159 | ipynb | Jupyter Notebook | temp_UNSUPERVISED.ipynb | anomalydetect/anomalydetect | 32c9044c4dd18ae44e6c4263eeef485670f74846 | [
"Unlicense"
] | 2 | 2018-06-26T01:15:25.000Z | 2018-07-10T02:01:57.000Z | temp_UNSUPERVISED.ipynb | anomalydetect/anomalydetect | 32c9044c4dd18ae44e6c4263eeef485670f74846 | [
"Unlicense"
] | null | null | null | temp_UNSUPERVISED.ipynb | anomalydetect/anomalydetect | 32c9044c4dd18ae44e6c4263eeef485670f74846 | [
"Unlicense"
] | 2 | 2018-07-10T14:14:23.000Z | 2019-05-14T06:26:00.000Z | 43.631882 | 17,052 | 0.448545 | [
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n#df = pd.read_csv('creditcard.csv')\n#df.head()\n#X = df.drop(\"Class\", axis=1)\n#y = df[\"Class\"]\n#print(X.shape, y.shape)",
"_____no_output_____"
],
[
"d1 = {\"a\": 1, \"b\": 23, \"c\": 3, \"d\": 40}\nd2 = {\"a\": 1, \"b\": 23, \"c\": 33, \"d\": 42}\nd3 = {\"a\": 3, \"b\": 20, \"c\": 30, \"d\": 4}\n\ndf = pd.DataFrame([d1,d2,d3])\n\n\ndf.to_csv(\"data/123456/nish.csv\", encoding=\"utf-8\", index=False, header=True)",
"_____no_output_____"
],
[
"import json\n\nv_uk_id = \"123456\"\nmy_path = \"data/\"+v_uk_id+\"/\"\nv_input_json_file = my_path+\"details.json\"\nv_output_result_file = my_path+\"result.csv\"\nv_output_json_file = my_path+\"result_desc.json\"\n\n\nwith open(v_input_json_file, encoding=\"utf-8\") as data_file:\n input_json = json.load(data_file)\n \n \ninput_json\n#input_json['filename']",
"_____no_output_____"
],
[
"def fx_analysis(v_uk_id):\n\n\n my_path = \"data/\"+v_uk_id+\"/\"\n v_input_json_file = my_path+\"details.json\"\n v_output_result_csv = my_path+\"result.csv\"\n v_output_json_file = my_path+\"result_desc.json\"\n\n\n with open(v_input_json_file, encoding=\"utf-8\") as data_file:\n input_json = json.load(data_file)\n\n v_input_csv = my_path + input_json['filename']\n \n v_learning_type = input_json['learning_type']\n \n \n ##################### Temp code\n df = pd.read_csv(v_input_csv)\n df_result=df.head(50)\n df_result.to_csv(v_output_result_csv, encoding=\"utf-8\", index=False, header=True)\n \n v_output_json_contents = {\n \"image_title1\": \"nish1\" ,\n \"image_title2\": \"jfbgcjshhgsj\" ,\n \"image_title3\": \"jfbgcjshhgsj\" ,\n \"image_title4\": \"jfbgcjshhgsj\" ,\n \"image_name1\": \"image1.png\" ,\n \"image_name2\": \"image2.png\" ,\n \"image_name3\": \"image3.png\" ,\n \"image_name4\": \"image4.png\" ,\n \"image_desc1\": \"jfbgcjshhgsj\" ,\n \"image_desc2\": \"jfbgcjshhgsj\" ,\n \"image_desc3\": \"jfbgcjshhgsj\" ,\n \"image_desc4\": \"jfbgcjshhgsj\" ,\n \"model\": [ {\"model_desc\" : \"ssss\" ,\"model_file\" : \"ssss\" } , {\"model_desc\" : \"ssss\" ,\"model_file\" : \"ssss\" } ] \n }\n with open(v_output_json_file, 'w') as outfile:\n json.dump(v_output_json_contents, outfile)\n \n\n return 'Success'",
"_____no_output_____"
],
[
"fx_analysis('123456')",
"_____no_output_____"
],
[
"from pathlib import Path\ndef fx_result(v_uk_id):\n my_path = \"data/\"+v_uk_id+\"/\"\n v_input_json_file = my_path+\"details.json\"\n v_output_result_csv = my_path+\"result.csv\"\n v_output_json_file = my_path+\"result_desc.json\"\n my_file = Path(v_output_json_file)\n if my_file.is_file():\n return 'COMPLETED'\n else:\n return 'PROCESSING'\n",
"_____no_output_____"
],
[
"fx_result('123456')",
"_____no_output_____"
],
[
"v_json = {\n'image_title1': 'jfbgcjshhgsj' ,\n'image_title2': 'jfbgcjshhgsj' ,\n'image_title3': 'jfbgcjshhgsj' ,\n'image_title4': 'jfbgcjshhgsj' ,\n'image_name1': 'image1.png' ,\n'image_name2': 'image2.png' ,\n'image_name3': 'image3.png' ,\n'image_name4': 'image4.png' ,\n'image_desc1': 'jfbgcjshhgsj' ,\n'image_desc2': 'jfbgcjshhgsj' ,\n'image_desc3': 'jfbgcjshhgsj' ,\n'image_desc4': 'jfbgcjshhgsj' ,\n'model': [ {'model_desc' : 'ssss' ,'model_file' : 'ssss' } , {'model_desc' : 'ssss' ,'model_file' : 'ssss' } ] \n}\n\n\n\n\nv_json\n\nd1 = {\"a\": 1, \"b\": 23, \"c\": 3, \"d\": 40}\nd2 = {\"a\": 1, \"b\": 23, \"c\": 33, \"d\": 42}\nd3 = {\"a\": 3, \"b\": 20, \"c\": 30, \"d\": 4}\n\ndf = pd.DataFrame([d1,d2,d3])\n\n\nt1.to_csv(\"Output/books_clean.csv\", encoding=\"utf-8\", index=False, header=True)\n\n\npd.read_json('purchase_data.json')\n",
"_____no_output_____"
],
[
"df.to_csv(\"custom_anomaly_data.csv\", index=False, header=True)",
"_____no_output_____"
],
[
"from itertools import chain\nimport itertools\ndef fx_generate_data( v_detail= False):\n '''\n Usage\n #fact2 \tmyid \tfact1 \tseries1 \tdim1_v1 \tdim1_v2 \tdim2_v1 \tdim2_v2 \tdim2_v3 \tlabel\n #fact2 \tfact2_label \tmyid \tfact1 \tfact1_label \tseries1 \tseries1_label \tdim1_v1 \tdim1_v2 \tdim1_label \tdim2_v1 \tdim2_v2 \tdim2_v3 \tdim2_label \tlabel\n #df = fx_generate_data(v_detail = False)\n #df.head(5)\n #plt.scatter(df['dim1_v1'].values,df['dim1_v2'].values)\n #plt.scatter(df['dim2_v1'].values,df['dim2_v3'].values)\n '''\n # Three columns########################################################################################\n\n X = 0.7 * np.random.randn(10000, 3)\n # Generate some abnormal novel observations\n X_outliers = np.random.uniform(low=-5, high=5, size=(20, 3))\n ###############\n X_outliers_list = X_outliers.tolist()\n X_outliers_list = [[x + ['outlier_3col']][0] for x in X_outliers_list]\n x1 = X + 2\n x1_list = x1.tolist()\n x1_list = [[x + ['normal']][0] for x in x1_list]\n\n x2 = X - 2\n x2_list = x2.tolist()\n x2_list = [[x + ['normal']][0] for x in x2_list]\n\n data_3_col_list = X_outliers_list + x1_list + x2_list\n df_3d_outlier = pd.DataFrame(data_3_col_list)\n # Two columns########################################################################################\n\n X = 0.3 * np.random.randn(10000, 2)\n # Generate some abnormal novel observations\n X_outliers = np.random.uniform(low=-4, high=4, size=(20, 2))\n ###############\n X_outliers_list = X_outliers.tolist()\n X_outliers_list = [[x + ['outlier_2col']][0] for x in X_outliers_list]\n x1 = X + 2\n x1_list = x1.tolist()\n x1_list = [[x + ['normal']][0] for x in x1_list]\n\n x2 = X - 2\n x2_list = x2.tolist()\n x2_list = [[x + ['normal']][0] for x in x2_list]\n\n data_2_col_list = x2_list + x1_list + X_outliers_list\n df_2d_outlier = pd.DataFrame(data_2_col_list)\n # sklearn Two columns########################################################################################\n# we can not label outlier list..so ignoring it\n# from sklearn.datasets.samples_generator import make_blobs\n# data_2a_col, _ = make_blobs(n_samples=20020, centers=4,n_features =2,\n# cluster_std = 1, random_state = 1)\n # timeseries ########################################################################################\n n = 1000\n a1000 = np.arange(1, 1 * n)\n b1000 = np.arange(1 * n, 2 * n)\n c1000 = np.arange(2 * n, 3 * n)\n outliers1 = np.arange(2 * n, 2 * n + 1) #duplicate outlier\n outliers2 = np.arange(1 * n, 1 * n + 1) #duplicate outlier\n outliersx = np.arange(-10000, -10000 + 1) # extreme value outlier\n outliersy = np.arange(100000000000, 100000000000 + 1) # extreme value outlier\n \n \n d3080 = np.arange(6 * n, 20 * n + 1)\n\n data_time_col1 = np.r_[a1000, b1000, b1000, c1000, c1000, c1000]\n data_time_col2 = np.r_[\n outliersx, outliers2, outliersy, outliers2, outliers2, outliers2, outliers2, outliers2, outliers2, outliers2,\n outliers1, outliers1, outliers1, outliers1, outliers1, outliers1, outliers1, outliers1, outliers1, outliers1\n ]\n data_time_col3 = np.r_[d3080]\n\n ###############\n X_outliers_list = data_time_col2.tolist()\n X_outliers_list = [ [x] + ['outlier_timecol'] for x in X_outliers_list]\n x1 = data_time_col1\n x1_list = x1.tolist()\n x1_list = [ [x] + ['normal'] for x in x1_list]\n\n x2 = data_time_col3\n x2_list = x2.tolist()\n x2_list = [ [x] + ['normal'] for x in x2_list]\n\n data_time_col_list = x2_list + X_outliers_list + x1_list\n df_series1_outlier = pd.DataFrame(data_time_col_list)\n # fact 
columns########################################################################################\n\n X = 1000 * np.random.randn(10000, 1)\n # Generate some abnormal novel observations\n X_outliers = np.random.uniform(low=-4, high=4, size=(20, 1)) # (1,2) are outliers between the range (...-1000, -2000,1,2,1111,6000,5000..)\n ###############\n X_outliers_list = X_outliers.tolist()\n X_outliers_list = [[x + ['outlier_fact1']][0] for x in X_outliers_list]\n x1 = X + 2\n x1_list = x1.tolist()\n x1_list = [[x + ['normal']][0] for x in x1_list]\n\n x2 = X - 2\n x2_list = x2.tolist()\n x2_list = [[x + ['normal']][0] for x in x2_list]\n\n data_fact1_col_list = x2_list + X_outliers_list + x1_list\n df_fact1_outlier = pd.DataFrame(data_fact1_col_list)\n # fact2 columns########################################################################################\n\n X = 110 * np.random.randn(10000, 1)\n # Generate some abnormal novel observations\n X_outliers = 10000 * np.random.uniform(low=-4, high=4, size=(20, 1)) # extreme value outliers\n ###############\n X_outliers_list = X_outliers.tolist()\n X_outliers_list = [[x + ['outlier_fact2']][0] for x in X_outliers_list]\n x1 = X + 2\n x1_list = x1.tolist()\n x1_list = [[x + ['normal']][0] for x in x1_list]\n\n x2 = X - 2\n x2_list = x2.tolist()\n x2_list = [ [x + ['normal']][0] for x in x2_list]\n\n data_fact2_col_list = X_outliers_list + x2_list + x1_list\n df_fact2_outlier = pd.DataFrame(data_fact2_col_list) \n########################################################################################\n # np.dstack((data_2_col,data_3_col))\n # np.array(list(zip(data_2_col,data_3_col)))\n\n #mydata = list(zip(data_time_col_list,\n# data_2_col_list,\n# data_3_col_list,\n# data_fact1_col_list,\n# data_fact2_col_list))\n\n #working mydata = list(zip(data_fact1_col_list,data_fact2_col_list))\n #mydata = list(zip(data_fact2_col_list))\n # mydata = np.column_stack((data_timeseries_col,data_2_col,data_2a_col,data_3_col,data_fact1_col,data_fact2_col))\n #mydata = list(chain(*mydata))\n #mydata = list(chain.from_iterable(mydata))\n #mydata = sum(mydata, [])\n #mydata = [ list(itertools.chain.from_iterable(list(zip(x[0],x[1] , x[2])))) for x in mydata]\n #df = pd.DataFrame(np.array(mydata))\n #df = pd.DataFrame(data_fact1_col_list)\n df = df_fact2_outlier.copy()\n df['myid'] = df.index\n df.rename(columns={0 : 'fact2' ,1 : 'fact2_label' },inplace=True)\n df['fact1'] = df_fact1_outlier[0].values\n df['fact1_label'] = df_fact1_outlier[1].values\n df['series1'] = df_series1_outlier[0].values\n df['series1_label'] = df_series1_outlier[1].values\n df['dim1_v1'] = df_2d_outlier[0].values\n df['dim1_v2'] = df_2d_outlier[1].values\n df['dim1_label'] = df_2d_outlier[2].values\n df['dim2_v1'] = df_3d_outlier[0].values\n df['dim2_v2'] = df_3d_outlier[1].values\n df['dim2_v3'] = df_3d_outlier[2].values\n df['dim2_label'] = df_3d_outlier[3].values\n \n label_list = ['outlier' if r['fact2_label'] != 'normal' else \n 'outlier' if r['fact1_label'] != 'normal' else \n 'outlier' if r['series1_label'] != 'normal' else \n 'outlier' if r['dim1_label'] != 'normal' else \n 'outlier' if r['dim2_label'] !='normal' else 'normal' for i,r in df.iterrows() ]\n df_label = pd.DataFrame(label_list)\n df['label'] = df_label[0].values\n \n if v_detail == True :\n return df\n else :\n select_list = ['fact2','myid','fact1','series1','dim1_v1','dim1_v2','dim2_v1','dim2_v2','dim2_v3','label']\n df1 = df[select_list]\n return df1\n\n",
"_____no_output_____"
],
[
"df = fx_generate_data(v_detail = False)\ndf.head(5)",
"_____no_output_____"
],
[
"from sklearn import tree\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import load_iris\n\n# Create a random forest classifier\nrf_clf = RandomForestClassifier(n_estimators=200)\nrf = rf_clf.fit(df.data, df.target)\nrf.score(iris.data, iris.target)\n# Random Forests in sklearn will automatically calculate feature importance\nimportances = rf.feature_importances_\nimportances",
"_____no_output_____"
],
[
"df = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\ndf.to_csv(\"custom_anomaly_data.csv\", index=False, header=True)",
"_____no_output_____"
],
[
"\nplt.scatter(df['dim1_v1'].values,df['dim1_v2'].values)\nplt.scatter(df['dim2_v1'].values,df['dim2_v2'].values)",
"_____no_output_____"
],
[
"# Three Sigma Rule\nimport numpy as np\n\ndef fx_ThreeSigmaRule(series_id, series_data, v_number_of_std , v_masking_Iteration):\n '''\n value should be in +- range of -->mean + n * std\n Probability\n n =1 --> 68\n n =2 --> 95\n n= 3 --> 99.7\n Usage : \n df_outliers= fx_ThreeSigmaRule(df['myid'],df['series1'], 2,2)\n df_outliers\n Good for normal distributed data ---series or sequences\n Drawback :\n Not good when data is not normal distribution \n Sensitive to extreme points ---Masking effect. If extreme value is too large then it overshadow other values\n -->iteration solves this issue\n '''\n \n \n v_df = pd.DataFrame({})\n v_df_outliers_final = pd.DataFrame({})\n v_df['anomalydetectid'] = series_id\n v_df['data'] = series_data\n \n v_Iteration = 0\n\n while (v_Iteration < v_masking_Iteration):\n #############################\n print (str(v_masking_Iteration))\n v_masking_Iteration = v_masking_Iteration -1\n v_threshold = np.std(v_df['data']) * v_number_of_std\n v_mean = np.mean(v_df['data'])\n print (str(v_mean -v_threshold ))\n print (str(v_threshold + v_mean) )\n where_tuple = (np.abs(v_df['data'] -v_mean)> v_threshold )\n v_df_outliers = v_df[where_tuple]\n\n\n\n #v_outliersList = [ [r[0] , r[1]] for i,r in v_df.iterrows() if np.abs(r[1]) > v_threshold + v_mean]\n\n if (len(v_df_outliers) > 0):\n\n v_df_outliers_final = pd.concat([v_df_outliers_final, v_df_outliers]) \n\n # Update data - remove otliers from the list\n #list1 = [x for x in list1 if x not in v_outliersList[1]]\n where_tuple = (np.abs(v_df['data'] -v_mean) <= v_threshold )\n v_df = v_df[where_tuple]\n\n else :\n\n break\n \n \n\n ############################\n\n if len(v_df_outliers_final) > 0:\n\n return (v_df_outliers_final)\n\n else :\n return (pd.DataFrame({}))\n print(\"Three are No Outliers\")\n\n",
"_____no_output_____"
],
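The cell above describes the three-sigma rule and its masking problem. Below is a minimal, self-contained sketch of the same idea on synthetic data — the series, the n = 3 threshold and the two-pass loop are illustrative assumptions, not taken from the notebook's dataset.

```python
import numpy as np

def three_sigma_outliers(values, n_std=3, iterations=2):
    """Indices of points outside mean +/- n_std * std, re-estimated after each pass."""
    values = np.asarray(values, dtype=float)
    active = np.arange(len(values))            # indices still considered "normal"
    flagged = []
    for _ in range(iterations):
        mean = values[active].mean()
        std = values[active].std()
        mask = np.abs(values[active] - mean) > n_std * std
        if not mask.any():
            break
        flagged.extend(active[mask].tolist())  # record outliers found this pass
        active = active[~mask]                 # drop them so they stop inflating the std
    return sorted(flagged)

data = np.concatenate([np.random.normal(0, 1, 1000), [6.0, -120.0]])
print(three_sigma_outliers(data, n_std=3, iterations=2))
```

The extreme value (-120) inflates the standard deviation so only it is caught on the first pass; the second pass, run without it, can also flag the milder point near 6 — which is why the notebook's function takes a masking-iteration argument.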
[
"df = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\n#a1 = df['series1'].values\n#list1 = a1.tolist()\n#outliers_id_list = [x[0] for x in outliersList]\n# for series ThreeSigmaRule is good\n#df_outliers= fx_ThreeSigmaRule(df['myid'],df['series1'], 2,2)\n# not good for below\n#df_outliers= fx_ThreeSigmaRule(df['myid'],df['fact1'], 1,3)\n\n\n# not good for below\ndf_outliers= fx_ThreeSigmaRule(df['myid'],df['fact2'], 2 , 1)\ndf_outliers\n\nwhere_tuple = (df_detail['fact2_label']== 'outlier_fact2' )\ndf_detail_filtered = df_detail[where_tuple]\ndf_detail_filtered.merge(df_outliers.rename(columns={'id':'myid'}),how='outer')",
"1\n-1357.156695992674\n1341.1268209072482\n"
],
[
"df = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\ndf_outliers= fx_ThreeSigmaRule(df['myid'],df['fact1'], 2 , 1)\ndf_outliers\n\nwhere_tuple = (df_detail['fact1_label']== 'outlier_fact1' )\ndf_detail_filtered = df_detail[where_tuple]\ndf_detail_filtered.merge(df_outliers.rename(columns={'id':'myid'}),how='outer')\n\n# unable to detect inner range outliers ",
"1\n-2027.9371644895723\n2029.33493833573\n"
],
[
"# MAD test Rule\nfrom statsmodels import robust\nimport numpy as np\n\ndef fx_mad_Rule(series_id, series_data, v_number_of_std ):\n '''\n value should be in +- range of -->median + n * MAD/.6745\n Usage : \n df_outliers= fx_mad_Rule(df['myid'],df['series1'], 2)\n df_outliers\n Good for not normal distributed data\n No issue with extreme points\n Drawback :\n Not good when data is normal distribution \n Too agressive\n \n '''\n # warning ignore for verylarge values \n #np.seterr(invalid='ignore')\n #np.errstate(invalid='ignore')\n #np.warnings.filterwarnings('ignore')\n v_df = pd.DataFrame({})\n v_df_outliers_final = pd.DataFrame({})\n v_df['id'] = series_id\n v_df['data'] = series_data\n \n\n #############################\n v_threshold = robust.mad(v_df['data'], c=1) * v_number_of_std / 0.6745\n v_median = np.median(v_df['data'])\n\n print (str(v_median -v_threshold ))\n print (str(v_threshold + v_median) )\n where_tuple = (np.abs(v_df['data'] -v_median)> v_threshold )\n v_df_outliers_final = v_df[where_tuple]\n\n\n ############################\n\n if len(v_df_outliers_final) > 0:\n\n return (v_df_outliers_final)\n\n else :\n print(\"Three are No Outliers\")\n",
"_____no_output_____"
],
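As a quick numeric check of the median ± n · MAD / 0.6745 threshold described above — the values and the choice of n = 3 are made up for illustration; the 0.6745 factor is what makes the MAD comparable to a standard deviation under normality.

```python
import numpy as np

values = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 55.0])   # one obvious outlier
median = np.median(values)
mad = np.median(np.abs(values - median))        # raw median absolute deviation
threshold = 3 * mad / 0.6745                    # n = 3, normal-consistency factor 0.6745
outliers = values[np.abs(values - median) > threshold]
print(median, mad, threshold, outliers)         # only 55.0 is flagged; a mean/std rule would be dragged towards it
```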
[
"plt.scatter(df['myid'].values,df['series1'].values)\n\n\ndf = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\ndf_outliers= fx_mad_Rule(df['myid'],df['series1'], 1)\ndf_outliers\n\n# where_tuple = (df_detail['series1_label']== 'outlier_series1' )\n# df_detail_filtered = df_detail[where_tuple]\n# df_detail_filtered.merge(df_outliers.rename(columns={'id':'myid'}),how='outer')\n\n# detect outliers --but it is too aggresive",
"1069.33543365\n18913.6645663\n"
],
[
"plt.scatter(df['myid'].values,df['fact2'].values)\n\n\ndf = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\ndf_outliers= fx_mad_Rule(df['myid'],df['fact2'], 1)\ndf_outliers\n\nwhere_tuple = (df_detail['fact2_label']== 'outlier_fact2' )\ndf_detail_filtered = df_detail[where_tuple]\ndf_detail_filtered.merge(df_outliers.rename(columns={'id':'myid'}),how='outer')\n\n# unable to detect inner range outliers \n\n",
"-107.759687127\n111.94985619\n"
],
[
"\n\n\ndf = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\ndf_outliers= fx_mad_Rule(df['myid'],df['fact1'], 1)\ndf_outliers\n\nwhere_tuple = (df_detail['fact1_label']== 'outlier_fact1' )\ndf_detail_filtered = df_detail[where_tuple]\ndf_detail_filtered.merge(df_outliers.rename(columns={'id':'myid'}),how='outer')\n\n# detect outliers --but it is too aggresive\n\n",
"-987.050197025\n1000.42916581\n"
],
[
"# Boxplot Rule\n\nimport numpy as np\n\ndef fx_boxplot_Rule(series_id, series_data ):\n '''\n value should be in +- range of -->(y > q3 + 1.5 * iqr) or (y < q1 - 1.5 * iqr\n Usage : \n df_outliers= fx_boxplot_Rule(df['myid'],df['series1'])\n df_outliers\n For presense of outliers --less sensitive than 3 sigma but more sensitive to MAD test \n No depenedence of median and mean\n better for moderately asymmetric distribution\n Drawback :\n Too agressive\n \n '''\n # warning ignore for verylarge values \n #np.seterr(invalid='ignore')\n #np.errstate(invalid='ignore')\n #np.warnings.filterwarnings('ignore')\n v_df = pd.DataFrame({})\n v_df_outliers_final = pd.DataFrame({})\n v_df['id'] = series_id\n v_df['data'] = series_data\n \n\n #############################\n q1 = np.percentile(v_df['data'], 25)\n\n q3 = np.percentile(v_df['data'], 75)\n\n iqr = q3 - q1\n\n print (str(q1 - 1.5 * iqr))\n print (str(q3 + 1.5 * iqr) )\n where_tuple1 = (v_df['data'] > q3 + 1.5 * iqr )\n where_tuple2 = (v_df['data'] < q1 - 1.5 * iqr )\n v_df_outliers_final = v_df[where_tuple1 | where_tuple2]\n\n\n ############################\n\n if len(v_df_outliers_final) > 0:\n\n return (v_df_outliers_final)\n\n else :\n print(\"Three are No Outliers\")",
"_____no_output_____"
],
[
"plt.scatter(df['myid'].values,df['series1'].values)\n\n\ndf = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\ndf_outliers= fx_boxplot_Rule(df['myid'],df['series1'])\ndf_outliers\n\n# where_tuple = (df_detail['series1_label']== 'outlier_series1' )\n# df_detail_filtered = df_detail[where_tuple]\n# df_detail_filtered.merge(df_outliers.rename(columns={'id':'myid'}),how='outer')\n\n# detect outliers --but it is too aggresive",
"-15839.375\n33497.625\n"
],
[
"df = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\ndf_outliers= fx_boxplot_Rule(df['myid'],df['fact1'])\ndf_outliers\n\nwhere_tuple = (df_detail['fact1_label']== 'outlier_fact1' )\ndf_detail_filtered = df_detail[where_tuple]\ndf_detail_filtered.merge(df_outliers.rename(columns={'id':'myid'}),how='outer')\n\n# detect outliers --but it is too aggresive",
"-673.438681724\n661.241453923\n"
],
[
"# Adjusted Boxplot Rule\n\nimport numpy as np\nfrom statsmodels.stats.stattools import medcouple\n\ndef fx_adjusted_boxplot_Rule(series_id, series_data ):\n '''\n value should be in +- range of -->(y > q3 + 1.5 * iqr) or (y < q1 - 1.5 * iqr\n Usage : \n df_outliers= fx_boxplot_Rule(df['myid'],df['series1'])\n df_outliers\n For presense of outliers --less sensitive than 3 sigma but more sensitive to MAD test \n No depenedence of median and mean\n better for moderately asymmetric distribution\n Drawback :\n Too agressive\n \n '''\n # warning ignore for verylarge values \n #np.seterr(invalid='ignore')\n #np.errstate(invalid='ignore')\n #np.warnings.filterwarnings('ignore')\n v_df = pd.DataFrame({})\n v_df_outliers_final = pd.DataFrame({})\n v_df['id'] = series_id\n v_df['data'] = series_data\n \n\n #############################\n q1 = np.percentile(v_df['data'], 25)\n\n q3 = np.percentile(v_df['data'], 75)\n\n iqr = q3 - q1\n \n mc = medcouple(v_df['data'])\n\n \n if (mc >= 0) :\n \n lr = q1 - 1.5 * iqr * np.exp(-4 * mc)\n ur = q3 + 1.5 * iqr * np.exp(3 * mc)\n else :\n lr = q1 - 1.5 * iqr * np.exp(-3 * mc)\n ur = q3 + 1.5 * iqr * np.exp(4 * mc)\n \n \n \n \n print (str(lr))\n print (str(ur) )\n \n where_tuple1 = (v_df['data'] > ur )\n where_tuple2 = (v_df['data'] < lr )\n v_df_outliers_final = v_df[where_tuple1 | where_tuple2]\n\n\n ############################\n\n if len(v_df_outliers_final) > 0:\n\n return (v_df_outliers_final)\n\n else :\n print(\"Three are No Outliers\")",
"_____no_output_____"
],
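A small sketch of how the medcouple stretches the Tukey fences on a skewed sample, using the same exponential fence formulas as the function above; the lognormal sample is an illustrative assumption.

```python
import numpy as np
from statsmodels.stats.stattools import medcouple

data = np.random.lognormal(mean=0.0, sigma=0.6, size=1000)     # right-skewed sample
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
mc = medcouple(data)                                           # > 0 for right-skewed data

plain = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)
if mc >= 0:
    adjusted = (q1 - 1.5 * iqr * np.exp(-4 * mc), q3 + 1.5 * iqr * np.exp(3 * mc))
else:
    adjusted = (q1 - 1.5 * iqr * np.exp(-3 * mc), q3 + 1.5 * iqr * np.exp(4 * mc))

print(mc, plain, adjusted)
# The upper fence moves out and the lower fence moves in, so fewer ordinary
# right-tail points are flagged than with the plain 1.5*IQR rule.
```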
[
"plt.scatter(df['myid'].values,df['series1'].values)\n\n\ndf = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\ndf_outliers= fx_adjusted_boxplot_Rule(df['myid'],df['series1'])\ndf_outliers\n\n# detect outliers --but it is too aggresive",
"-19827.3099645\n29258.0554116\n"
],
[
"df = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\n\ndf_outliers= fx_adjusted_boxplot_Rule (df['myid'],df['fact1'])\ndf_outliers\n\nwhere_tuple = (df_detail['fact1_label']== 'outlier_fact1' )\ndf_detail_filtered = df_detail[where_tuple]\ndf_detail_filtered.merge(df_outliers.rename(columns={'id':'myid'}),how='outer')\n\n# detect outliers --but it is too aggresive",
"-2716.60454909\n2714.03054354\n"
],
[
"# Mahalanobis Rule\n\nimport numpy as np\nfrom statsmodels.stats.stattools import medcouple\n\ndef fx_Mahalanobis_Rule(series_id, df_data ):\n '''\n value should be in +- range of -->(y > q3 + 1.5 * iqr) or (y < q1 - 1.5 * iqr\n Usage : \n df_outliers= fx_Mahalanobis_Rule(df['myid'],df.loc[:, ['dim1_v1','dim1_v2']])\n df_outliers\n For presense of outliers --less sensitive than 3 sigma but more sensitive to MAD test \n No depenedence of median and mean\n better for moderately asymmetric distribution\n Drawback :\n Too agressive\n \n '''\n # warning ignore for verylarge values \n #np.seterr(invalid='ignore')\n #np.errstate(invalid='ignore')\n #np.warnings.filterwarnings('ignore')\n v_df = df_data\n v_df_outliers_final = pd.DataFrame({})\n v_df['id'] = series_id\n \n mydata = df_data.values\n \n \n\n #############################\n q1 = np.percentile(v_df['data'], 25)\n\n q3 = np.percentile(v_df['data'], 75)\n\n iqr = q3 - q1\n \n mc = medcouple(v_df['data'])\n\n \n if (mc >= 0) :\n \n lr = q1 - 1.5 * iqr * np.exp(-4 * mc)\n ur = q3 + 1.5 * iqr * np.exp(3 * mc)\n else :\n lr = q1 - 1.5 * iqr * np.exp(-3 * mc)\n ur = q3 + 1.5 * iqr * np.exp(4 * mc)\n \n \n \n \n print (str(lr))\n print (str(ur) )\n \n where_tuple1 = (v_df['data'] > ur )\n where_tuple2 = (v_df['data'] < lr )\n v_df_outliers_final = v_df[where_tuple1 | where_tuple2]\n\n\n ############################\n\n if len(v_df_outliers_final) > 0:\n\n return (v_df_outliers_final)\n\n else :\n print(\"Three are No Outliers\")",
"_____no_output_____"
],
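For the multivariate case, a standalone sketch of the Mahalanobis idea (independent of the helper above): with correlated columns, a point can be unremarkable in each coordinate yet far from the bulk once the covariance is taken into account. The synthetic covariance, the 0.999 chi-square cutoff and the planted point are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
X = rng.multivariate_normal([0, 0], cov, size=2000)
X = np.vstack([X, [[2.5, -2.5]]])               # each coordinate is only ~2.5 sigma, but the combination is unusual

diff = X - X.mean(axis=0)
VI = np.linalg.inv(np.cov(X, rowvar=False))
md2 = np.einsum('ij,jk,ik->i', diff, VI, diff)  # squared Mahalanobis distance of each row

cutoff = chi2.ppf(0.999, df=X.shape[1])
print(np.where(md2 > cutoff)[0])                # the appended point is flagged (a couple of genuine tail points may appear too)
```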
[
"# ABOD - Adngle Based Outlier Detection\nimport numpy as np\nimport math\nimport sys\nimport pandas as pd\n\n\n# calculation norm ||.|| between each two points\n# define two norm matrix : one for AB vector and one for AC vector in order to be suitable for R version\n\n# add 0.01 In order to enable calculation of identical values (if you have two identical values, then\n# norm of them will equal to zero and you can not divide by zero)\ndef fx_norm(normalizedList):\n\n (nrow, ncol) = normalizedList.shape\n normMatrix_AB = [[] for i in range(nrow)]\n normMatrix_AC = [[] for i in range(nrow)]\n\n for i in range(nrow):\n j = 0\n for j in range(i+1):\n\n if i==j:\n normMatrix_AB[i].append(0.0)\n normMatrix_AC[i].append(0.0)\n continue\n else:\n dist2vec = normalizedList[j] - normalizedList[i]\n\n #for AB\n norm_AB = math.sqrt(sum((i*i for i in dist2vec)))\n normMatrix_AB[i].append(norm_AB)\n\n #FOR AC\n norm_AC = math.sqrt(sum((i*i for i in dist2vec)) + 0.01)\n normMatrix_AC[i].append(norm_AC)\n\n return [normMatrix_AB, normMatrix_AC]\n\n\ndef fx_abod(df_input, topOutliers):\n \n list = df_input.values.tolist()\n column_list = df_input.columns.values.tolist()\n points = np.transpose(np.array(list))\n\n # norm ||.|| between each two points\n normMatrix_AB = fx_norm(points) [0]\n normMatrix_AC = fx_norm(points) [1]\n\n (nrow,ncol) = points.shape\n abodList = []\n\n for a in range(nrow):\n\n angleList = []\n A = points[a]\n\n for b in range(nrow):\n\n if a == b :\n continue\n\n B = points[b]\n\n for c in range(b+1,nrow):\n if a == c :\n continue\n\n C = points[c]\n\n # scalar product between each two vectors\n AB = B - A\n AC = C - A\n scalarProduct = np.dot(AB, AC)\n\n # norm AB from normMatrix\n if a > b:\n norm_AB = normMatrix_AB[a] [b]\n else:\n norm_AB = normMatrix_AB[b] [a]\n\n # norm AC from normMatrix\n if a > c:\n norm_AC = normMatrix_AC[a] [c]\n else:\n norm_AC = normMatrix_AC[c] [a]\n\n\n\n # calculation angle\n try:\n cos_AB_AC = scalarProduct / (norm_AB * norm_AC )\n angle_AB_AC = math.acos(cos_AB_AC)\n factor_AB_AC = round(angle_AB_AC / (norm_AB * norm_AC ), 2)\n\n except ZeroDivisionError:\n sys.exit(\"Error! Division by Zero\")\n\n angleList.append(factor_AB_AC)\n\n abod = np.var(angleList, ddof=1)\n abodList.append(abod)\n\n\n\n # sort data by outlier factor and keep top outliers\n outlierfactor_sorted = sorted(abodList, reverse=False) [ : topOutliers]\n outliersList_sorted = [x for _, x in sorted(zip(abodList, points.tolist()), reverse=False)] [ : topOutliers]\n\n #df_result = pd.DataFrame(outliersList_sorted, columns=column_list)\n #df_result['Factor'] = outlierfactor_sorted\n\n outliersList_summary = pd.DataFrame(\n {'Outliers': outliersList_sorted,\n 'Factor': outlierfactor_sorted,\n })\n\n print(\"\\n\", \"Angle Based Outlier Detection\", \"\\n\")\n #print(df_result)\n return outliersList_summary\n\n",
"_____no_output_____"
],
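The cell above implements angle-based outlier detection; the brute-force sketch below restates the core idea on a tiny synthetic sample — the variance of the distance-weighted angles to all pairs of other points is small for isolated points. It is O(n³), so it is only meant for very small inputs.

```python
import numpy as np
from itertools import combinations

def abod_scores(points):
    """Variance of the distance-weighted angle to all pairs of other points; small variance => outlier."""
    points = np.asarray(points, dtype=float)
    scores = []
    for a in range(len(points)):
        factors = []
        for b, c in combinations([i for i in range(len(points)) if i != a], 2):
            ab = points[b] - points[a]
            ac = points[c] - points[a]
            nab, nac = np.linalg.norm(ab), np.linalg.norm(ac)
            if nab == 0 or nac == 0:
                continue                                     # skip duplicate points
            cos = np.clip(ab @ ac / (nab * nac), -1.0, 1.0)
            factors.append(np.arccos(cos) / (nab * nac))     # down-weight far-away pairs
        scores.append(np.var(factors, ddof=1))
    return np.array(scores)

pts = np.vstack([np.random.normal(0, 1, (30, 2)), [[8.0, 8.0]]])
print(np.argsort(abod_scores(pts))[:3])   # the isolated point should rank among the lowest-variance (most outlying)
```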
[
"d1 = {\"a\": 1, \"b\": 23, \"c\": 3, \"d\": 40}\nd2 = {\"a\": 1, \"b\": 23, \"c\": 33, \"d\": 42}\nd3 = {\"a\": 3, \"b\": 20, \"c\": 30, \"d\": 4}\n\ndf = pd.DataFrame([d1,d2,d3])\ndf\n\n",
"_____no_output_____"
],
[
"list = df.values.tolist()\nlist\n\n#xxx= np.transpose(np.array(list))\n#xxx",
"_____no_output_____"
],
[
"column_list = df.columns.values.tolist()\ncolumn_list",
"_____no_output_____"
],
[
"\nlist = [[0.5, 0.5, 1, 1, 2, 0, 0.75],\n [1, 0.5, 1, 0.5, 2, 1.5, 0.75],\n [1, 0.5, 1, 0.5, 2, 1.5, 0.75]]\n\n#First Parameter : data\n#Second Parameter : most top outliers\ncolumn_list = ['a', 'b', 'c', 'd' ,'e' ,'f','g']\ndf1 = pd.DataFrame(list, columns=column_list)\ndf1\nfx_abod(df1, 30)",
"\n Angle Based Outlier Detection \n\n"
],
[
"np.transpose(np.array(fx_abod(df1, 30)))",
"\n Angle Based Outlier Detection \n\n"
],
[
"list = [[0.5, 0.5, 1, 1, 2, 0, 0.75],\n [1, 0.5, 1, 0.5, 2, 1.5, 0.75],\n [1, 0.5, 1, 0.5, 2, 1.5, 0.75]]\n\n#First Parameter : data\n#Second Parameter : most top outliers\nfx_abod(list, 3)",
"\n Angle Based Outlier Detection \n\n Factor Outliers\n0 0.002850 [2.0, 2.0, 2.0]\n1 0.036129 [0.0, 1.5, 1.5]\n2 1.490792 [1.0, 1.0, 1.0]\n"
],
[
"df = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\ndf.head(2)",
"_____no_output_____"
],
[
"list= df.values.tolist()\nlist = df.loc[:, ['fact2','myid','fact1','dim1_v1','dim1_v2','dim2_v1','dim2_v2','dim2_v3']]\nabod(list, 50)",
"\n Angle Based Outlier Detection \n\n Factor Outliers\n0 0.0 [-36636.26648284877, -33804.54293962172, -2067...\n1 0.0 [-2.233036320239695, -1.6583872347941047, -2.2...\n2 0.0 [-1.570184361860036, -1.6395648648290602, -2.2...\n3 0.0 [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, ...\n4 0.0 [0.8196744232144955, 1.0685756572483065, -1.16...\n5 0.0 [0.9284639453608463, -3.395806533313391, -0.87...\n6 0.0 [1.5936491242846724, 1.8073799387505662, -4.20...\n7 0.0 [1174.1943555200428, 624.7199675172357, -24.46...\n"
],
[
"df = fx_generate_data(v_detail = False)\ndf_detail = fx_generate_data(v_detail = True)\nn1 = df.head(20)",
"_____no_output_____"
],
[
"n1['anomalydetectid']=n1.index",
"C:\\Users\\nishgarg\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"n1",
"_____no_output_____"
],
[
"from os import path\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom statsmodels import robust\nfrom statsmodels.stats.stattools import medcouple\nimport json",
"_____no_output_____"
],
[
"def fx_box_plot (v_uk_id,v_list , v_title,v_filename ):\n my_path = \"data/\" + v_uk_id + \"/\"\n fig, ax = plt.subplots()\n jpg_filename = my_path + v_filename+'.jpg'\n png_filename = my_path + v_filename+'.png'\n x = range(len(v_list))\n ax.boxplot(v_list, patch_artist=True)\n ax.set_title(v_title)\n fig.tight_layout()\n #fig.show()\n fig.savefig(jpg_filename, dpi=1000)\n fig.savefig(png_filename+'.png', dpi=1000)",
"_____no_output_____"
],
[
"def fx_scatter_plot (v_uk_id , v_id_series,v_column_series , v_title,v_filename ):\n my_path = \"db/data/\" + v_uk_id + \"/\"\n fig, ax = plt.subplots()\n jpg_filename = my_path + v_filename+'.jpg'\n png_filename = my_path + v_filename\n ax.scatter(v_id_series,v_column_series)\n ax.set_title(v_title)\n fig.tight_layout()\n #fig.show()\n #fig.savefig(jpg_filename, dpi=1000)\n fig.savefig(png_filename+'.png', dpi=1000)",
"_____no_output_____"
],
[
"def fx_dontuseunsuperviseddontuse(v_uk_id):\n #v_uk_id= 123456\n my_path = \"data/\" + v_uk_id + \"/\"\n v_input_json_file = my_path + \"details.json\"\n\n with open(v_input_json_file, encoding=\"utf-8\") as data_file:\n input_json = json.load(data_file)\n\n v_input_csv = my_path + input_json['filename']\n \n df = pd.read_csv(v_input_csv)\n df['anomalydetectid']=df.index\n ######################\n v_id = 'anomalydetectid'\n v_analysis_columns_list = input_json['dimension'] \n df_outliers = pd.DataFrame({})\n #v_analysis_columns = ''.join(v_analysis_columns_list)\n for my_analysis_column in v_analysis_columns_list:\n v_df_temp = fx_ThreeSigmaRule(df[v_id], df[my_analysis_column], 2, 1)\n v_df_temp['Outlier Type'] = 'Three Sigma Rule'\n v_df_temp['Analysis Column'] = my_analysis_column\n df_outliers = pd.concat([df_outliers, v_df_temp])\n ######################\n \n x_groupby_type = df_outliers.groupby(['Analysis Column'])\n df2 = x_groupby_type.count()\n df2.reset_index(inplace=True)\n df3 = df2.sort_values(['data'], ascending=True).head(4)\n \n for i, r in df3.iterrows():\n print (r['Analysis Column'])\n v_filename = 'image'+str(i)\n v_list = df.loc[:, [r['Analysis Column']]]\n v_title = 'Outliers for '+r['Analysis Column']\n v_column=r['Analysis Column']\n #fx_box_plot (v_uk_id,v_list , v_title,v_filename )\n #plt.scatter(df['anomalydetectid'].values,df[v_column].values)\n fx_scatter_plot(v_uk_id,df['anomalydetectid'].values,df[v_column].values, v_title,v_filename)\n \n\n return df_outliers",
"_____no_output_____"
],
[
"import json\ndef fx_unsupervised(v_uk_id):\n #v_uk_id= 123456\n my_path = \"db/data/\" + v_uk_id + \"/\"\n v_input_json_file = my_path + \"details.json\"\n v_output_json_file = my_path + \"result_desc.json\"\n v_output_result_csv = my_path+\"result_individual_columns.csv\"\n v_output_result_csv = my_path+\"result123.csv\"\n with open(v_input_json_file, encoding=\"utf-8\") as data_file:\n input_json = json.load(data_file)\n\n v_input_csv = my_path + input_json['filename']\n\n df = pd.read_csv(v_input_csv)\n df['anomalydetectid']=df.index\n ######################\n v_id = 'anomalydetectid'\n v_analysis_columns_list = input_json['dimension'] \n df_outliers = pd.DataFrame({})\n #v_analysis_columns = ''.join(v_analysis_columns_list)\n for my_analysis_column in v_analysis_columns_list:\n v_df_temp = fx_ThreeSigmaRule(df[v_id], df[my_analysis_column], 2, 1)\n v_df_temp['Outlier Type'] = 'Three Sigma Rule'\n v_df_temp['Analysis Column'] = my_analysis_column\n df_outliers = pd.concat([df_outliers, v_df_temp])\n ######################\n \n df_outliers.to_csv(v_output_result_csv, encoding=\"utf-8\", index=False, header=True)\n\n x_groupby_type = df_outliers.groupby(['Analysis Column'])\n df2 = x_groupby_type.count()\n df2.reset_index(inplace=True)\n df3 = df2.sort_values(['data'], ascending=True).head(4)\n\n for i, r in df3.iterrows():\n v_filename = 'image'+str(i)\n v_list = df.loc[:, [r['Analysis Column']]]\n v_title = 'Outliers for '+r['Analysis Column']\n v_column=r['Analysis Column']\n #fx_box_plot (v_uk_id,v_list , v_title,v_filename )\n #plt.scatter(df['anomalydetectid'].values,df[v_column].values)\n fx_scatter_plot(v_uk_id,df['anomalydetectid'].values,df[v_column].values, v_title,v_filename)\n\n ##################################json code \n v_output_json_contents = {\n \"image_title1\": \"nish1\",\n \"image_title2\": \"jfbgcjshhgsj\",\n \"image_title3\": \"jfbgcjshhgsj\",\n \"image_title4\": \"jfbgcjshhgsj\",\n \"image_name1\": \"image1.png\",\n \"image_name2\": \"image2.png\",\n \"image_name3\": \"image3.png\",\n \"image_name4\": \"image4.png\",\n \"image_desc1\": \"jfbgcjshhgsj\",\n \"image_desc2\": \"jfbgcjshhgsj\",\n \"image_desc3\": \"jfbgcjshhgsj\",\n \"image_desc4\": \"jfbgcjshhgsj\",\n \"model\": [{\"model_desc\": \"ssss\", \"model_file\": \"ssss\"}, {\"model_desc\": \"ssss\", \"model_file\": \"ssss\"}]\n }\n \n \n with open(v_output_json_file, 'w') as outfile:\n json.dump(v_output_json_contents, outfile)\n ################################## \n return df_outliers",
"_____no_output_____"
],
[
"df1111 = fx_unsupervised('123456')\ndf1111",
"1\n-4.044802755225941\n4.046361856910618\n1\n-4.037451100221525\n4.0531843623713515\n1\n-4.233880568959717\n4.25448684362637\n1\n-4.247559567099503\n4.242101247187139\n1\n-4.245344522843074\n4.234237634265637\n"
],
[
"my_path = \"db/data/\" + '123456' + \"/\"\nxxx = my_path + \"custom_anomaly_data.csv\"\ndf = pd.read_csv(xxx)\ndf['anomalydetectid']=df.index\ndf\n\n",
"_____no_output_____"
],
[
"xxxxx = pd.merge(df1111, df, on='anomalydetectid')\nxxxxx",
"_____no_output_____"
],
[
"df1['Analysis Column'].value_counts()",
"_____no_output_____"
],
[
"x_groupby_type = df1.groupby(['Analysis Column'])\ndf2 = x_groupby_type.count()\ndf2.reset_index(inplace=True)\ndf3 = df2.sort_values(['data'], ascending=True).head(4)\ndf3['Analysis Column']",
"_____no_output_____"
],
[
"import random\ndef uniqueid():\n seed = random.getrandbits(32)\n while True:\n yield seed\n seed += 1\nUsage:\nunique_sequence = uniqueid()\nid1 = next(unique_sequence)\nid2 = next(unique_sequence)\nid3 = next(unique_sequence)\nids = list(itertools.islice(unique_sequence, 1000))\n",
"_____no_output_____"
],
[
"import random\ndef fx_uniqueid():\n seed = random.getrandbits(32)\n while True:\n yield seed\n seed += 1",
"_____no_output_____"
],
[
"unique_sequence = uniqueid()\nid1 = next(unique_sequence)\nid1",
"_____no_output_____"
],
[
"def fx_analysis(v_uk_id):\n\n\n my_path = \"data/\"+v_uk_id+\"/\"\n v_input_json_file = my_path+\"details.json\"\n v_output_result_csv = my_path+\"result.csv\"\n v_output_json_file = my_path+\"result_desc.json\"\n\n\n with open(v_input_json_file, encoding=\"utf-8\") as data_file:\n input_json = json.load(data_file)\n\n v_input_csv = my_path + input_json['filename']\n \n v_learning_type = input_json['learning_type']\n \n\n if v_learning_type == 'unsupervised' :\n fx_unsupervised(v_uk_id)\n else :\n fx_unsupervised(v_uk_id)\n \n\n return 'Success'",
"_____no_output_____"
],
[
"def upload_complete():\n\n \"\"\"Saves imported folder to server\"\"\"\n\n unique_sequence = fx_uniqueid()\n my_id = next(unique_sequence)\n\n UPLOAD_FOLDER = 'db/data' + str(my_id)\n\n os.makedirs(UPLOAD_FOLDER)\n\n ALLOWED_EXTENSIONS = set(['csv'])\n\n app = Flask(__name__)\n app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER\n\n\n return render_template(\"index.html\")\n",
"_____no_output_____"
],
[
"import mpu.io\nimport tkinter\nimport scipy\n#!pip install mpu",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf77992b7d7d0b6da4e9f27b2f59a7956243542 | 160,235 | ipynb | Jupyter Notebook | examples/table.ipynb | slaclab/lume-elegant | b4c1e9d2ab72c2502bd6b937ae5b518116aa2675 | [
"Apache-2.0"
] | null | null | null | examples/table.ipynb | slaclab/lume-elegant | b4c1e9d2ab72c2502bd6b937ae5b518116aa2675 | [
"Apache-2.0"
] | null | null | null | examples/table.ipynb | slaclab/lume-elegant | b4c1e9d2ab72c2502bd6b937ae5b518116aa2675 | [
"Apache-2.0"
] | 1 | 2020-12-12T23:29:42.000Z | 2020-12-12T23:29:42.000Z | 494.552469 | 151,276 | 0.939763 | [
[
[
"# Parse SDDS table",
"_____no_output_____"
]
],
[
[
"from elegant.parsers import parse_sdds_table",
"_____no_output_____"
],
[
"?parse_sdds_table",
"_____no_output_____"
],
[
"dat = parse_sdds_table('data/LCLS2scH.twi', ['s', 'betax', 'betay', 'etax'])\ndat",
"_____no_output_____"
]
],
[
[
"## Plot",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(12,4))\nax.plot(dat['s'], dat['betax'], label=r'$\\beta_x$')\nax.plot(dat['s'], dat['betay'], label=r'$\\beta_y$')\nplt.legend()\nax.set_xlabel('s (m)')\nax.set_ylabel(r'$\\beta_{x,y}$ (m)')",
"_____no_output_____"
]
],
[
[
"## DataFrame\n\nThis is easily formed into a dataframe",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.DataFrame(dat)\ndf",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecf77e01ddba0112ba1fcbe1f01588ff9e7a8b31 | 15,931 | ipynb | Jupyter Notebook | module3/s1_keywords.ipynb | CarolScalioni/tac | 7c814a0af6840620dbcd82a9fdd779561eac4189 | [
"MIT"
] | null | null | null | module3/s1_keywords.ipynb | CarolScalioni/tac | 7c814a0af6840620dbcd82a9fdd779561eac4189 | [
"MIT"
] | null | null | null | module3/s1_keywords.ipynb | CarolScalioni/tac | 7c814a0af6840620dbcd82a9fdd779561eac4189 | [
"MIT"
] | null | null | null | 28.499106 | 523 | 0.533363 | [
[
[
"# Extraction de Keywords",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import os\nimport yake",
"_____no_output_____"
]
],
[
[
"## Extraire les mots clés d'un document avec Yake",
"_____no_output_____"
],
[
"https://github.com/LIAAD/yake",
"_____no_output_____"
]
],
[
[
"# Création d'une liste de mots à ignorer\nignored = set([\"conseil communal\", \"conseil général\"])\nignored",
"_____no_output_____"
],
[
"# Instantier l'extracteur de mots clés\nkw_extractor = yake.KeywordExtractor(lan=\"fr\", top=50)\nkw_extractor",
"_____no_output_____"
],
[
"# Lister les Fichiers\ndata_path = \"../data/txt/\"\nfiles = os.listdir(data_path)",
"_____no_output_____"
],
[
"# Imprimer le nombre de fichiers identifiés\nlen(files)",
"_____no_output_____"
],
[
"# Les dix premiers fichiers\nfiles[:20]",
"_____no_output_____"
],
[
"# Enlever les fichiers qui ne commencent pas par Bxl_\nbxl_files = [f for f in files if f.startswith('Bxl_')]\nlen(bxl_files)",
"_____no_output_____"
],
[
"# Choisir un fichier\nthis_file = bxl_files[7]\nthis_file",
"_____no_output_____"
],
[
"# Récupérer le texte du fichier\ntext = open(os.path.join(data_path, this_file), 'r').read()\ntext[:500]",
"_____no_output_____"
],
[
"# Extraire les mots clés de ce texte\nkeywords = kw_extractor.extract_keywords(text)",
"_____no_output_____"
],
[
"keywords",
"_____no_output_____"
],
[
"# Extraire les mots clés de ce texte\nkeywords = kw_extractor.extract_keywords(text.lower())",
"_____no_output_____"
],
[
"keywords",
"_____no_output_____"
],
[
"# Ne garder que les bonnes mots\nkept = []\nfor kw, score in keywords:\n words = kw.split()\n if kw.lower() not in ignored:\n kept.append(kw)\nkept",
"_____no_output_____"
]
],
[
[
"## Faire la même opération sur tous les documents",
"_____no_output_____"
]
],
[
[
"for f in sorted(bxl_files)[:10]:\n text = open(os.path.join(data_path, f), 'r').read()\n keywords = kw_extractor.extract_keywords(text.lower())\n kept = []\n for kw, score in keywords:\n words = kw.split()\n if len(words) == 2 and kw.lower() not in ignored:\n kept.append(kw)\n print(f\"{f} mentions these keywords: {', '.join(kept)}...\")",
"Bxl_1847_Tome_I1_Part_1.txt mentions these keywords: marchés couverts, d'un marché, marché couvert, nouveau marché, marché dans, marché saint-jean, marché projeté, marchés actuels, marchés marché...\nBxl_1847_Tome_I1_Part_2.txt mentions these keywords: belgique communale, l'administration communale...\nBxl_1847_Tome_I1_Part_3.txt mentions these keywords: rue royale, bons communaux, d'un marché, d'une place, qu'il faut...\nBxl_1847_Tome_I1_Part_4.txt mentions these keywords: rue royale, l'instruction primaire, qu'il faut, loi communale, rue duquesnoy, conseil provincial...\nBxl_1847_Tome_I1_Part_5.txt mentions these keywords: parce qu'il, qu'il faut...\nBxl_1848_Tome_I1_Part_1.txt mentions these keywords: d'un conseil, ouvriers patentés, qu'il faut...\nBxl_1848_Tome_I1_Part_2.txt mentions these keywords: qu'il serait, hectolitres d'eau, travaux publics, mètre cube, société civile...\nBxl_1848_Tome_I1_Part_3.txt mentions these keywords: distribution d'eau, qu'il serait, travaux publics, quantité d'eau, d'un réservoir, d'un système...\nBxl_1849_Tome_I1_Part_1.txt mentions these keywords: règlement général, règlement organique, grandes caves...\nBxl_1849_Tome_I1_Part_2.txt mentions these keywords: voie publique, travaux publics, présent règlement, conseil central, rue royale...\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf78c29e97725689a7fcee1caef4b854216b3b3 | 644,486 | ipynb | Jupyter Notebook | CardioVascular Disease Prediction/cardiovascular-disease-Prediction.ipynb | raerae33/CardioVascular-Disease-Prediction | 52252d87ed6033c22631504cc352e917ab07ae22 | [
"MIT"
] | null | null | null | CardioVascular Disease Prediction/cardiovascular-disease-Prediction.ipynb | raerae33/CardioVascular-Disease-Prediction | 52252d87ed6033c22631504cc352e917ab07ae22 | [
"MIT"
] | null | null | null | CardioVascular Disease Prediction/cardiovascular-disease-Prediction.ipynb | raerae33/CardioVascular-Disease-Prediction | 52252d87ed6033c22631504cc352e917ab07ae22 | [
"MIT"
] | null | null | null | 394.18104 | 163,432 | 0.927258 | [
[
[
"# Importing important libraries.\n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom warnings import filterwarnings\nfilterwarnings('ignore')\nplt.style.use('seaborn')\n",
"_____no_output_____"
],
[
"data = pd.read_csv('cardio_train.csv',sep=';')\nprint(f'Dataset Shape: {data.shape}')",
"Dataset Shape: (70000, 13)\n"
],
[
"data.head()",
"_____no_output_____"
]
],
[
[
"Below are some of the key assumptions that we can make about the data and will look to validate them \nwith the data in hand.\n1. With the increase in age chances of heart disease increases.\n2. Effect of height and weight. We assume that with more BMI chances of heart diesease is more.\n4. ap_hi > ap_lo. With the increaes of bp the chances of heart attack are more. Check if we have patients \n with low bp but still have the disease.\n5. With increase of cholesterol the chances of heart disease increases as per scientific tests.\n6. Increase in blood glucose levels could be a cause of increased heart risk.\n7. Check about how patient drinking and smoking habbits would increase the chances of heart risk. \n Are drinking men/women more prone to having a heart disease ?\n8. Physical Activity is assumed to help in lower cholesterol and thus lower chances of heart disease.\n\nFEATURE ENGINEERING STEPS.\n1. Use height and weight to calculate BMI of a patient and see if it has some impact on the target variable.\n2. Combine smoking and alcohol as a single feature using feature interaction.\n3. We can think of creating a feature based on age and gender of a person to check if he/she is more likely to have diseased.\n\nEDA \n1. Mens below 65 are more prone to disease than women. However, above 65 both of them share a almost common rate.\n2. Normal blood pressure range is 120/80 for ap_hi/ap_lo respectively. Check if we are having a heart diseased person with low bp. \n3. Cholesterol, glucose and hi bp effect on patient health.",
"_____no_output_____"
]
],
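The feature-engineering steps listed above can be sketched as below. Column names follow the cardio dataset used in this notebook; the age cut points and the helper name are illustrative assumptions, not the notebook's exact implementation (which appears in later cells).

```python
import pandas as pd

def add_engineered_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # 1. BMI = weight (kg) / height (m)^2 -- height in the raw file is in centimetres
    out['BMI'] = out['weight'] / ((out['height'] / 100) ** 2)
    # 2. Interaction of the two binary lifestyle flags into one categorical feature
    out['smoke/drink'] = out['smoke'].astype(str) + '|' + out['alco'].astype(str)
    # 3. A simple age/gender bucket as suggested above (age is in days, cut points are illustrative)
    out['age_gender'] = pd.cut(out['age'] / 365, bins=[0, 50, 65, 120],
                               labels=['young', 'mid', 'senior']).astype(str) + '_' + out['gender'].astype(str)
    return out

# df_fe = add_engineered_features(data)   # 'data' is the dataframe read from cardio_train.csv above
```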
[
[
"# Identifying missing values and duplicates first.\ndata.isna().sum()",
"_____no_output_____"
],
[
"duplicates = len(data) - len(data.drop(['id'],axis=1).drop_duplicates())\ndata.drop(['id'],axis=1,inplace=True)\ndata.drop_duplicates(inplace=True)\nprint(f'{duplicates} duplicate records dropped.')",
"24 duplicate records dropped.\n"
],
[
"data.shape",
"_____no_output_____"
]
],
[
[
"From the above we can see that we do not have any missing values into our dataset and also have removed 24 duplicate records.",
"_____no_output_____"
]
],
[
[
"# Let us now begin first with finding some quick descriptive stats about our data.\nprint(f'{data.dtypes.value_counts()}')",
"int64 11\nfloat64 1\ndtype: int64\n"
],
[
"print('Let us now get a quick summary of features available.')\ndata.describe().T.round(2)",
"Let us now get a quick summary of features available.\n"
],
[
"# Let us first have a look at our target variable.\nfig, ax = plt.subplots(1,1)\nsns.countplot(data['cardio'], ax = ax)\nfor i in ax.patches:\n height = i.get_height()\n ax.text(i.get_x()+i.get_width()/2,height,'{:.2f}'.format((i.get_height()/len(data['cardio']))*100,'%'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"Wow. Looks like target variable is pretty balanced, so we need not to worry about class imbalance in our problem.\n",
"_____no_output_____"
]
],
[
[
"# Age is given in days. Transforming it into years for better understanding and checking relation with the target variable.\ndata['age'] = data['age']/365",
"_____no_output_____"
],
[
"fig, (ax1,ax2) = plt.subplots(1,2, figsize=(20,10))\nsns.distplot(data['age'][data['cardio']==0], ax = ax1, color='green')\nsns.distplot(data['age'][data['cardio']==1], ax = ax1,color='coral')\nax1.set_title('Age Distribution')\nax1.legend()\n\nsns.distplot(data['age'][(data['gender']==1) & (data['cardio']==1)],ax = ax2,color='pink')\nsns.distplot(data['age'][(data['gender']==2) & (data['cardio']==1)],ax = ax2,color='blue')\nax2.set_title('Disease count distribution by gender, aged below 54.')\nplt.show()",
"No handles with labels found to put in legend.\n"
]
],
[
[
"People above the age of 54 are more likely to have diseased then below, also males below 50 are more likely to have been diagnosed with heart disease than females which confirms our assumption, even though the difference is not that drastic.",
"_____no_output_____"
]
],
[
[
"fig, (ax1) = plt.subplots(1,1, figsize=(10,10))\nsns.boxenplot(data['cardio'],(data['height']*0.0328084),ax=ax1)\nax1.set_title('Height / Diseased')\nplt.show()",
"_____no_output_____"
]
],
[
[
"From the above plot we can see that there are certain outliers in the feature.\nFor eg:\nThere are persons with more than 8 foot height which definitely looks and outlier \nAlso, there are few with even less then 3 foot in height which could be children. \nTo confirm this we need to check their weight and age and decide if they are outliers or could be a valid entry.\n",
"_____no_output_____"
]
],
[
[
"fig, (ax1,ax2) = plt.subplots(1,2, figsize=(20,10))\nsns.scatterplot(data['age'],data['height'][(data['height']*0.0328084)<4]*0.0328084,hue=data['cardio'],ax=ax1)\nax1.set_title('Height vs Age')\nsns.scatterplot(data['weight'],data['height'][(data['height']*0.0328084)<4]*0.0328084,hue=data['cardio'],ax=ax2)\nax2.set_title('Height vs Weight')\nplt.show()",
"_____no_output_____"
]
],
[
[
"From the above we can see that the people with below 4 foot in height are mostly aged above 40 and have a weight above 40kg mostly.\nThis definitely confirms that they are not children. Now for our analytical purposes we can delete such records from our data as they are hinting more towards outliers.",
"_____no_output_____"
]
],
[
[
"# Converting height in cms to foot.\ndata['height'] = data['height']*0.0328084 \nfilt =(data['height']>8) | (data['height']<3) \n\ndata.drop(index = list(data[filt].index),inplace=True)\nprint(f'Dataset: {data.shape}')",
"Dataset: (69950, 12)\n"
],
[
"fig, (ax1,ax2) = plt.subplots(1,2, figsize=(20,5))\nsns.boxenplot(data['cardio'],(data['weight']),ax=ax1)\nax1.set_title('Weight / Diseased')\nsns.scatterplot(data['weight'],data['height'],ax=ax2,hue=data['cardio'])\nax2.set_title('height vs weight')\nplt.show()",
"_____no_output_____"
]
],
[
[
"From the above plots we can see that there are persons with more than 155 kgs of weight with height less than 4.5 foot which seems like a bit abnormal.\nAlso, there are people with less than 25kg of weight and there are ones with more than 175 kg of weight which looks like an outlier to me.\nWe will eliminate all such records from our analysis.",
"_____no_output_____"
]
],
[
[
"# 1. Weight < 25 kg\nfilt1 = data['weight']<25\ndata.drop(index=list(data[filt1].index),inplace=True)\n\n# 2. Weight > 175 kg\nfilt2 = data['weight']>175\ndata.drop(index=list(data[filt2].index),inplace=True)\n\n# 3. Height < 4.5 & Weight > 150 kg\nfilt3 = (data['height']<4.5) & (data['weight']>150)\ndata.drop(index=list(data[filt3].index),inplace=True)",
"_____no_output_____"
],
[
"# Gender\nfig,(ax) = plt.subplots(1,1)\ntmp = pd.crosstab(data['gender'],data['cardio'],normalize='index').round(4)*100\ntmp.reset_index()\ntmp.columns = ['Not Diseased','Diseased']\nax1 = sns.countplot(data['gender'],order = list(tmp.index))\nax2 = ax1.twinx()\nsns.pointplot(tmp.index,tmp['Diseased'],order = list(tmp.index),ax=ax2, color='red')\nfor x in ax1.patches:\n height = x.get_height()\n ax1.text(x.get_x()+x.get_width()/2,height,'{:.2f}{}'.format((height/len(data))*100,'%'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"Looks like men are more likely to have diseased then women.",
"_____no_output_____"
]
],
[
[
"# ap_hi\nfilt = (data['ap_hi']<90) | (data['ap_hi']>140)\nprint(f'Normal systolic blood pressure range is between 90 and 120. However, from our dataset we can see that we have {len(data[filt])} records that are not falling within the normal range. We can replace them with their median values.')",
"Normal systolic blood pressure range is between 90 and 120. However, from our dataset we can see that we have 10206 records that are not falling within the normal range. We can replace them with their median values.\n"
],
[
"data['ap_hi'].replace(data[filt]['ap_hi'].values,data['ap_hi'].median(),inplace=True)",
"_____no_output_____"
],
[
"sns.boxenplot(data['cardio'],data['ap_lo'][data['ap_lo']<150])\nplt.show()",
"_____no_output_____"
],
[
"# cholesterol\ntmp = pd.crosstab(data['cholesterol'],data['cardio'],normalize='index')\ntmp.reset_index()\ntmp.columns = ['not diseased','diseased']\nfig, ax = plt.subplots(1,1)\nsns.countplot(data['cholesterol'],order=list(tmp.index), ax=ax)\nplot2 = ax.twinx()\nsns.pointplot(tmp.index,tmp['diseased'],order=list(tmp.index),ax=plot2)\nfor patch in ax.patches:\n height = patch.get_height()\n ax.text(patch.get_x()+patch.get_width()/2,height,'{:.2f}{}'.format(height/len(data['cholesterol'])*100,'%'))\nplt.show()\n\n",
"_____no_output_____"
]
],
[
[
"The above plot shows that cholesterol has a great impact over the diseased state of a person.",
"_____no_output_____"
]
],
[
[
"# Glucose\ntmp = pd.crosstab(data['gluc'],data['cardio'],normalize='index')\ntmp.reset_index()\ntmp.columns = ['not diseased','diseased']\nfig, ax = plt.subplots(1,1)\nsns.countplot(data['gluc'],order=list(tmp.index), ax=ax)\nplot2 = ax.twinx()\nsns.pointplot(tmp.index,tmp['diseased'],order=list(tmp.index),ax=plot2)\nfor patch in ax.patches:\n height = patch.get_height()\n ax.text(patch.get_x()+patch.get_width()/2,height,'{:.2f}{}'.format(height/len(data['gluc'])*100,'%'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"Similar to cholesterol, a person with high glucose levels is also more prone to have got diseased. Diabetic people BEWARE !",
"_____no_output_____"
],
[
"We would now combine the smoking and drinking habbits of a person into a single feature **'***smoke/drink***' **and study its impact.",
"_____no_output_____"
]
],
[
[
"data['smoke/drink'] = data['smoke'].apply(str)+'|'+data['alco'].apply(str)\n\ntmp = pd.crosstab(data['smoke/drink'],data['cardio'],normalize='index')\ntmp.reset_index()\ntmp.columns = ['Not diseased','diseased']\n\nfig, ax = plt.subplots(1,1)\nsns.countplot(data['smoke/drink'],order=list(tmp.index), ax=ax)\nplot2 = ax.twinx()\nsns.pointplot(tmp.index,tmp['diseased'],order=list(tmp.index),ax=plot2)\nfor patch in ax.patches:\n height = patch.get_height()\n ax.text(patch.get_x()+patch.get_width()/2,height,'{:.2f}{}'.format(height/len(data['smoke/drink'])*100,'%'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"Amongst all the people who dosen't smoke but drink seems to have the highest chances of having diseased. This seems a bit off from what the normal belief.",
"_____no_output_____"
]
],
[
[
"df_smoke_drink = pd.get_dummies(data['smoke/drink'],prefix='smoke/drink',drop_first=True)\ndata = pd.concat([data,df_smoke_drink],axis=1)\ndata.drop(['smoke/drink'],axis=1,inplace=True)\n# data.head()",
"_____no_output_____"
]
],
[
[
"We would also now create a feature BMI using the height and weight of a person and see it's impact on target variable.",
"_____no_output_____"
]
],
[
[
"# BMI = weight(kg)/height(m2)\ndata['BMI'] = data['weight']/(((data['height']/0.0328084)*.01)**2)",
"_____no_output_____"
],
[
"fig, (ax1,ax2) = plt.subplots(1,2, figsize=(20,10))\nsns.boxenplot(data['cardio'],data['BMI'],ax=ax1)\nsns.distplot(data[data['cardio']==0]['BMI'],color='g',ax=ax2)\nsns.distplot(data[data['cardio']==1]['BMI'],color='b',ax=ax2)\nplt.show()",
"_____no_output_____"
]
],
[
[
"From the above plot we can see that chances of people getting diseased is more when there BMI increases beyond 25.",
"_____no_output_____"
]
],
[
[
"# Modelling** ( WIP )\nStill working on it but have posted it. Might help someone.\n\nPlease UPVOTE if you liked the kernel.\n",
"_____no_output_____"
]
],
[
[
"# The very first thing that we need to do is to break our data into training and test sets. \nfrom sklearn.model_selection import train_test_split\ntrain,test = train_test_split(data, test_size = 0.25, random_state=42)\nprint (f'The shapes of our train & test data is {train.shape} and {test.shape} respectively.')",
"The shapes of our train & test data is (52445, 16) and (17482, 16) respectively.\n"
],
[
"# Logistic Regression model assumes that there should be no multi-colinearity amongst the variables. \nfig, ax = plt.subplots(1,1, figsize=(20,10))\nsns.heatmap(train.corr().sort_values(by='cardio'), annot=True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"1. Here we will be implementing Logistic Regression both in statsmodel and sklearn. Both of them have there own pros.\n\n\neg:- sklearn provides ease of implementation while the logistic regression gives us better model statistics ",
"_____no_output_____"
]
],
[
[
"# Logistic Regresssion - Selecting best penalty value for our Regularized model in scikit- learn\n\nX = np.array(train.drop(['cardio','height','weight','gender','alco','smoke'], axis=1))\ny = np.array(train['cardio'])\nX_test = np.array(test.drop(['cardio','height','weight','gender','alco','smoke'], axis=1))\ny_act = np.array(test['cardio'])\n\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nX = scaler.fit_transform(X)\nX_test = scaler.transform(X_test)\n\nfrom sklearn.model_selection import KFold\nkfold = KFold(n_splits=10, random_state=42)\n\nfrom sklearn.linear_model import LogisticRegression\nlog_classifier = LogisticRegression()\n\nfrom sklearn.model_selection import GridSearchCV\nparams = {'C':[0.001, 0.1,1,10,100,1000]}\ngrid = GridSearchCV(log_classifier, cv=kfold, param_grid=params)\ngrid.fit(X,y)\ngrid.best_params_",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score, confusion_matrix\nlog_classifier = LogisticRegression(C=10)\n\nlog_classifier.fit(X,y)\nprint(f'Train Score: {log_classifier.score(X,y)}')\n\ny_pred = log_classifier.predict(X_test)\nprint(f'Test Score: {accuracy_score(y_act,y_pred)}')",
"Train Score: 0.6877300028601392\nTest Score: 0.6856766960302025\n"
],
[
"# Statsmodel implementation.\n\nimport statsmodels.api as sm\nx1 = sm.add_constant(X)\nfeatures = list(train.drop(['cardio','height','weight','gender','alco','smoke'],axis=1).columns)\nfeatures.insert(0,'const')\nlog_reg = sm.Logit(y,x1)\nresults = log_reg.fit()\nresults.summary(xname=features)",
"Optimization terminated successfully.\n Current function value: 0.595693\n Iterations 6\n"
],
[
"print(f'Accuracy {(results.pred_table()[0][0]+results.pred_table()[1][1])/len(train)}')\nprint(f'{results.pred_table()}')",
"Accuracy 0.6876918676708933\n[[18837. 7454.]\n [ 8925. 17229.]]\n"
],
[
"X_test = test.drop(['cardio','height','weight','gender','alco','smoke'], axis=1)\nx1_test = sm.add_constant(X_test)\ny_pred = results.predict(x1_test)",
"_____no_output_____"
]
],
[
[
"From the above we can see that we have received an overall accuracy of around 69% using our Logistic Regression model.",
"_____no_output_____"
]
],
[
[
"# Decision Tree Classifier\n\nX = np.array(train.drop(['cardio'], axis=1))\ny = np.array(train['cardio'])\nX_test = np.array(test.drop(['cardio'], axis=1))\ny_act = np.array(test['cardio'])\n\nfrom sklearn.tree import DecisionTreeClassifier\ndt = DecisionTreeClassifier(criterion = 'gini', random_state=42, max_depth=10)\ndt.fit(X,y)\nprint(f'Train Accuracy for Decision Tree is : {dt.score(X,y)}')",
"Train Accuracy for Decision Tree is : 0.7401468204785966\n"
],
[
"print(f'Test Accuracy Score for Decision Tree is ; {accuracy_score(y_act,dt.predict(X_test))}')\nconfusion_matrix(y_act,dt.predict(X_test))",
"Test Accuracy Score for Decision Tree is ; 0.7167944171147466\n"
],
[
"# Support Vector Machines\nX = np.array(train.drop(['cardio'],axis=1))\ny = np.array(train['cardio'])\nX_test = np.array(test.drop(['cardio'],axis=1))\ny_act = test['cardio']\n\nfrom sklearn.preprocessing import MinMaxScaler\nscaler = MinMaxScaler()\nX = scaler.fit_transform(X)\nX_test = scaler.transform(X_test)\n\nfrom sklearn.svm import SVC\nsvc = SVC()\n\n# params = {'C':[0.1, 1, 10, 100], 'gamma':[1, 0.1, 0.01, 0.001]}\n# grid = GridSearchCV(svc, param_grid=params)\n# grid.fit(X,y)\n# grid.best_params_\n\nsvc.fit(X,y)\nprint(f'Train Score: {svc.score(X,y)}')\n\n\ny_pred = svc.predict(X_test)\nprint(f'Test Score: {accuracy_score(y_act,y_pred)}')",
"_____no_output_____"
],
[
"# Naive Bayes Classifier\nX = np.array(train.drop(['cardio'],axis=1))\ny = np.array(train['cardio'])\nX_test = np.array(test.drop(['cardio'],axis=1))\ny_act = test['cardio']\n\nfrom sklearn.naive_bayes import GaussianNB\ngb = GaussianNB()\ngb.fit(X,y)\nprint(f'Train Score: {gb.score(X,y)}')\n\n\ny_pred = gb.predict(X_test)\nprint(f'Test Score: {accuracy_score(y_act,y_pred)}')\n\n",
"_____no_output_____"
]
],
[
[
"## Ensemble Methods",
"_____no_output_____"
]
],
[
[
"# Random Forest\n\nX = np.array(train.drop(['cardio'], axis=1))\ny = np.array(train['cardio'])\nX_test = np.array(test.drop(['cardio'], axis=1))\ny_act = np.array(test['cardio'])\n\nfrom sklearn.ensemble import RandomForestClassifier\nparam = {'n_estimators': [10, 20, 40, 80, 160, 300], 'random_state': [42], 'criterion': ['gini'], 'max_depth': [2, 4, 8, 16, 32]}\nrf = RandomForestClassifier()\ngrid = GridSearchCV(rf,param)\ngrid.fit(X,y)\ngrid.best_params_\n",
"_____no_output_____"
],
[
"rf_upd = RandomForestClassifier(n_estimators=40, criterion='gini', max_depth=8, random_state=42)\nrf_upd.fit(X,y)\nprint(f'Train Score: {rf_upd.score(X,y)}')\n\ny_pred = rf_upd.predict(X_test)\nprint(f'Test Accuracy: {accuracy_score(y_act,y_pred)}')",
"_____no_output_____"
],
[
"Feature_importances = pd.concat([pd.Series(test.drop(['cardio','ap_lo'], axis=1).columns),pd.Series(rf_upd.feature_importances_)],axis=1).sort_values(by=1, ascending=False)\nFeature_importances.columns = ['Feature','Weights']\nFeature_importances",
"_____no_output_____"
],
[
"# Gradient Boosting \nX = np.array(train.drop(['cardio'],axis=1))\ny = np.array(train['cardio'])\nX_test = np.array(test.drop(['cardio'],axis=1))\ny_act = test['cardio']\n\nfrom xgboost import XGBClassifier\nxgb = XGBClassifier()\nxgb.fit(X,y)\nprint(f'Train Score: {xgb.score(X,y)}')\n\ny_pred = xgb.predict(X_test)\nprint(f'Test Accuracy: {accuracy_score(y_act,y_pred)}')",
"_____no_output_____"
]
],
[
[
"## Deep Learning ",
"_____no_output_____"
]
],
[
[
"# ANN\nimport tensorflow as tf\nimport keras as ks\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\nX = np.array(train.drop(['cardio'],axis=1))\ny = np.array(train['cardio'])\nX_test = np.array(test.drop(['cardio'],axis=1))\ny_act = test['cardio']\n\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nX = scaler.fit_transform(X)\nX_test = scaler.fit_transform(X_test)\n\nclassifier = Sequential()\n\n\nclassifier.add(Dense(8, input_shape=(15,), activation = 'relu'))\nclassifier.add(Dense(8, activation = 'relu'))\n\nclassifier.add(Dense(1, activation = 'sigmoid'))\n\nclassifier.compile('adam',loss='binary_crossentropy', metrics=['accuracy'])\n\nclassifier.fit(X,y, batch_size= 10, epochs = 100)",
"_____no_output_____"
],
[
"y_pred = classifier.predict(X_test)\ny_pred = (y_pred >= 0.5)\nprint(f'Test Accuracy for Deep Learning: {accuracy_score(y_act,y_pred)}')",
"_____no_output_____"
]
],
[
[
"The max accuracy that we have achieved by running our deep learning model with default parameters and running for 100 epochs is 71.52% on test data.\n\nHowever, there is lot of scope for improvement in all the above models which I will be working on in the next version.\n\nPlease UPVOTE if you really find it helpful and also provide your valuable feedback in comments.\nThis would really help me as well as others to learn better.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"raw"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecf79552e9447929910838946a7aa2fc0070b334 | 62,014 | ipynb | Jupyter Notebook | Chapter09/Kaggle_Winners.ipynb | omdgit/Hands-On-Gradient-Boosting-with-XGBoost-and-Scikit-learn | 6169ee9a256955e6d1b5cf4dd794c6741c464fbb | [
"MIT"
] | 55 | 2020-10-19T02:51:33.000Z | 2022-03-31T04:09:37.000Z | Chapter09/Kaggle_Winners.ipynb | alequech/Hands-On-Gradient-Boosting-with-XGBoost-and-Scikit-learn | b52048729891f1f3011491e181d10c88e9e867fd | [
"MIT"
] | 1 | 2022-03-21T03:56:36.000Z | 2022-03-21T03:56:36.000Z | Chapter09/Kaggle_Winners.ipynb | alequech/Hands-On-Gradient-Boosting-with-XGBoost-and-Scikit-learn | b52048729891f1f3011491e181d10c88e9e867fd | [
"MIT"
] | 52 | 2020-01-10T06:34:31.000Z | 2022-03-29T14:03:39.000Z | 32.968634 | 117 | 0.408456 | [
[
[
"import pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import cross_val_score\nfrom xgboost import XGBClassifier, XGBRFClassifier\nfrom sklearn.ensemble import RandomForestClassifier, StackingClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split, StratifiedKFold\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.ensemble import VotingClassifier\n# Silence warnings\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"import pandas as pd\ndf = pd.read_csv('cab_rides.csv', nrows=10000)\ndf.head()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 10000 entries, 0 to 9999\nData columns (total 10 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 distance 10000 non-null float64\n 1 cab_type 10000 non-null object \n 2 time_stamp 10000 non-null int64 \n 3 destination 10000 non-null object \n 4 source 10000 non-null object \n 5 price 9227 non-null float64\n 6 surge_multiplier 10000 non-null float64\n 7 id 10000 non-null object \n 8 product_id 10000 non-null object \n 9 name 10000 non-null object \ndtypes: float64(3), int64(1), object(6)\nmemory usage: 781.4+ KB\n"
],
[
"df[df.isna().any(axis=1)]",
"_____no_output_____"
],
[
"df.dropna(inplace=True)",
"_____no_output_____"
],
[
"df['date'] = pd.to_datetime(df['time_stamp'])\ndf.head()",
"_____no_output_____"
],
[
"df['date'] = pd.to_datetime(df['time_stamp']*(10**6))\ndf.head()",
"_____no_output_____"
],
[
"import datetime as dt\ndf['month'] = df['date'].dt.month\ndf['hour'] = df['date'].dt.hour\ndf['dayofweek'] = df['date'].dt.dayofweek",
"_____no_output_____"
],
[
"def weekend(row):\n if row['dayofweek'] in [5,6]:\n return 1\n else:\n return 0\n\ndf['weekend'] = df.apply(weekend, axis=1)",
"_____no_output_____"
],
[
"def rush_hour(row):\n if (row['hour'] in [6,7,8,9,15,16,17,18]) & (row['weekend'] == 0):\n return 1\n else:\n return 0\n\ndf['rush_hour'] = df.apply(rush_hour, axis=1)",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"df['cab_type'].value_counts()",
"_____no_output_____"
],
[
"df['cab_freq'] = df.groupby('cab_type')['cab_type'].transform('count')",
"_____no_output_____"
],
[
"df['cab_freq'] = df['cab_freq']/len(df)",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"from category_encoders.target_encoder import TargetEncoder",
"_____no_output_____"
],
[
"encoder = TargetEncoder()\ndf['cab_type_mean'] = encoder.fit_transform(df['cab_type'], df['price'])",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"from sklearn.datasets import load_breast_cancer",
"_____no_output_____"
],
[
"X, y = load_breast_cancer(return_X_y=True)",
"_____no_output_____"
],
[
"kfold = StratifiedKFold(n_splits=5)",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_score\n\ndef classification_model(model):\n # Obtain scores of cross-validation using 5 splits\n scores = cross_val_score(model, X, y, cv=kfold)\n\n # Return mean score\n return scores.mean()",
"_____no_output_____"
],
[
"classification_model(XGBClassifier())",
"_____no_output_____"
],
[
"classification_model(XGBClassifier(booster='gblinear'))",
"_____no_output_____"
],
[
"classification_model(XGBClassifier(booster='dart', one_drop=True))",
"_____no_output_____"
],
[
"classification_model(RandomForestClassifier(random_state=2))",
"_____no_output_____"
],
[
"classification_model(LogisticRegression(max_iter=10000))",
"_____no_output_____"
],
[
"classification_model(XGBClassifier(n_estimators=800, max_depth=4, colsample_bylevel=0.8))",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)",
"_____no_output_____"
],
[
"def y_pred(model):\n model.fit(X_train, y_train)\n y_pred = model.predict(X_test)\n score = accuracy_score(y_pred, y_test)\n print(score)\n return y_pred",
"_____no_output_____"
],
[
"y_pred_gbtree = y_pred(XGBClassifier())",
"0.951048951048951\n"
],
[
"y_pred_dart = y_pred(XGBClassifier(booster='dart', one_drop=True))",
"0.951048951048951\n"
],
[
"y_pred_forest = y_pred(RandomForestClassifier(random_state=2))",
"0.9370629370629371\n"
],
[
"y_pred_logistic = y_pred(LogisticRegression(max_iter=10000))",
"0.9370629370629371\n"
],
[
"y_pred_xgb = y_pred(XGBClassifier(max_depth=2, n_estimators=500, learning_rate=0.1))",
"0.965034965034965\n"
],
[
"df_pred = pd.DataFrame(data= np.c_[y_pred_gbtree, y_pred_dart, y_pred_forest, y_pred_logistic, y_pred_xgb], \n columns=['gbtree', 'dart', 'forest', 'logistic', 'xgb'])",
"_____no_output_____"
],
[
"df_pred.corr()",
"_____no_output_____"
],
[
"estimators = []\nlogistic_model = LogisticRegression(max_iter=10000)\nestimators.append(('logistic', logistic_model))\nxgb_model = XGBClassifier(max_depth=2, n_estimators=500, learning_rate=0.1)\nestimators.append(('xgb', xgb_model))\nrf_model = RandomForestClassifier(random_state=2)\nestimators.append(('rf', rf_model))\nensemble = VotingClassifier(estimators)\nscores = cross_val_score(ensemble, X, y, cv=kfold)\nprint(scores.mean())",
"0.9754075454122031\n"
],
[
"base_models = []\nbase_models.append(('lr', LogisticRegression()))\nbase_models.append(('xgb', XGBClassifier()))\nbase_models.append(('rf', RandomForestClassifier(random_state=2)))\n# define meta learner model\nmeta_model = LogisticRegression()\n# define the stacking ensemble\nclf = StackingClassifier(estimators=base_models, final_estimator=meta_model)\nscores = cross_val_score(clf, X, y, cv=kfold)\nprint(scores.mean())",
"0.9789318428815401\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf7b5877e597a834a7958c660822c8f10e4acb9 | 116,108 | ipynb | Jupyter Notebook | notebooks/event2mind_sandbox_md.ipynb | rhofvendahl/VisualParse | f27eff06de75e7d39ead11f25685f1c2de4b44fb | [
"Unlicense"
] | 1 | 2019-02-28T09:32:21.000Z | 2019-02-28T09:32:21.000Z | notebooks/event2mind_sandbox_md.ipynb | rhofvendahl/visualParse | f27eff06de75e7d39ead11f25685f1c2de4b44fb | [
"Unlicense"
] | 1 | 2019-12-10T21:22:54.000Z | 2019-12-10T21:22:54.000Z | notebooks/event2mind_sandbox_md.ipynb | rhofvendahl/visualParse | f27eff06de75e7d39ead11f25685f1c2de4b44fb | [
"Unlicense"
] | null | null | null | 46.147854 | 2,959 | 0.532366 | [
[
[
"# Text to Propositions",
"_____no_output_____"
]
],
[
[
"import spacy\nimport textacy\n\nnlp = spacy.load('en_core_web_md')",
"_____no_output_____"
],
[
"doc = nlp(\"So I have had a good day today. I found out we got the other half of our funding for my travel grant, which paid for my friend to come with me. So that’s good, she and I will both get some money back. I took my dogs to the pet store so my girl dog could get a new collar, but she wanted to beat everyone up. This is an ongoing issue with her. She’s so little and cute too but damn she acts like she’s gonna go for the jugular with everyone she doesn’t know! She did end up with a cute new collar tho, it has pineapples on it. I went to the dentist and she’s happy with my Invisalign progress. We have three more trays and then she does an impression to make sure my teeth are where they need to be before they get the rest of the trays. YAY! And I don’t have to make another payment until closer to the end of my treatment. I had some work emails with the festival, and Jessie was bringing up some important points, and one of our potential artists was too expensive to work with, so Mutual Friend was asking for names for some other people we could work with. So I suggested like, three artists, and Jessie actually liked the idea of one of them doing it. Which is nice. I notice she is very encouraging at whatever I contribute to our collective. It’s sweet. I kind of know this is like, the only link we have with each other right now besides social media, so it seems like she’s trying to make sure I know she still wants me to be involved and doesn’t have bad feelings for me. And there was a short period when I was seriously thinking of leaving the collective and not working with this festival anymore. I was so sad, and felt so upset, and didn’t know what to do about Jessie. It felt really close to me throwing in the towel. But I hung on through the festival and it doesn’t seem so bad from this viewpoint now with more time that has passed. And we have been gentle, if reserved, with each other. I mean her last personal email to me however many weeks ago wasn’t very nice. But it seems like we’ve been able to put it aside for work reasons. I dunno. I still feel like if anything was gonna get mended between us, she would need to make the first moves on that. I really don’t want to try reaching out and get rejected even as a friend again. I miss her though. And sometimes I think she misses me. But I don’t want to approach her assuming we both miss each other and have her turn it on me again and make out like all these things are all in my head. I don’t know about that butch I went on a date with last night. I feel more of a friend vibe from her, than a romantic one. I can’t help it, I am just not attracted to butches. And I don’t know how to flirt with them. And I don’t think of them in a sexy way. But I WOULD like another butch buddy. I mean yeah maybe Femmes do play games, or maybe I just chased all the wrong Femmes. Maybe I’ll just leave this and not think about it much until I get back to town in January.\")",
"_____no_output_____"
]
],
[
[
"## Extract Subject Verb Object Triples",
"_____no_output_____"
]
],
[
[
"svo_triples = textacy.extract.subject_verb_object_triples(doc)\n\nfor triple in svo_triples:\n print(triple)",
"(I, have had, day)\n(we, got, half)\n(she, get, money)\n(I, get, money)\n(I, took, dogs)\n(girl dog, could get, collar)\n(she, wanted, to beat)\n(This, is, issue)\n(she, ’s gon, na go)\n(it, has, pineapples)\n(We, have, trays)\n(she, does, impression)\n(they, need, to be)\n(they, get, rest)\n(I, don’t have, to make)\n(I, had, work emails)\n(Jessie, was bringing, points)\n(Jessie, liked, idea)\n(this, is, link)\n(she, ’s trying, to make)\n(there, was, period)\n(It, felt, throwing)\n(it, doesn’t seem, bad)\n(I, mean, email)\n(I, mean, to)\n(anything, was gon, na get mended)\n(she, would need, to make)\n(I, don’t want, to try)\n(I, miss, her)\n(she, misses, me)\n(I, don’t want, to approach)\n(we, miss, other)\n(her, turn, it)\n(I, feel, more)\n(I, can’t help, it)\n(I, don’t know, to flirt)\n(Femmes, do play, games)\n(I, chased, Femmes)\n(I, leave, this)\n"
]
],
[
[
"## Extract Named Entities",
"_____no_output_____"
]
],
[
[
"for ent in doc.ents:\n print(ent.text, ent.label_)",
"a good day DATE\ntoday DATE\nthe other half CARDINAL\nthree CARDINAL\nJessie PERSON\none CARDINAL\nMutual Friend ORG\nthree CARDINAL\nJessie PERSON\none CARDINAL\nJessie PERSON\nfirst ORDINAL\nlast night TIME\nFemmes PERSON\nJanuary DATE\n"
],
[
"people = [ent for ent in doc.ents if ent.label_ == 'PERSON']\npeople",
"_____no_output_____"
],
[
"# returns (entity, cue, fragment)\nstatements = textacy.extract.semistructured_statements(doc, 'I', cue='feel')\n\nfor entity, cue, fragment in statements:\n print(entity, cue, '-->', fragment)",
"I am feeling --> kinda tired\nI feel --> overwhelmed, a bit, maybe hungry\nI feel --> stressed certainly, too much to do maybe\n"
],
[
"# get cues\nall_statements = []\nfor sent in doc.sents:\n verbs = textacy.spacier.utils.get_main_verbs_of_sent(sent)\n print('sent:', sent, '\\nverbs:', verbs)\n for verb in verbs:\n objects = textacy.spacier.utils.get_objects_of_verb(verb)\n subjects = textacy.spacier.utils.get_subjects_of_verb(verb)\n for subject in subjects:\n statements = textacy.extract.semistructured_statements(doc, subject.text, verb.lemma_)\n for statement in statements:\n print(subject, verb, statement)\n all_statements += [statement]\n \n print('\\n')\nfor statement in set(all_statements):\n print(statement)",
"sent: I guess I am feeling kinda tired. \nverbs: [guess, feeling]\nI guess (I, guess, I am feeling kinda tired)\nI feeling (I, am feeling, kinda tired)\nI feeling (I, feel, overwhelmed, a bit, maybe hungry)\nI feeling (I, feel, stressed certainly, too much to do maybe)\n\n\nsent: I feel overwhelmed, a bit, maybe hungry. \nverbs: [feel]\nI feel (I, am feeling, kinda tired)\nI feel (I, feel, overwhelmed, a bit, maybe hungry)\nI feel (I, feel, stressed certainly, too much to do maybe)\n\n\nsent: I dunno. \nverbs: [dunno]\n\n\nsent: I find myself wanting something, but I'm not sure what it is. \nverbs: [find, wanting, 'm, is]\nI find (I, find, myself wanting something, but I'm not sure what it is)\nI 'm (I, 'm, not sure what it is)\nI 'm (I, 'm, not totally sure what I should be doing)\n\n\nsent: I feel stressed certainly, too much to do maybe? \nverbs: [feel, do]\nI feel (I, am feeling, kinda tired)\nI feel (I, feel, overwhelmed, a bit, maybe hungry)\nI feel (I, feel, stressed certainly, too much to do maybe)\n\n\nsent: But I'm not totally sure what I should be doing? \nverbs: ['m, doing]\nI 'm (I, 'm, not sure what it is)\nI 'm (I, 'm, not totally sure what I should be doing)\n\n\n(I, feel, stressed certainly, too much to do maybe)\n(I, 'm, not totally sure what I should be doing)\n(I, am feeling, kinda tired)\n(I, 'm, not sure what it is)\n(I, feel, overwhelmed, a bit, maybe hungry)\n(I, find, myself wanting something, but I'm not sure what it is)\n(I, guess, I am feeling kinda tired)\n"
],
[
"from allennlp.predictors import Predictor\npredictor = Predictor.from_path(\"https://s3-us-west-2.amazonaws.com/allennlp/models/decomposable-attention-elmo-2018.02.19.tar.gz\")",
"12/05/2018 21:56:34 - INFO - allennlp.models.archival - loading archive file https://s3-us-west-2.amazonaws.com/allennlp/models/decomposable-attention-elmo-2018.02.19.tar.gz from cache at /home/russell/.allennlp/cache/1dbdfb3ce5af46c5b83353727b579a5596d45a121d59199f1c838928a87e3796.21e6e14db76ce734b669577cc3046333c6bc853767246356b4a8b2c6a85249a8\n12/05/2018 21:56:34 - INFO - allennlp.models.archival - extracting archive file /home/russell/.allennlp/cache/1dbdfb3ce5af46c5b83353727b579a5596d45a121d59199f1c838928a87e3796.21e6e14db76ce734b669577cc3046333c6bc853767246356b4a8b2c6a85249a8 to temp dir /tmp/tmpdvw280lq\n12/05/2018 21:56:42 - INFO - allennlp.common.params - type = default\n12/05/2018 21:56:42 - INFO - allennlp.data.vocabulary - Loading token dictionary from /tmp/tmpdvw280lq/vocabulary.\n12/05/2018 21:56:42 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.models.model.Model'> from params {'initializer': [['.*linear_layers.*weight', {'type': 'xavier_normal'}], ['.*token_embedder_tokens\\\\._projection.*weight', {'type': 'xavier_normal'}]], 'type': 'decomposable_attention', 'aggregate_feedforward': {'activations': ['relu', 'linear'], 'dropout': [0.2, 0], 'hidden_dims': [200, 3], 'input_dim': 400, 'num_layers': 2}, 'similarity_function': {'type': 'dot_product'}, 'compare_feedforward': {'activations': 'relu', 'dropout': 0.2, 'hidden_dims': 200, 'input_dim': 2048, 'num_layers': 2}, 'attend_feedforward': {'activations': 'relu', 'dropout': 0.2, 'hidden_dims': 200, 'input_dim': 1024, 'num_layers': 2}, 'text_field_embedder': {'elmo': {'do_layer_norm': False, 'type': 'elmo_token_embedder', 'dropout': 0.2, 'options_file': '/tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.options_file', 'weight_file': '/tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.weight_file'}}} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7f8405bc7748>}\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.type = decomposable_attention\n12/05/2018 21:56:42 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.models.decomposable_attention.DecomposableAttention'> from params {'initializer': [['.*linear_layers.*weight', {'type': 'xavier_normal'}], ['.*token_embedder_tokens\\\\._projection.*weight', {'type': 'xavier_normal'}]], 'aggregate_feedforward': {'activations': ['relu', 'linear'], 'dropout': [0.2, 0], 'hidden_dims': [200, 3], 'input_dim': 400, 'num_layers': 2}, 'similarity_function': {'type': 'dot_product'}, 'compare_feedforward': {'activations': 'relu', 'dropout': 0.2, 'hidden_dims': 200, 'input_dim': 2048, 'num_layers': 2}, 'attend_feedforward': {'activations': 'relu', 'dropout': 0.2, 'hidden_dims': 200, 'input_dim': 1024, 'num_layers': 2}, 'text_field_embedder': {'elmo': {'do_layer_norm': False, 'type': 'elmo_token_embedder', 'dropout': 0.2, 'options_file': '/tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.options_file', 'weight_file': '/tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.weight_file'}}} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7f8405bc7748>}\n12/05/2018 21:56:42 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.text_field_embedders.text_field_embedder.TextFieldEmbedder'> from params {'elmo': {'do_layer_norm': False, 'type': 'elmo_token_embedder', 'dropout': 0.2, 'options_file': '/tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.options_file', 'weight_file': '/tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.weight_file'}} and extras {'vocab': 
<allennlp.data.vocabulary.Vocabulary object at 0x7f8405bc7748>}\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.type = basic\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.embedder_to_indexer_map = None\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.allow_unmatched_keys = False\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.token_embedders = None\n12/05/2018 21:56:42 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.token_embedders.token_embedder.TokenEmbedder'> from params {'do_layer_norm': False, 'type': 'elmo_token_embedder', 'dropout': 0.2, 'options_file': '/tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.options_file', 'weight_file': '/tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.weight_file'} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7f8405bc7748>}\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.type = elmo_token_embedder\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.options_file = /tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.options_file\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.weight_file = /tmp/tmpdvw280lq/fta/model.text_field_embedder.elmo.weight_file\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.requires_grad = False\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.do_layer_norm = False\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.dropout = 0.2\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.namespace_to_cache = None\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.projection_dim = None\n12/05/2018 21:56:42 - INFO - allennlp.common.params - model.text_field_embedder.elmo.scalar_mix_parameters = None\n12/05/2018 21:56:42 - INFO - allennlp.modules.elmo - Initializing ELMo\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.attend_feedforward.input_dim = 1024\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.attend_feedforward.num_layers = 2\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.attend_feedforward.hidden_dims = 200\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.attend_feedforward.activations = relu\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.attend_feedforward.dropout = 0.2\n12/05/2018 21:56:56 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.similarity_functions.similarity_function.SimilarityFunction'> from params {'type': 'dot_product'} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7f8405bc7748>}\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.similarity_function.type = dot_product\n12/05/2018 21:56:56 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.similarity_functions.dot_product.DotProductSimilarity'> from params {} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7f8405bc7748>}\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.similarity_function.scale_output = False\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.compare_feedforward.input_dim = 2048\n12/05/2018 21:56:56 - INFO - allennlp.common.params - 
model.compare_feedforward.num_layers = 2\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.compare_feedforward.hidden_dims = 200\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.compare_feedforward.activations = relu\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.compare_feedforward.dropout = 0.2\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.aggregate_feedforward.input_dim = 400\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.aggregate_feedforward.num_layers = 2\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.aggregate_feedforward.hidden_dims = [200, 3]\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.aggregate_feedforward.activations = ['relu', 'linear']\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.aggregate_feedforward.dropout = [0.2, 0]\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.initializer = [['.*linear_layers.*weight', {'type': 'xavier_normal'}], ['.*token_embedder_tokens\\\\._projection.*weight', {'type': 'xavier_normal'}]]\n12/05/2018 21:56:56 - INFO - allennlp.common.params - model.initializer.list.list.type = xavier_normal\n"
],
[
"prediction = predictor.predict(\n hypothesis=\"Two women are sitting on a blanket near some rocks talking about politics.\",\n premise=\"Two women are wandering along the shore drinking iced tea.\"\n)\nprediction",
"_____no_output_____"
],
[
"type(prediction['premise_tokens'][0])",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"doc = nlp(\"I guess I am feeling kinda tired. I feel overwhelmed, a bit, maybe hungry. I dunno. I find myself wanting something, but I'm not sure what it is. I feel stressed certainly, too much to do maybe? But I'm not totally sure what I should be doing? Now it's a lot later and it's really time for me to get to bed...but a part of me wants to stay up, nonetheless\")",
"_____no_output_____"
],
[
"results = pd.DataFrame([], columns=['premise', 'hypothesis', 'entailment', 'contradiction', 'neutral', 'e+c'])\ni = 0\nfor premise in doc.sents:\n# entailment, contradiction, neutral = None\n for hypothesis in doc.sents:\n if (premise != hypothesis):\n prediction = predictor.predict(hypothesis=hypothesis.text, premise=premise.text)\n entailment, contradiction, neutral = prediction['label_probs']\n results.loc[i] = [premise.text, hypothesis.text, entailment, contradiction, neutral, (entailment + (1 - contradiction)) / 2]\n i += 1",
"_____no_output_____"
],
[
"results.sort_values(by='e+c', ascending=False).loc[results['neutral'] < .5]",
"_____no_output_____"
],
[
"hypothesis = 'I feel stressed'\n\nresults = pd.DataFrame([], columns=['premise', 'hypothesis', 'entailment', 'contradiction', 'neutral'])\ni = 0\nfor premise in doc.sents:\n prediction = predictor.predict(hypothesis=hypothesis, premise=premise.text)\n entailment, contradiction, neutral = prediction['label_probs']\n results.loc[i] = [premise.text, hypothesis, entailment, contradiction, neutral]\n i += 1",
"_____no_output_____"
],
[
"results.sort_values(by='entailment', ascending=False)",
"_____no_output_____"
],
[
"def demo(shape):\n nlp = spacy.load('en_vectors_web_lg')\n nlp.add_pipe(KerasSimilarityShim.load(nlp.path / 'similarity', nlp, shape[0]))\n\n doc1 = nlp(u'The king of France is bald.')\n doc2 = nlp(u'France has no king.')\n\n print(\"Sentence 1:\", doc1)\n print(\"Sentence 2:\", doc2)\n\n entailment_type, confidence = doc1.similarity(doc2)\n print(\"Entailment type:\", entailment_type, \"(Confidence:\", confidence, \")\")",
"_____no_output_____"
],
[
"from textacy.vsm import Vectorizer\nvectorizer = Vectorizer(\n tf_type='linear', apply_idf=True, idf_type='smooth', norm='l2',\n min_df=3, max_df=0.95, max_n_terms=100000\n)",
"_____no_output_____"
],
[
"model = textacy.tm.TopicModel('nmf', n_topics=20)\nmodel.fit",
"_____no_output_____"
],
[
"import textacy.keyterms",
"_____no_output_____"
],
[
"terms = textacy.keyterms.key_terms_from_semantic_network(doc)\nterms",
"_____no_output_____"
],
[
"terms = textacy.keyterms.sgrank(doc)\nterms",
"_____no_output_____"
],
[
"doc.text",
"_____no_output_____"
],
[
"import textacy.lexicon_methods",
"_____no_output_____"
],
[
"textacy.lexicon_methods.download_depechemood(data_dir='data')",
"12/05/2018 21:16:22 - INFO - textacy.lexicon_methods - Downloaded DepecheMood (4MB) from https://github.com/marcoguerini/DepecheMood/releases/download/v1.0/DepecheMood_V1.0.zip and wrote it to data\n"
],
[
"textacy.lexicon_methods.emotional_valence(words=[word for word in doc], dm_data_dir='data/DepecheMood_V1.0')",
"_____no_output_____"
],
[
"from event2mind_hack import load_event2mind_archive\nfrom allennlp.predictors.predictor import Predictor\n\narchive = load_event2mind_archive('data/event2mind.tar.gz')\npredictor = Predictor.from_archive(archive)\npredictor.predict(\n source=\"PersonX drops a hint\"\n)",
"12/06/2018 17:58:27 - INFO - event2mind_hack - loading archive file data/event2mind.tar.gz\n12/06/2018 17:58:27 - INFO - event2mind_hack - extracting archive file data/event2mind.tar.gz to temp dir /tmp/tmp0dlhchct\n12/06/2018 17:58:28 - INFO - allennlp.common.params - vocabulary.type = default\n12/06/2018 17:58:28 - INFO - allennlp.data.vocabulary - Loading token dictionary from /tmp/tmp0dlhchct/vocabulary.\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'event2mind_hack.Model'> from params {'embedding_dropout': 0.2, 'encoder': {'bidirectional': True, 'hidden_size': 50, 'input_size': 300, 'num_layers': 1, 'type': 'gru'}, 'max_decoding_steps': 10, 'source_embedder': {'token_embedders': {'tokens': {'embedding_dim': 300, 'trainable': False, 'type': 'embedding', 'vocab_namespace': 'source_tokens'}}}, 'target_namespace': 'target_tokens', 'type': 'event2mind'} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fccd4e752e8>}\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.type = event2mind\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'event2mind_hack.Event2Mind'> from params {'embedding_dropout': 0.2, 'encoder': {'bidirectional': True, 'hidden_size': 50, 'input_size': 300, 'num_layers': 1, 'type': 'gru'}, 'max_decoding_steps': 10, 'source_embedder': {'token_embedders': {'tokens': {'embedding_dim': 300, 'trainable': False, 'type': 'embedding', 'vocab_namespace': 'source_tokens'}}}, 'target_namespace': 'target_tokens'} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fccd4e752e8>}\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.text_field_embedders.text_field_embedder.TextFieldEmbedder'> from params {'token_embedders': {'tokens': {'embedding_dim': 300, 'trainable': False, 'type': 'embedding', 'vocab_namespace': 'source_tokens'}}} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fccd4e752e8>}\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.type = basic\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.embedder_to_indexer_map = None\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.allow_unmatched_keys = False\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.token_embedders.token_embedder.TokenEmbedder'> from params {'embedding_dim': 300, 'trainable': False, 'type': 'embedding', 'vocab_namespace': 'source_tokens'} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fccd4e752e8>}\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.type = embedding\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.num_embeddings = None\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.vocab_namespace = source_tokens\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.embedding_dim = 300\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.pretrained_file = None\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.projection_dim = None\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.trainable = False\n12/06/2018 
17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.padding_index = None\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.max_norm = None\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.norm_type = 2.0\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.scale_grad_by_freq = False\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.source_embedder.token_embedders.tokens.sparse = False\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.embedding_dropout = 0.2\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.seq2vec_encoders.seq2vec_encoder.Seq2VecEncoder'> from params {'bidirectional': True, 'hidden_size': 50, 'input_size': 300, 'num_layers': 1, 'type': 'gru'} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fccd4e752e8>}\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.encoder.type = gru\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.encoder.batch_first = True\n12/06/2018 17:58:28 - INFO - allennlp.common.params - Converting Params object to dict; logging of default values will not occur when dictionary parameters are used subsequently.\n12/06/2018 17:58:28 - INFO - allennlp.common.params - CURRENTLY DEFINED PARAMETERS: \n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.encoder.bidirectional = True\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.encoder.hidden_size = 50\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.encoder.input_size = 300\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.encoder.num_layers = 1\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.encoder.batch_first = True\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.max_decoding_steps = 10\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.beam_size = 10\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.target_names = None\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.target_namespace = target_tokens\n12/06/2018 17:58:28 - INFO - allennlp.common.params - model.target_embedding_dim = None\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.data.dataset_readers.dataset_reader.DatasetReader'> from params {'source_token_indexers': {'tokens': {'namespace': 'source_tokens', 'type': 'single_id'}}, 'source_tokenizer': {'type': 'word', 'word_splitter': {'type': 'spacy'}}, 'target_token_indexers': {'tokens': {'namespace': 'target_tokens'}}, 'target_tokenizer': {'type': 'word'}, 'type': 'event2mind'} and extras {}\n12/06/2018 17:58:28 - INFO - allennlp.common.params - dataset_reader.type = event2mind\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.data.dataset_readers.event2mind.Event2MindDatasetReader'> from params {'source_token_indexers': {'tokens': {'namespace': 'source_tokens', 'type': 'single_id'}}, 'source_tokenizer': {'type': 'word', 'word_splitter': {'type': 'spacy'}}, 'target_token_indexers': {'tokens': {'namespace': 'target_tokens'}}, 'target_tokenizer': {'type': 'word'}} and extras {}\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.data.tokenizers.tokenizer.Tokenizer'> from params {'type': 'word', 'word_splitter': {'type': 'spacy'}} and extras {}\n12/06/2018 17:58:28 - 
INFO - allennlp.common.params - dataset_reader.source_tokenizer.type = word\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.data.tokenizers.word_tokenizer.WordTokenizer'> from params {'word_splitter': {'type': 'spacy'}} and extras {}\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.data.tokenizers.word_splitter.WordSplitter'> from params {'type': 'spacy'} and extras {}\n12/06/2018 17:58:28 - INFO - allennlp.common.params - dataset_reader.source_tokenizer.word_splitter.type = spacy\n12/06/2018 17:58:28 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.data.tokenizers.word_splitter.SpacyWordSplitter'> from params {} and extras {}\n12/06/2018 17:58:28 - INFO - allennlp.common.params - dataset_reader.source_tokenizer.word_splitter.language = en_core_web_sm\n12/06/2018 17:58:28 - INFO - allennlp.common.params - dataset_reader.source_tokenizer.word_splitter.pos_tags = False\n12/06/2018 17:58:28 - INFO - allennlp.common.params - dataset_reader.source_tokenizer.word_splitter.parse = False\n"
],
[
"import math\nmath.exp(-1)",
"_____no_output_____"
],
[
"import pandas as pd\nimport math",
"_____no_output_____"
],
[
"xintent = pd.DataFrame({\n 'tokens': prediction['xintent_top_k_predicted_tokens'],\n 'p_log': prediction['xintent_top_k_log_probabilities']\n})\nxintent['p'] = xintent['p_log'].apply(math.exp)\nxintent.sort_values(by='p', ascending=False)",
"_____no_output_____"
],
[
"xreact = pd.DataFrame({\n 'tokens': prediction['xreact_top_k_predicted_tokens'],\n 'p_log': prediction['xreact_top_k_log_probabilities']\n})\nxreact['p'] = xreact['p_log'].apply(math.exp)\nxreact.sort_values(by='p', ascending=False)",
"_____no_output_____"
],
[
"oreact = pd.DataFrame({\n 'tokens': prediction['oreact_top_k_predicted_tokens'],\n 'p_log': prediction['oreact_top_k_log_probabilities']\n})\noreact['p'] = oreact['p_log'].apply(math.exp)\noreact.sort_values(by='p', ascending=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf7b69e5a6aeb34f6dcbec08df4787d9208640f | 322,116 | ipynb | Jupyter Notebook | Yeast_semi-supervised.ipynb | anihamde/VAE_protein_function | 172773da234953e24a73a1e75265fa230bb57791 | [
"MIT"
] | null | null | null | Yeast_semi-supervised.ipynb | anihamde/VAE_protein_function | 172773da234953e24a73a1e75265fa230bb57791 | [
"MIT"
] | null | null | null | Yeast_semi-supervised.ipynb | anihamde/VAE_protein_function | 172773da234953e24a73a1e75265fa230bb57791 | [
"MIT"
] | null | null | null | 98.990781 | 72,608 | 0.817038 | [
[
[
"### Using a Variational Auto-encoder to predict protein fitness from evolutionary data\n\nJuly 20, 2017\n### Sam Sinai and Eric Kelsic\n\n\n## For the blog post associated with this notebook see [this post](https://samsinai.github.io/jekyll/update/2017/08/14/Using-a-Variational-Autoencoder-to-predict-protein-function.html). \n\n\nThis notebook it organized in 3 sections. In section 1 we show our workflow for pre-processing the biological data. We then train the model on the alignment data in section 2. In section 3 we compare the predictions of the model on the [PABP yeast](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3851721/) dataset. In section 4 we report the results from analyzing multiple other datasets. Finally we pose some questions with regards to improving the model for interested researcher.",
"_____no_output_____"
]
],
[
[
"# Generic imports\nfrom __future__ import print_function\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport math,random,re\nimport time\nfrom sklearn.decomposition import PCA\n\nfrom Bio import SeqIO",
"_____no_output_____"
],
[
"#Machine learning/Stats imports \nfrom scipy.stats import norm\nfrom scipy.stats import spearmanr,pearsonr\nfrom sklearn.preprocessing import normalize\nfrom sklearn.model_selection import train_test_split\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport torch.distributions as D",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestRegressor\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"from sklearn.ensemble import GradientBoostingRegressor",
"_____no_output_____"
],
[
"from models import *",
"_____no_output_____"
],
[
"import umap",
"_____no_output_____"
],
[
"import warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)",
"_____no_output_____"
]
],
[
[
"## 1. Data pre-processing\n\nDefining the alphabet that is used for Amino-Acids throughout.",
"_____no_output_____"
]
],
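[
[
"# A minimal, hypothetical stand-in for translate_string_to_one_hot (the real helper\n# lives in helper_tools and may differ) -- shown only to illustrate the encoding used below:\n# each sequence becomes a (len(alphabet) x len(sequence)) one-hot matrix.\nimport numpy as np\ndef one_hot_sketch(seq, alphabet):\n    out = np.zeros((len(alphabet), len(seq)))\n    for j, ch in enumerate(seq):\n        out[alphabet.index(ch), j] = 1.0\n    return out\n# e.g. one_hot_sketch(\"ACD\", list(\"ACDEFGHIKLMNPQRSTVWY/\")).shape -> (21, 3)",
"_____no_output_____"
]
],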
[
[
"%reload_ext autoreload\n%autoreload 1\nfrom helper_tools import *\nfrom helper_tools_for_plotting import *",
"_____no_output_____"
],
[
"n_data = 50000\nvarlen = True\nunaligned = True",
"_____no_output_____"
],
[
"data=pdataframe_from_alignment_file(\"PABP_YEAST_hmmerbit_plmc_n5_m30_f50_t0.2_r115-210_id100_b48.a2m\",n_data)\nprint (\"number of data points: \",len(data))\ndata_set_size=len(data)\ndata.head()",
"number of data points: 50000\n"
],
[
"print (\"length of sequence:\", len(data.iloc[0][\"sequence\"]))#, len(data.iloc[0][\"seq\"]))\nprint (\"sample sequence: \", data.iloc[0][\"sequence\"])",
"length of sequence: 96\nsample sequence: qrdpslrkKGSGNIFIKNLHPDIDNKALYDTFSVFGDILSSKIATDENGKSKGFGFVHFEEEGAAKEAIDALNGMLLNGQEIYVAPHLSRkerdsq\n"
],
[
"indices=index_of_non_lower_case_dot(data.iloc[0][\"sequence\"])\ndata[\"seq\"]=list(map(prune_seq,data[\"sequence\"],[varlen]*len(data[\"sequence\"]),[unaligned]*len(data[\"sequence\"])))\ndata.head()",
"_____no_output_____"
],
[
"PRUNED_SEQ_LENGTH=max( [len(data.iloc[i][\"seq\"]) for i in range(len(data))] )\nprint (\"pruned sequence length:\", PRUNED_SEQ_LENGTH)",
"pruned sequence length: 96\n"
],
[
"def add_stop_token(seq):\n num_stops = PRUNED_SEQ_LENGTH-len(seq)\n return seq+num_stops*'/'",
"_____no_output_____"
],
[
"data['seq'] = list(map(add_stop_token,data['seq']))",
"_____no_output_____"
],
[
"uniquechars = set()\nfor i in data['seq']:\n uniquechars = uniquechars.union(i)",
"_____no_output_____"
],
[
"#Invariants\n# ORDER_KEY=\"XILVAGMFYWEDQNHCRKSTPBZ-\"[::-1]\n# ORDER_LIST=list(ORDER_KEY)\nORDER_LIST = list(uniquechars)\nORDER_LIST = sorted(ORDER_LIST,reverse=True)",
"_____no_output_____"
],
[
"with open (\"PABP_YEAST_hmmerbit_t0.2_r50000.reweight\",\"rb\") as to_read:\n new_weights=np.load(to_read)\n\n#new_weights=reweight_sequences(data[\"seq\"],0.1)\nlen(new_weights),new_weights[:10]",
"_____no_output_____"
],
[
"#Encode training data in one_hot vectors\ntraining_data_one_hot=[]\nlabels=[]\nfor i, row in data.iterrows():\n training_data_one_hot.append(translate_string_to_one_hot(row[\"seq\"],ORDER_LIST))\nprint (len(training_data_one_hot),len(training_data_one_hot[0]),len(training_data_one_hot[0][0]))\n#plt.imshow(training_data_one_hot[0],cmap=\"Greys\")\ntraining_data=np.array([np.array(list(sample.T.flatten())) for sample in training_data_one_hot])\n# training_data=np.array([np.array(list(sample.flatten())).T for sample in training_data_one_hot])\nprint(training_data.shape)",
"50000 22 96\n(50000, 2112)\n"
],
[
"np.argmax(training_data_one_hot[0],axis=0)",
"_____no_output_____"
],
[
"exp_data_full=pd.read_csv(\n \"PABP_YEAST_Fields2013-singles.csv\", sep=\";\", comment=\"#\"\n)\nprint (\"number of mutants: \",len(exp_data_full))\nexp_data_full.head()\nexp_data_full.iloc[87]",
"number of mutants: 1188\n"
],
[
"exp_data_full.corr(method=\"spearman\")",
"_____no_output_____"
],
[
"OFFSET=117\n#Deciding offset requires investigating the dataset and alignment.\nexp_data_singles=pd.DataFrame(columns=exp_data_full.columns)\n#decide starting index depending on how the file is \"headered\"\nfor i,row in exp_data_full[1:].iterrows():\n pos=re.split(r'(\\d+)', row.mutant) \n if int(pos[1])-OFFSET in indices:\n exp_data_singles=exp_data_singles.append(row)\nexp_data_singles=exp_data_singles.reset_index()\ntarget_values_singles=list(exp_data_singles[\"linear\"])\nexp_data_singles.head(10) ",
"_____no_output_____"
],
[
"mutation_data=[re.split(r'(\\d+)', s) for s in exp_data_singles.mutant]\nwt_sequence=data.iloc[0].seq\nmutants=mutate_single(wt_sequence,mutation_data,offset=0,index=3) #note that you change index to 1\n\n#sanity checks\nprint (len(mutants),len(exp_data_singles))\n#the mutant should be in the correct place\nprint (list(zip(wt_sequence,mutants[3]))[:10])",
"1187 1187\n[('Q', 'Q'), ('R', 'R'), ('D', 'D'), ('P', 'N'), ('S', 'S'), ('L', 'L'), ('R', 'R'), ('K', 'K'), ('K', 'K'), ('G', 'G')]\n"
],
[
"#Test data with wt at 0 index\none_hot_mutants=[]\nmutants_plus=[data.iloc[0][\"seq\"]]+mutants\nfor mutant in mutants_plus:\n one_hot_mutants.append(translate_string_to_one_hot(\"\".join(mutant),ORDER_LIST))\n\ntest_data_plus=np.array([np.array(list(sample.flatten())).T for sample in one_hot_mutants])",
"_____no_output_____"
],
[
"exp_data_full=pd.read_csv(\n \"PABP_YEAST_Fields2013-doubles.csv\", sep=\";\", comment=\"#\"\n)\nprint (\"number of mutants: \",len(exp_data_full))\nexp_data_full.head()\nexp_data_full.iloc[0]",
"number of mutants: 36522\n"
],
[
"exp_data_full.corr(method=\"spearman\")",
"_____no_output_____"
],
[
"OFFSET=160\n#Deciding offset requires investigating the dataset and alignment.\nexp_data_doubles=pd.DataFrame(columns=exp_data_full.columns)\n#decide starting index depending on how the file is \"headered\"\nfor i,row in exp_data_full[0:].iterrows():\n pos=re.split(r'(\\d+)', row.mutant)\n if int(pos[1])-OFFSET in indices and int(pos[3])-OFFSET in indices:\n exp_data_doubles=exp_data_doubles.append(row)\nexp_data_doubles=exp_data_doubles.reset_index()\nexp_data_doubles.head(5)",
"_____no_output_____"
],
[
"target_values_doubles=list(exp_data_doubles[\"XY_Enrichment_score\"])\nexp_data_doubles.corr(method=\"spearman\")",
"_____no_output_____"
],
[
"mutation_data1=[re.split(r'(\\d+)', s.split(\",\")[0]) for s in exp_data_doubles.mutant]\nmutation_data2=[re.split(r'(\\d+)', s.split(\",\")[1]) for s in exp_data_doubles.mutant]\nwt_sequence=data.iloc[0].seq\n\nmutants_double=mutate_double(wt_sequence,mutation_data1,mutation_data2,offset=0,index=46)\n\n#sanity checks\nprint (len(mutants_double),len(exp_data_doubles))\n#the mutant should be in the correct place\nprint (list(zip(wt_sequence,mutants_double[2]))[40:50])",
"13876 13876\n[('S', 'S'), ('K', 'K'), ('I', 'I'), ('A', 'A'), ('T', 'T'), ('D', 'D'), ('E', 'A'), ('N', 'I'), ('G', 'G'), ('K', 'K')]\n"
],
[
"#Test data with wt at 0 index\none_hot_mutants=[]\nmutants_plus=[data.iloc[0][\"seq\"]]+mutants_double\nfor mutant in mutants_plus:\n one_hot_mutants.append(translate_string_to_one_hot(\"\".join(mutant),ORDER_LIST))\n\ntest_data_doubles_plus=np.array([np.array(list(sample.flatten())).T for sample in one_hot_mutants])",
"_____no_output_____"
],
[
"training_data[4].shape",
"_____no_output_____"
],
[
"all_test_data=np.vstack([test_data_plus,test_data_doubles_plus[1:]])\nall_test_data_flattened=np.array([np.array(list(sample.flatten())).T for sample in all_test_data])",
"_____no_output_____"
]
],
[
[
"## Basic Functions",
"_____no_output_____"
]
],
[
[
"def build_PCA(data, n_components):\n pca = PCA(n_components=n_components)\n pca.fit(data)\n return pca\n\ndef feed_PCA(pca, data):\n return pca.transform(data)\n\ndef split_data(xdata, ydata, train_size):\n x_train, x_test, y_train, y_test = train_test_split(xdata, ydata, train_size=train_size, random_state=10)\n return (x_train, x_test, y_train, y_test)\n\ndef train_test(x_train, x_test, y_train, y_test, model_type=\"reg\"):\n if (model_type == \"reg\"):\n reg = GradientBoostingRegressor(max_depth=6)\n reg.fit(x_train, y_train)\n return (spearmanr(reg.predict(x_test), y_test))\n# return (reg.score(x_test, y_test))\n \n elif (model_type == \"clf\"):\n clf = RandomForestClassifier()\n clf.fit(x_train, y_train > 0.5)\n return (clf.score(x_test, y_test > 0.5))\n \n return None \n\ndef augmented_learning(data, additional_data, labels, n_components, train_size):\n x_train, x_test, y_train, y_test = split_data(data, labels, train_size=train_size)\n \n augmented_data = np.concatenate([x_train, additional_data])\n pca = build_PCA(augmented_data, n_components=n_components)\n transformed_train = feed_PCA(pca, x_train)\n transformed_test = feed_PCA(pca, x_test)\n\n \n reg_score = train_test(transformed_train, transformed_test, y_train, y_test, model_type=\"reg\")\n clf_score = train_test(transformed_train, transformed_test, y_train, y_test, model_type=\"clf\")\n \n return (reg_score, clf_score)\n\ndef normal_learning(data, labels, train_size):\n x_train, x_test, y_train, y_test = split_data(data, labels, train_size=train_size)\n\n reg_score = train_test(x_train, x_test, y_train, y_test, model_type=\"reg\")\n clf_score = train_test(x_train, x_test, y_train, y_test, model_type=\"clf\")\n \n return (reg_score, clf_score)\n\ndef PCA_learning(data, labels, n_components, train_size):\n x_train, x_test, y_train, y_test = split_data(data, labels, train_size=train_size)\n \n pca = build_PCA(x_train, n_components=n_components)\n transformed_train = feed_PCA(pca, x_train)\n transformed_test = feed_PCA(pca, x_test)\n \n reg_score = train_test(transformed_train, transformed_test, y_train, y_test, model_type=\"reg\")\n clf_score = train_test(transformed_train, transformed_test, y_train, y_test, model_type=\"clf\")\n \n return (reg_score, clf_score)",
"_____no_output_____"
]
],
[
[
"## Baseline One-hot model ",
"_____no_output_____"
]
],
[
[
"# mutants, target_values_singles\n# mutants_double, target_values_doubles",
"_____no_output_____"
],
[
"plt.hist(target_values_singles)\nplt.title(\"Singles dataset fitness distribution\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(target_values_doubles)\nplt.title(\"Doubles dataset fitness distribution\")\nplt.show()",
"_____no_output_____"
],
[
"accuracies = normal_learning(test_data_plus[1:], np.array(target_values_singles), train_size=0.7)",
"_____no_output_____"
],
[
"accuracies[0].correlation, accuracies[1]",
"_____no_output_____"
],
[
"accuracies_doubles = normal_learning(test_data_doubles_plus[1:], np.array(target_values_doubles), train_size=0.7)",
"_____no_output_____"
],
[
"accuracies_doubles[0].correlation, accuracies_doubles[1]",
"_____no_output_____"
]
],
[
[
"## Data-Augmentation in low-data settings",
"_____no_output_____"
],
[
"### Singles",
"_____no_output_____"
]
],
[
[
"training_sizes = [10, 25, 50, 75, 100, 300, 500, 1000] #, 2000, 5000, 10000]\nn_components = 30\n\nreg_score_normal = []\nreg_score_augmented = []\nreg_score_PCA = []\nclf_score_normal = []\nclf_score_augmented = []\n\nfor size in training_sizes:\n aug_run = augmented_learning(test_data_plus[1:], training_data, np.array(target_values_singles), n_components, size)\n normal_run = normal_learning(test_data_plus[1:], np.array(target_values_singles), size)\n pca_run = PCA_learning(test_data_plus[1:], np.array(target_values_singles), min(n_components,size), size)\n reg_score_normal.append(normal_run[0])\n reg_score_augmented.append(aug_run[0])\n reg_score_PCA.append(pca_run[0])\n# clf_score_normal.append(normal_run[1])\n# clf_score_augmented.append(aug_run[1])",
"_____no_output_____"
],
[
"[i[0] for i in reg_score_normal]",
"_____no_output_____"
],
[
"reg_score_normal[0][0]",
"_____no_output_____"
],
[
"plt.scatter(training_sizes, [i[0] for i in reg_score_normal], label=\"One-hot\")\nplt.scatter(training_sizes, [i[0] for i in reg_score_augmented], label=\"PCA Augmented\")\nplt.scatter(training_sizes, [i[0] for i in reg_score_PCA], label=\"PCA without augmentation\")\n\nplt.xlabel(\"Training size\")\nplt.title(\"PCA = 30, Regression Accuracy\")\nplt.legend()\nplt.savefig(\"PCA_30_Reg_singles.png\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Doubles",
"_____no_output_____"
]
],
[
[
"training_sizes = [10, 25, 50, 75, 100, 300, 500, 1000] #, 2000, 5000, 10000]\nn_components = 100\n\nreg_score_normal = []\nreg_score_augmented = []\nreg_score_PCA = []\nclf_score_normal = []\nclf_score_augmented = []\n\nfor size in training_sizes:\n aug_run = augmented_learning(test_data_doubles_plus[1:], training_data, np.array(target_values_doubles), n_components, size)\n normal_run = normal_learning(test_data_doubles_plus[1:], np.array(target_values_doubles), size)\n pca_run = PCA_learning(test_data_doubles_plus[1:], np.array(target_values_doubles), min(n_components,size), size)\n reg_score_normal.append(normal_run[0])\n reg_score_augmented.append(aug_run[0])\n reg_score_PCA.append(pca_run[0])\n# clf_score_normal.append(normal_run[1])\n# clf_score_augmented.append(aug_run[1])",
"_____no_output_____"
],
[
"[i[0] for i in reg_score_normal]",
"_____no_output_____"
],
[
"reg_score_normal[0][0]",
"_____no_output_____"
],
[
"plt.scatter(training_sizes, [i[0] for i in reg_score_normal], label=\"One-hot\")\nplt.scatter(training_sizes, [i[0] for i in reg_score_augmented], label=\"PCA Augmented\")\nplt.scatter(training_sizes, [i[0] for i in reg_score_PCA], label=\"PCA without augmentation\")\n\nplt.xlabel(\"Training size\")\nplt.title(\"PCA = 30, Regression Accuracy\")\nplt.legend()\nplt.savefig(\"PCA_30_Reg_doubles.png\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## UMAP Experiments",
"_____no_output_____"
]
],
[
[
"x_train, x_test, y_train, y_test = split_data(test_data_plus[1:], np.array(target_values_singles), train_size=1000)",
"_____no_output_____"
],
[
"pooled_data = np.concatenate([training_data[:1000], x_train])",
"_____no_output_____"
],
[
"UMAP_embedding = umap.UMAP(n_neighbors=15,\n min_dist=0.05,\n n_components=2,\n metric='manhattan').fit(pooled_data)",
"/home/anirudh_suresh/.local/lib/python3.5/site-packages/umap/spectral.py:229: UserWarning: Embedding a total of 11 separate connected components using meta-embedding (experimental)\n n_components\n/home/anirudh_suresh/.local/lib/python3.5/site-packages/sklearn/manifold/spectral_embedding_.py:237: UserWarning: Graph is not fully connected, spectral embedding may not work as expected.\n warnings.warn(\"Graph is not fully connected, spectral embedding\"\n"
],
[
"xtrain_UMAP = UMAP_embedding.transform(x_train)\nxtest_UMAP = UMAP_embedding.transform(x_test)",
"_____no_output_____"
],
[
"pooled_UMAP = UMAP_embedding.transform(pooled_data)",
"_____no_output_____"
],
[
"plt.scatter(pooled_UMAP[:, 0], pooled_UMAP[:, 1])\nplt.show()",
"_____no_output_____"
],
[
"Reg = GradientBoostingRegressor()\nReg.fit(xtrain_UMAP, y_train)",
"_____no_output_____"
],
[
"Reg.score(xtest_UMAP, y_test)",
"_____no_output_____"
],
[
"UMAP_sarkisyan = UMAP_embedding.transform(sarkisyan_data)",
"_____no_output_____"
],
[
"# UMAP_sarkisyan = UMAP_embedding.transform(sarkisyan_data)\n\nX_train, X_test, y_train, y_test = train_test_split(UMAP_sarkisyan, sarkisyan['function'], \n test_size = 0.3, random_state=10)\n\nnaiveClf = RandomForestClassifier()\nnaiveClf.fit(X_train, y_train)\nclf_score = naiveClf.score(X_test, y_test)\n\nX_train, X_test, y_train, y_test = train_test_split(UMAP_sarkisyan, sarkisyan['quantitative_function'], \n test_size = 0.3, random_state=10)\nnaiveReg = RandomForestRegressor()\nnaiveReg.fit(X_train, y_train)\nreg_score = naiveReg.score(X_test, y_test)",
"//anaconda/envs/ML_env/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n//anaconda/envs/ML_env/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n"
],
[
"clf_score",
"_____no_output_____"
],
[
"reg_score",
"_____no_output_____"
],
[
"plt.scatter(UMAP_sarkisyan[:,0][:10000], UMAP_sarkisyan[:, 1][:10000])\nplt.show()",
"_____no_output_____"
],
[
"UMAP_sarkisyan_20 = UMAP_sarkisyan.copy()\northolog_umap_20 = a.copy()",
"_____no_output_____"
]
],
[
[
"This concludes the pre-processing we need to do on the data.\n\n## 2. Training the model\nWe now move on to define our neural network. This is essentially a vanilla VAE in keras (with some optimization on hyperparameters). For optimization purposes we define a callback function that reports the predictive power of the model in the end of each epoch. Note that while this passes the -test data- through the model, it is kosher because we never pass in the values we are actually interested in and the network is not in \"training phase\", i.e. no weights are updated during this pass. ",
"_____no_output_____"
]
],
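[
[
"# The callback below scores each mutant with a log-probability ratio under the decoder,\n# fitness(mutant) ~= log P_decoder(x_mutant) - log P_decoder(x_wt).\n# A minimal sketch, assuming compute_log_probability (from helper_tools) sums\n# per-position log-probabilities of the one-hot sequence under the decoded distribution:\nimport numpy as np\ndef log_prob_sketch(one_hot_seq, decoded_probs, eps=1e-12):\n    # both arrays: (alphabet_size, seq_length); columns of decoded_probs sum to 1\n    return float(np.sum(np.log(np.sum(one_hot_seq * decoded_probs, axis=0) + eps)))",
"_____no_output_____"
]
],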
[
[
"class rho_vs_mutants():\n def __init__(self,mutants,test_set_size,aa_size,sequence_size):\n self.mutants=mutants\n self.sample_size=test_set_size\n self.aa_size=aa_size\n self.sequence_size=sequence_size\n self.scores=[]\n self.count_batch=0\n def on_train_begin(self, logs={}):\n self.losses = []\n def on_batch_end(self, batch, logs={}):\n self.losses.append(logs.get('loss'))\n #This allows us to track the \"progress\" of the model on different epochs\n def on_epoch_end(self,model,batch,logs):\n x_decoded=model(test_data_plus[0:self.sample_size],batch_size=batch_size)\n digit = x_decoded[0].reshape(self.aa_size,self.sequence_size)\n digit_wt = normalize(digit,axis=0, norm='l1')\n wt_prob=compute_log_probability(digit,digit_wt)\n fitnesses=[]\n for sample in range(1,self.sample_size):\n digit = x_decoded[sample].reshape(self.aa_size,self.sequence_size)\n digit = normalize(digit,axis=0, norm='l1')\n fitness=compute_log_probability(test_data_plus[sample].reshape(self.aa_size,self.sequence_size),digit)-wt_prob\n fitnesses.append(fitness)\n print (\",\"+str(spearmanr(fitnesses,target_values_singles[:self.sample_size-1])))\n self.scores.append(spearmanr(fitnesses,target_values_singles[:self.sample_size-1])[0])",
"_____no_output_____"
]
],
[
[
"Now we are ready to specify the network architecture, this is adapted from [here](https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py).",
"_____no_output_____"
]
],
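[
[
"# The KL term in vae_loss below is the closed form of KL(N(mu, sigma^2) || N(0, 1)),\n#   -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2),\n# assuming a standard-normal prior. A quick sanity check against torch.distributions:\nimport torch\nimport torch.distributions as D\nmu = torch.tensor([0.3, -0.1])\nlog_var = torch.tensor([0.2, -0.5])\nclosed_form = -0.5 * torch.sum(1 + log_var - mu**2 - torch.exp(log_var))\nreference = D.kl_divergence(D.Normal(mu, torch.exp(0.5 * log_var)),\n                            D.Normal(torch.zeros(2), torch.ones(2))).sum()\nprint(closed_form.item(), reference.item())  # the two numbers agree",
"_____no_output_____"
]
],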
[
[
"# torch.sum(1 + model.z_log_var - (model.z_mean)**2 - torch.exp(model.z_log_var),-1)",
"_____no_output_____"
],
[
"PRUNED_SEQ_LENGTH",
"_____no_output_____"
],
[
"batch_size = 20\noriginal_dim=len(ORDER_LIST)*PRUNED_SEQ_LENGTH\noutput_dim=len(ORDER_LIST)*PRUNED_SEQ_LENGTH\nlatent_dim = 2\nintermediate_dim=250\nnb_epoch = 10\nepsilon_std = 1.0\nnp.random.seed(42) \n\nloss1 = nn.CrossEntropyLoss()\n\ndef vae_loss(x_true, x_decoded_mean, z_mean, z_log_var):\n xent_loss = original_dim * loss1(x_decoded_mean, x_true)\n kl_loss = -0.5 * torch.sum(1 + z_log_var - (z_mean)**2 - torch.exp(z_log_var))\n# print (\"xent loss: \", xent_loss)\n# print (\"KL loss: \", kl_loss)\n return (xent_loss + kl_loss), xent_loss, kl_loss",
"_____no_output_____"
]
],
[
[
"And run it through our training data.",
"_____no_output_____"
]
],
[
[
"training_size = 50000 #so batchingw orks\nx_train=training_data[:training_size] #this needs to be divisible by batch size and less than or equal to dataset size\nx_train = x_train.astype('float32')\nx_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))\n",
"_____no_output_____"
],
[
"vae_type = 'full'",
"_____no_output_____"
],
[
"if vae_type == 'full':\n print (\"training on full\")\n univ_dropout = [0.2]*3\n dropout_enc = univ_dropout\n dropout_dec = univ_dropout\n\n layers_enc = nn.ModuleList([nn.Linear(original_dim,intermediate_dim),nn.Dropout(dropout_enc[0]),nn.ELU()])\n for i in range(2):\n layers_enc.append(nn.Linear(intermediate_dim,intermediate_dim))\n layers_enc.append(nn.Dropout(dropout_enc[i+1]))\n layers_enc.append(nn.ELU())\n\n layers_dec = nn.ModuleList([nn.Linear(latent_dim,intermediate_dim),nn.Dropout(dropout_dec[0]),nn.ELU()])\n for i in range(2):\n layers_dec.append(nn.Linear(intermediate_dim,intermediate_dim))\n layers_dec.append(nn.Dropout(dropout_dec[i+1]))\n layers_dec.append(nn.ELU())\n\n layers_dec.append(nn.Linear(intermediate_dim,output_dim))\n\n layers_ae = nn.ModuleList([nn.Linear(intermediate_dim,latent_dim),nn.Linear(intermediate_dim,latent_dim)])\nelif vae_type == 'conv':\n out_conv_enc = [50,100]\n kernels_enc = [3,5]\n dilations_enc = [1,3]\n maxpools_enc = [4,3]\n paddings_enc = [(5,5,0,0)]\n \n out_lin_enc = [100,500]\n dropout_enc = [0.2,0.2]\n \n out_lin_dec = [100,150]\n dropout_dec = [0.2,0.2]\n \n layers_enc_pre_view = nn.ModuleList([nn.Conv1d(len(ORDER_LIST),out_conv_enc[0],kernels_enc[0],stride=1,dilation=dilations_enc[0]),\n nn.ELU(),\n nn.MaxPool1d(maxpools_enc[0],padding=0),\n nn.ZeroPad2d(paddings_enc[0]),\n nn.Conv1d(out_conv_enc[0],out_conv_enc[1],kernels_enc[1],stride=1,dilation=dilations_enc[1]),\n nn.ELU(),\n# nn.MaxPool1d(4,padding=0),\n# nn.ZeroPad2d((5,5,0,0)),\n# nn.Conv1d(out_conv_enc[1],out_conv_enc[2],kernels_enc[2],stride=1,dilation=dilations_enc[2]),\n# nn.ELU(),\n nn.MaxPool1d(maxpools_enc[1],padding=0)])\n \n inp_len = PRUNED_SEQ_LENGTH\n paddings_enc.append((0,0,0,0))\n for i in range(len(out_conv_enc)):\n inp_len = conv_size_func(inp_len,dilations_enc[i],kernels_enc[i])\n inp_len = inp_len//maxpools_enc[i]\n inp_len += (paddings_enc[i][0]+paddings_enc[i][1])\n \n enc_view = inp_len*out_conv_enc[-1]\n print('post-convolutional size is ', enc_view)\n \n layers_enc_post_view = nn.ModuleList([nn.Linear(enc_view,out_lin_enc[0]),\n nn.Dropout(dropout_enc[0]),\n nn.ELU(),\n nn.Linear(out_lin_enc[0],out_lin_enc[1]),\n nn.Dropout(dropout_enc[1]),\n nn.ELU()])\n \n layers_dec = nn.ModuleList([nn.Linear(latent_dim,out_lin_dec[0]),\n nn.Dropout(dropout_dec[0]),\n nn.ELU(),\n nn.Linear(out_lin_dec[0],out_lin_dec[1]),\n nn.Dropout(dropout_dec[1]),\n nn.ELU(),\n nn.Linear(out_lin_dec[1],output_dim)])\n \n layers_ae = nn.ModuleList([nn.Linear(out_lin_enc[-1],latent_dim),nn.Linear(out_lin_enc[-1],latent_dim)])\nelif vae_type == 'rec':\n univ_dropout = [0.2]*2\n dropout_enc = univ_dropout\n dropout_dec = univ_dropout\n hid_size = [20,10]\n dec_lin = False\n \n num_layers = 2\n num_layers_dec = 2\n bid = True\n num_dirs = 2 if bid else 1\n \n \n layers_enc = nn.ModuleList([nn.RNN(len(ORDER_LIST),hid_size[0],num_layers=num_layers,batch_first=True,dropout=univ_dropout[0],bidirectional=bid)])\n\n\n if dec_lin:\n layers_post_rec_enc = nn.ModuleList([nn.Linear(164,intermediate_dim),\n nn.Dropout(dropout_enc[0]),\n nn.ELU(),\n nn.Linear(intermediate_dim,intermediate_dim),\n nn.Dropout(dropout_enc[1]),\n nn.ELU()]) # for now, not being used in rec model\n\n\n # layers_pre_rec_dec = nn.ModuleList([nn.Linear(latent_dim,100),\n # nn.Dropout(dropout_dec[0]),\n # nn.ELU()])\n # # 25 below bc bidirectional 2 layers means we have to divide 100 by 2*2\n # layers_dec = nn.ModuleList([nn.RNN(50,25,num_layers=2,batch_first=True,dropout=0.2,bidirectional=True)])\n # layers_post_rec_dec = 
nn.ModuleList([nn.Linear(25*2,len(ORDER_LIST))])\n\n # layers_ae = nn.ModuleList([nn.Linear(intermediate_dim,latent_dim),nn.Linear(intermediate_dim,latent_dim)])\n layers_dec = nn.ModuleList([nn.Linear(latent_dim,intermediate_dim),\n nn.Dropout(.2),\n nn.ELU(),\n nn.Linear(intermediate_dim,intermediate_dim*2),\n nn.Dropout(.2),\n nn.ELU(),\n nn.Linear(intermediate_dim*2,output_dim)])\n \n layers_dec_post_rec = 0\n \n layers_ae = nn.ModuleList([nn.Linear(intermediate_dim,latent_dim),nn.Linear(intermediate_dim,latent_dim)])\n \n else: # dec_lin = False\n layers_post_rec_enc = 0\n \n layers_dec = nn.ModuleList([nn.Linear(latent_dim,hid_size[1]),nn.RNN(len(ORDER_LIST),hid_size[1],num_layers=num_layers_dec,batch_first=True,dropout=univ_dropout[1],bidirectional=bid)])\n \n layers_dec_post_rec = nn.ModuleList([nn.Linear(hid_size[1]*num_dirs,len(ORDER_LIST))])\n \n layers_ae = nn.ModuleList([nn.Linear(hid_size[0],latent_dim),nn.Linear(hid_size[0],latent_dim)])\n \n ",
"training on full\n"
],
[
"losses_train = []\nlosses_test = []\naccuracies_train = []\naccuracies_test = []\nxents_train = []\nxents_test = []\nkls_train = []\nkls_test = []\n\nif vae_type == 'full':\n print (\"training full\")\n model = VAE(layers_enc,layers_ae,layers_dec)\n\n prams = list(model.parameters())\n\n optimizer = torch.optim.Adam(prams, lr = 0.001)\n\n x_train_data, x_val_data = train_test_split(x_train, test_size = 0.1)\n\n ins_train = x_train_data.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_train = torch.Tensor(ins_train)\n ins_train = torch.argmax(ins_train,1)\n\n ins_val = x_val_data.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_val = torch.Tensor(ins_val)\n ins_val = torch.argmax(ins_val,1)\n\n for epoch in range(nb_epoch):\n model.train()\n\n train = np.random.permutation(x_train_data)\n train = train.reshape(-1,batch_size,len(ORDER_LIST)*PRUNED_SEQ_LENGTH) # 1968)\n\n train = torch.Tensor(train)\n\n \n \n for batch in train:\n out = model(batch)\n\n batch = batch.reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n batch = torch.argmax(batch,1)\n out = out.reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n loss,_,_ = vae_loss(batch,out,model.z_mean,model.z_log_var)\n \n optimizer.zero_grad()\n loss.backward() \n optimizer.step()\n \n model.eval()\n\n out_train = model(torch.Tensor(x_train_data))\n out_train = torch.Tensor(out_train)\n out_train = out_train.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_train = torch.argmax(out_train,dim=1)\n bool_train = (classpreds_train==ins_train)\n class_acc_train = bool_train.sum().item()/bool_train.shape[0]\n\n out_val = model(torch.Tensor(x_val_data))\n out_val = torch.Tensor(out_val)\n out_val = out_val.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_val = torch.argmax(out_val,dim=1)\n bool_val = (classpreds_val==ins_val)\n class_acc_val = bool_val.sum().item()/bool_val.shape[0]\n\n loss_train,_,_ = vae_loss(ins_train,out_train,model.z_mean,model.z_log_var)\n loss_val,_,_ = vae_loss(ins_val,out_val,model.z_mean,model.z_log_var)\n \n losses_train.append(loss_train)\n losses_test.append(loss_val)\n accuracies_train.append(class_acc_train)\n accuracies_test.append(class_acc_val)\n \n print('Epoch %s | Training Loss: %s, Training Accuracy: %s, Validation Loss: %s, Validation Accuracy: %s'\n %( epoch, loss_train.item(), class_acc_train, loss_val.item(), class_acc_val ) )\n\nelif vae_type == 'conv':\n print (\"conv\")\n model = VAE_conv(layers_enc_pre_view,enc_view,layers_enc_post_view,layers_ae,layers_dec)\n \n prams = list(model.parameters())\n\n optimizer = torch.optim.Adam(prams, lr = 0.001)\n\n x_train_data, x_val_data = train_test_split(x_train, test_size = 0.1)\n\n ins_train = x_train_data.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_train = torch.Tensor(ins_train)\n ins_train = torch.argmax(ins_train,1)\n\n ins_val = x_val_data.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_val = torch.Tensor(ins_val)\n ins_val = torch.argmax(ins_val,1)\n\n for epoch in range(nb_epoch):\n model.train()\n\n train = np.random.permutation(x_train_data)\n train = train.reshape(-1,batch_size,PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n train = torch.Tensor(train)\n train = train.transpose(-2,-1)\n\n for batch in train:\n out = model(batch)\n\n batch = batch.transpose(-2,-1).reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n batch = torch.argmax(batch,1)\n out = out.reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n loss,_,_ 
= vae_loss(batch,out,model.z_mean,model.z_log_var)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n model.eval()\n\n out_train = model(torch.Tensor(x_train_data).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)).transpose(-2,-1))\n out_train = torch.Tensor(out_train)\n out_train = out_train.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_train = torch.argmax(out_train,dim=1)\n bool_train = (classpreds_train==ins_train)\n class_acc_train = bool_train.sum().item()/bool_train.shape[0]\n\n out_val = model(torch.Tensor(x_val_data).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)).transpose(-2,-1))\n out_val = torch.Tensor(out_val)\n out_val = out_val.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n classpreds_val = torch.argmax(out_val,dim=1)\n bool_val = (classpreds_val==ins_val)\n class_acc_val = bool_val.sum().item()/bool_val.shape[0]\n\n loss_train,_,_ = vae_loss(ins_train,out_train,model.z_mean,model.z_log_var)\n loss_val,_,_ = vae_loss(ins_val,out_val,model.z_mean,model.z_log_var)\n\n losses_train.append(loss_train)\n losses_test.append(loss_val)\n accuracies_train.append(class_acc_train)\n accuracies_test.append(class_acc_val)\n \n print('Epoch %s | Training Loss: %s, Training Accuracy: %s, Validation Loss: %s, Validation Accuracy: %s'\n %( epoch, loss_train.item(), class_acc_train, loss_val.item(), class_acc_val ) )\n \nelif vae_type == 'rec':\n print (\"rec\")\n if lang_mod:\n print(\"language model training\")\n else:\n print(\"vae training\")\n \n alpha = 50000\n beta = 0.005\n print('KL annealing terms: alpha = {}, beta = {}'.format(alpha,beta))\n \n model = VAE_rec(layers_enc,layers_post_rec_enc,layers_ae,0,layers_dec,layers_dec_post_rec)\n \n if cuda:\n model = model.cuda()\n \n prams = list(model.parameters())\n\n optimizer = torch.optim.Adam(prams, lr = 0.01)\n\n x_train_data, x_val_data = train_test_split(x_train, test_size = 0.1)\n \n# print('FAKE TRAINING SET TO ASSESS REC VALIDITY')\n# x_train_data = np.array([[0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]]*3690000).reshape(45000,1968)\n \n# import pdb; pdb.set_trace()\n \n ins_train = x_train_data.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_train = torch.Tensor(ins_train)\n ins_train = torch.argmax(ins_train,1)\n\n ins_val = x_val_data.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n ins_val = torch.Tensor(ins_val)\n ins_val = torch.argmax(ins_val,1)\n \n ins_train = create_tensor(ins_train,gpu=cuda)\n ins_val = create_tensor(ins_val,gpu=cuda)\n \n \n# ## Printing model perf before\n# model.eval()\n \n# out_train = model(create_tensor(torch.Tensor(x_train_data),gpu=cuda).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)),False,lang_mod)\n# out_train = out_train.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n# classpreds_train = torch.argmax(out_train,dim=1)\n# bool_train = (classpreds_train==ins_train)\n# class_acc_train = bool_train.sum().item()/bool_train.shape[0]\n\n# out_val = model(create_tensor(torch.Tensor(x_val_data),gpu=cuda).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)),False,lang_mod)\n# out_val = out_val.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n# classpreds_val = torch.argmax(out_val,dim=1)\n# bool_val = (classpreds_val==ins_val)\n# class_acc_val = bool_val.sum().item()/bool_val.shape[0]\n\n# loss_train,xent_train,kl_train = vae_loss(ins_train,out_train,model.z_mean,model.z_log_var)\n# kl_train = sigmoid(beta*(-alpha))*kl_train # annealing\n# loss_train = xent_train + kl_train # annealing\n# 
loss_val,xent_val,kl_val = vae_loss(ins_val,out_val,model.z_mean,model.z_log_var)\n# kl_val = sigmoid(beta*(-alpha))*kl_val # annealing\n# loss_val = xent_val + kl_val # annealing\n\n# losses_train.append(loss_train.item())\n# losses_test.append(loss_val.item())\n# accuracies_train.append(class_acc_train)\n# accuracies_test.append(class_acc_val)\n# xents_train.append(xent_train.item())\n# xents_test.append(xent_val.item())\n# kls_train.append(kl_train.item())\n# kls_test.append(kl_val.item())\n\n# print('Pre-training | Training Loss: %s, Training Accuracy: %s, Validation Loss: %s, Validation Accuracy: %s'\n# %( loss_train.item(), class_acc_train, loss_val.item(), class_acc_val ) )\n \n for epoch in range(nb_epoch):\n print('Epoch {}'.format(epoch))\n \n model.train()\n\n train = np.random.permutation(x_train_data)\n train = train.reshape(-1,batch_size,PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n train = create_tensor(torch.Tensor(train),gpu=cuda)\n\n xents = []\n kls = []\n \n num_dum = -1\n\n optimizer.zero_grad()\n \n for batch in train:\n num_dum += 1\n out = model(batch,True,lang_mod)\n \n# import pdb; pdb.set_trace()\n batch = torch.argmax(batch,-1)\n batch = batch.reshape(-1)\n \n out = out.reshape(batch_size*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n loss,xent,kl = vae_loss(batch,out,model.z_mean,model.z_log_var)\n mult = epoch*len(x_train_data)/batch_size + num_dum # annealing\n kl = sigmoid(beta*(mult-alpha))*kl # annealing\n loss = xent + kl # annealing\n if num_dum % 1000 == 0:\n print((batch==torch.argmax(out,-1)).sum().item()/(batch_size*PRUNED_SEQ_LENGTH*1.0))\n xents.append(xent)\n kls.append(kl)\n\n if lang_mod:\n xent.backward()\n else:\n loss.backward() \n \n# for layer, paramval in model.named_parameters():\n# print(layer,paramval.grad)\n \n optimizer.step()\n \n# import pdb; pdb.set_trace()\n print('xent mean is:',torch.stack(xents).mean().item())\n print('kl mean is:',torch.stack(kls).mean().item())\n\n# model.eval()\n \n# # import pdb; pdb.set_trace()\n# out_train = model(create_tensor(torch.Tensor(x_train_data),gpu=cuda).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)),False,lang_mod)\n# out_train = out_train.reshape(len(x_train_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n# classpreds_train = torch.argmax(out_train,dim=1)\n# bool_train = (classpreds_train==ins_train)\n# class_acc_train = bool_train.sum().item()/bool_train.shape[0]\n\n# out_val = model(create_tensor(torch.Tensor(x_val_data),gpu=cuda).reshape(-1,PRUNED_SEQ_LENGTH,len(ORDER_LIST)),False,lang_mod)\n# out_val = out_val.reshape(len(x_val_data)*PRUNED_SEQ_LENGTH,len(ORDER_LIST))\n\n# classpreds_val = torch.argmax(out_val,dim=1)\n# bool_val = (classpreds_val==ins_val)\n# class_acc_val = bool_val.sum().item()/bool_val.shape[0]\n\n# loss_train,xent_train,kl_train = vae_loss(ins_train,out_train,model.z_mean,model.z_log_var)\n# mult = epoch*len(x_train_data)/batch_size + num_dum # annealing\n# kl_train = sigmoid(beta*(mult-alpha))*kl_train # annealing\n# loss_train = xent_train + kl_train # annealing\n# loss_val,xent_val,kl_val = vae_loss(ins_val,out_val,model.z_mean,model.z_log_var)\n# kl_val = sigmoid(beta*(mult-alpha))*kl_val # annealing\n# loss_val = xent_val + kl_val # annealing\n \n# losses_train.append(loss_train.item())\n# losses_test.append(loss_val.item())\n# accuracies_train.append(class_acc_train)\n# accuracies_test.append(class_acc_val)\n# xents_train.append(xent_train.item())\n# xents_test.append(xent_val.item())\n# kls_train.append(kl_train.item())\n# kls_test.append(kl_val.item())\n \n# 
print(classpreds_train)\n# print(classpreds_val)\n \n# print('Epoch %s | Training Loss: %s, Training Accuracy: %s, Validation Loss: %s, Validation Accuracy: %s'\n# %( epoch, loss_train.item(), class_acc_train, loss_val.item(), class_acc_val ) )",
"training full\nEpoch 0 | Training Loss: 51845.8828125, Training Accuracy: 0.5613212962962963, Validation Loss: 51856.19140625, Validation Accuracy: 0.5625208333333334\nEpoch 1 | Training Loss: 59084.19921875, Training Accuracy: 0.5698942129629629, Validation Loss: 59098.73046875, Validation Accuracy: 0.5718708333333333\nEpoch 2 | Training Loss: 60201.77734375, Training Accuracy: 0.5765546296296297, Validation Loss: 60201.64453125, Validation Accuracy: 0.5805270833333334\nEpoch 3 | Training Loss: 70057.5, Training Accuracy: 0.5996157407407408, Validation Loss: 70068.3984375, Validation Accuracy: 0.6023125\nEpoch 4 | Training Loss: 70746.8984375, Training Accuracy: 0.6048523148148148, Validation Loss: 70770.703125, Validation Accuracy: 0.6059270833333333\nEpoch 5 | Training Loss: 71713.140625, Training Accuracy: 0.6109006944444444, Validation Loss: 71734.3671875, Validation Accuracy: 0.6121541666666667\nEpoch 6 | Training Loss: 68660.671875, Training Accuracy: 0.62168125, Validation Loss: 68688.46875, Validation Accuracy: 0.6224645833333333\nEpoch 7 | Training Loss: 71220.96875, Training Accuracy: 0.6178604166666667, Validation Loss: 71246.765625, Validation Accuracy: 0.616675\nEpoch 8 | Training Loss: 71854.71875, Training Accuracy: 0.6007594907407408, Validation Loss: 71880.1171875, Validation Accuracy: 0.60060625\n"
]
],
[
[
"Let's explore the latent space",
"_____no_output_____"
]
],
[
[
"fit_xtrain = model(torch.Tensor(test_data_plus)).detach()\nz_means = model.z_mean.detach()",
"_____no_output_____"
],
[
"transposed_zmeans = np.array(z_means).transpose()\n\nplt.scatter(transposed_zmeans[0], transposed_zmeans[1], s = 1, linewidths = 0)\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.cluster import KMeans\n\nz_means_np = np.array(z_means)\nkmeans = KMeans(n_clusters=12, random_state=1).fit(z_means_np)",
"_____no_output_____"
],
[
"sample_points=len(z_means_np)\n\nlatent_dim = 2\nfig = plt.figure(figsize=(12,12))\ncounter=0\ncmap=kmeans.labels_\nfor z1 in range(latent_dim):\n for z2 in range(z1+1,latent_dim):\n counter+=1\n fig.add_subplot(latent_dim,latent_dim,counter)\n plt.title(str(z1)+\"_\"+str(z2))\n plt.scatter(z_means_np[:, z1][::-1], z_means_np[:, z2][::-1],c=cmap[::-1], s = 15, alpha=0.1,marker=\"o\")\n# plt.scatter(z_means_np[:, z1][::-1], z_means_np[:, z2][::-1],c=\"y\" ,alpha=0.3,marker=\"o\")\n plt.scatter(z_means_np[0][z1], z_means_np[0][z2],c=\"r\" ,alpha=1,s=40,marker=\"s\")\n plt.xlabel(\"Latent dim\"+str(z1+1))\n plt.ylabel(\"Latent dim\"+str(z2+1));\nplt.savefig(\"Try2_originalDropout.png\")\n",
"_____no_output_____"
],
[
"plt.pcolor(x_train[0].reshape(PRUNED_SEQ_LENGTH, len(ORDER_LIST)).transpose(1, 0))\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Training a classifier over the latent space",
"_____no_output_____"
]
],
[
[
"class_x = test_data_plus\nclass_y = np.array(target_values_singles)\nprint('USING SINGLES')\n\n# class_x = test_data_doubles_plus\n# class_y = np.array(target_values_doubles)\n# print('USING DOUBLES')",
"_____no_output_____"
],
[
"fit_total = model(torch.Tensor(class_x)).detach()\nlatent_data = model.z_mean.detach()",
"_____no_output_____"
],
[
"fit_total.shape",
"_____no_output_____"
],
[
"latent_data.shape",
"_____no_output_____"
],
[
"fit_total",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(np.array(latent_data[1:]), class_y, \n test_size = 0.3, random_state=10)",
"_____no_output_____"
],
[
"latentReg = GradientBoostingRegressor()\nlatentReg.fit(X_train, y_train)\n# latentReg.predict(X_test)\nlatentReg.score(X_test, y_test)",
"_____no_output_____"
],
[
"predic_train = latentReg.predict(X_train)\npredic_test = latentReg.predict(X_test)\nspearmanr(predic_train, y_train), spearmanr(predic_test, y_test)",
"_____no_output_____"
],
[
"plt.scatter(X_train[:,0], y_train)\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(y_train)\nplt.show()",
"_____no_output_____"
],
[
"px_data = np.concatenate([np.array(fitnesses).reshape(-1, 1), np.array(fitnesses_vs_wt).reshape(-1, 1),\n np.array(fitnesses_vs_avg).reshape(-1, 1)], axis = 1)",
"_____no_output_____"
],
[
"total_data = np.concatenate([latent_data, torch.Tensor(px_data)], axis = 1)",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(np.array(total_data)[1:], class_y, \n train_size = 1000, random_state=10)\n\nlatentReg = GradientBoostingRegressor()\nlatentReg.fit(X_train, y_train)\n# latentReg.predict(X_test)\nlatentReg.score(X_test, y_test)",
"_____no_output_____"
],
[
"spearmanr(latentReg.predict(X_test), y_test)",
"_____no_output_____"
]
],
[
[
"## Calculating P(X)",
"_____no_output_____"
]
],
[
[
"m = torch.nn.Softmax()",
"_____no_output_____"
],
[
"PRUNED_SEQ_LENGTH",
"_____no_output_____"
],
[
"reshaped_fit = np.array(m(fit_total.reshape(len(fit_total) * PRUNED_SEQ_LENGTH, \n len(ORDER_LIST))).reshape(len(fit_total), PRUNED_SEQ_LENGTH, len(ORDER_LIST))\n .transpose(2, 1))\n\n\n\n\n\n",
"/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:2: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.\n \n"
],
[
"plt.pcolor(reshaped_fit[0])",
"_____no_output_____"
],
[
"fit_total.shape",
"_____no_output_____"
],
[
"len(ORDER_LIST)",
"_____no_output_____"
],
[
"digit_wt",
"_____no_output_____"
],
[
"sample_size=len(fit_total)\nsample_for_averging_size=100\nsequence_size=PRUNED_SEQ_LENGTH\ndigit_size = len(ORDER_LIST)\n\ndigit = reshaped_fit[0]#fit_xtrain_softmax_reshaped[0]\ndigit_wt = digit\ndigit_wt = normalize(digit,axis=0, norm='l1')\n# print (digit_wt)\n\n\nwt_prob=compute_log_probability(reshaped_fit[0].reshape(digit_size, sequence_size),digit_wt)\n#print (\"wt_log_prob: \", wt_prob)\n\nwt_probs=[]\ndigit_avg=np.zeros((digit_size, sequence_size))\n\n\nsample_indices=random.sample(range(sample_size),sample_for_averging_size)\n\ncounter=0\nfor sample in sample_indices:\n digit = reshaped_fit[sample]\n# print (digit)\n# print (digit_avg)\n# digit_wt_i = normalize(digit,axis=0, norm='l1')\n digit_wt_i = digit\n \n# print (digit_wt_i)\n \n digit_avg+=np.array(digit_wt_i) * 1. / sample_for_averging_size\n \n wt_p=compute_log_probability(reshaped_fit[sample].reshape(digit_size, sequence_size),digit_wt_i)\n wt_probs.append(wt_p)\n counter+=1\n \naverage_wt_p=np.mean(wt_probs)\n\nfitnesses_vs_wt=[]\nfitnesses=[] #first plug in just the sequences\nfitnesses_vs_avg=[] \n\nfor sample in range(0,sample_size):\n digit = reshaped_fit[sample]\n# digit = normalize(digit,axis=0, norm='l1')\n \n fitness=compute_log_probability(reshaped_fit[sample].reshape(digit_size, sequence_size),digit)-wt_prob\n fitnesses.append(fitness)\n \n fitness=compute_log_probability(reshaped_fit[sample].reshape(digit_size, sequence_size),digit_wt)-wt_prob\n fitnesses_vs_wt.append(fitness)\n \n fitness=compute_log_probability(reshaped_fit[sample].reshape(digit_size, sequence_size),digit_avg)-average_wt_p\n fitnesses_vs_avg.append(fitness)\n \n \ntest_data = class_y\nprint (\"Spearman\",spearmanr(fitnesses_vs_avg[1:],test_data[:sample_size]))\nprint (\"Pearson\", pearsonr(fitnesses_vs_avg[1:],test_data[:sample_size]))\nprint ('------------------------------')\nprint (\"Spearman\",spearmanr(fitnesses_vs_wt[1:],test_data[:sample_size]))\nprint (\"Pearson\", pearsonr(fitnesses_vs_wt[1:],test_data[:sample_size]))\nprint ('------------------------------')\nprint (\"Spearman\",spearmanr(fitnesses[1:],test_data[:sample_size]))\nprint (\"Pearson\", pearsonr(fitnesses[1:],test_data[:sample_size]))",
"/home/anirudh_suresh/VAE_protein_function/helper_tools.py:115: RuntimeWarning: divide by zero encountered in log\n log_prod_mat=np.log(prod_mat)\n"
],
[
"plt.scatter(fitnesses_vs_wt, sarkisyan['quantitative_function'][1:sample_size])",
"_____no_output_____"
],
[
"reshaped_fit_sarkisyan[0].reshape(digit_size, sequence_size).T",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf7dc004a98ca1111e7e9eb74416278aeeb6962 | 9,212 | ipynb | Jupyter Notebook | Array/1229/1366. Rank Teams by Votes.ipynb | YuHe0108/Leetcode | 90d904dde125dd35ee256a7f383961786f1ada5d | [
"Apache-2.0"
] | 1 | 2020-08-05T11:47:47.000Z | 2020-08-05T11:47:47.000Z | Array/1229/1366. Rank Teams by Votes.ipynb | YuHe0108/LeetCode | b9e5de69b4e4d794aff89497624f558343e362ad | [
"Apache-2.0"
] | null | null | null | Array/1229/1366. Rank Teams by Votes.ipynb | YuHe0108/LeetCode | b9e5de69b4e4d794aff89497624f558343e362ad | [
"Apache-2.0"
] | null | null | null | 23.321519 | 505 | 0.41218 | [
[
[
"from collections import defaultdict\n\nclass Solution:\n def rankTeams(self, votes):\n count = defaultdict(dict)\n for v in votes:\n for i, a in enumerate(v):\n if i not in count[a]:\n count[a][i] = 1\n else:\n count[a][i] += 1\n print(count)",
"_____no_output_____"
],
[
"import collections\n\nclass Solution:\n def rankTeams(self, votes) -> str: \n base=[''.join(a) for a in zip(*votes)]\n cnt=[collections.Counter(x) for x in base]\n ans = list(votes[0])\n ans.sort(key=lambda x: tuple([-a[x] for a in cnt]))\n return ''.join(ans)",
"_____no_output_____"
],
[
"solution = Solution()\nsolution.rankTeams(votes = [\"ABC\",\"ACB\",\"ABC\",\"ACB\",\"ACB\"])",
"[Counter({'A': 5}), Counter({'C': 3, 'B': 2}), Counter({'B': 3, 'C': 2})]\n['A', 'B', 'C']\n['A', 'B', 'C']\n['A', 'C', 'B']\n"
],
[
"from collections import defaultdict\n\nclass Solution:\n def rankTeams(self, votes):\n count = defaultdict(dict)\n for v in votes:\n for i, a in enumerate(v):\n if i not in count[a]:\n count[a][i] = 1\n else:\n count[a][i] += 1\n \n values = {}\n for a, v in count.items():\n v = sorted(v.items(), key=lambda x:x[0])\n item = ''\n for k, v1 in v:\n item += str(k) * v1\n values[a] = item\n print(values)\n outs = sorted(values.items(), key=lambda x:(x[1], x[0]))\n res = [k for k, v in outs]\n return ''.join(res)",
"_____no_output_____"
],
[
"solution = Solution()\nsolution.rankTeams([\"ZMYLBOPHRQICNWFXTVKAGUEDSJ\"])",
"[Counter({'Z': 1}), Counter({'M': 1}), Counter({'Y': 1}), Counter({'L': 1}), Counter({'B': 1}), Counter({'O': 1}), Counter({'P': 1}), Counter({'H': 1}), Counter({'R': 1}), Counter({'Q': 1}), Counter({'I': 1}), Counter({'C': 1}), Counter({'N': 1}), Counter({'W': 1}), Counter({'F': 1}), Counter({'X': 1}), Counter({'T': 1}), Counter({'V': 1}), Counter({'K': 1}), Counter({'A': 1}), Counter({'G': 1}), Counter({'U': 1}), Counter({'E': 1}), Counter({'D': 1}), Counter({'S': 1}), Counter({'J': 1})]\n['Z', 'M', 'Y', 'L', 'B', 'O', 'P', 'H', 'R', 'Q', 'I', 'C', 'N', 'W', 'F', 'X', 'T', 'V', 'K', 'A', 'G', 'U', 'E', 'D', 'S', 'J']\n['Z', 'M', 'Y', 'L', 'B', 'O', 'P', 'H', 'R', 'Q', 'I', 'C', 'N', 'W', 'F', 'X', 'T', 'V', 'K', 'A', 'G', 'U', 'E', 'D', 'S', 'J']\n['Z', 'M', 'Y', 'L', 'B', 'O', 'P', 'H', 'R', 'Q', 'I', 'C', 'N', 'W', 'F', 'X', 'T', 'V', 'K', 'A', 'G', 'U', 'E', 'D', 'S', 'J']\n"
],
[
"from functools import cmp_to_key\n\nclass Solution:\n def rankTeams(self, votes) -> str:\n counts = {}\n length = len(votes[0]) # 一共给多少人排名\n for char in votes[0]: # 初始化每个人的得票都是 0\n counts[char] = [0] * length\n for vote in votes:\n for i, char in enumerate(vote):\n counts[char][i] += 1\n # print(counts)\n counts = sorted(list(counts.items()), key=lambda x:x[0])\n # print(counts)\n\n def sort_key(x, y) -> bool:\n # 自定义排序\n x, y = list(x[1]), list(y[1])\n for i in range(len(x)):\n if x[i] != y[i]:\n return y[i]-x[i] # 对于相同位置的数字,谁的数值更大谁得分更高\n else:\n continue\n return True\n \n counts = sorted(counts, key=cmp_to_key(sort_key))\n # print(counts)\n ans = ''.join([x[0] for x in counts])\n return ans",
"_____no_output_____"
],
[
"solution = Solution()\nsolution.rankTeams(votes = [\"ABC\",\"ACB\",\"ABC\",\"ACB\",\"ACB\"])",
"[0, 2, 3] [5, 0, 0]\n[0, 3, 2] [0, 2, 3]\n[0, 3, 2] [0, 2, 3]\n[0, 3, 2] [5, 0, 0]\n"
],
[
"'AAAB' > 'AAA'",
"_____no_output_____"
],
[
"'0001' > '0000'",
"_____no_output_____"
],
[
"def helper(a):\n if a > 1:\n return True\n return False\na = [1, 2, 3]\nb = sorted(a, key=helper)",
"_____no_output_____"
],
[
"b",
"_____no_output_____"
],
[
"chr(0 + 65)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf80400567f79f0c2526100b6576bba42ceb2b6 | 2,213 | ipynb | Jupyter Notebook | Interface202106_02/Chapter2.ipynb | zfukuoka/Copying_a_sutra | 2d0b1f781fc029ae0108b639e893708a8c45cee2 | [
"BSD-2-Clause"
] | null | null | null | Interface202106_02/Chapter2.ipynb | zfukuoka/Copying_a_sutra | 2d0b1f781fc029ae0108b639e893708a8c45cee2 | [
"BSD-2-Clause"
] | null | null | null | Interface202106_02/Chapter2.ipynb | zfukuoka/Copying_a_sutra | 2d0b1f781fc029ae0108b639e893708a8c45cee2 | [
"BSD-2-Clause"
] | null | null | null | 24.588889 | 250 | 0.441934 | [
[
[
"<a href=\"https://colab.research.google.com/github/zfukuoka/Copying_a_sutra/blob/master/Interface202106_02/Chapter2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# 第2章 Pythonプログラムが実行される仕組み\n",
"_____no_output_____"
]
],
[
[
"# リスト 番号なし1\n\nimport dis\n\ndef my_subtract(x, y):\n return x - y\n\n\ndis.dis(my_subtract)",
" 6 0 LOAD_FAST 0 (x)\n 2 LOAD_FAST 1 (y)\n 4 BINARY_SUBTRACT\n 6 RETURN_VALUE\n"
]
],
[
[
"## バイトコード\n\n* 上記は実装のバイトコードを表示する仕組み\n* 仮想マシンはレジスタはなく、スタック\n * このため、LOAD_FAST でスタックに値を展開\n * そして、計算結果もスタックに格納\n* 減算命令は BINARY_SUBTRACT で実行\n* 関数からの復帰は RETURN_VALUE で実行\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecf8191459f1dfaee7033e8d6572dd7c330aae3c | 6,905 | ipynb | Jupyter Notebook | DataCollection/craigslist_scraper.ipynb | wyndwarrior/HouseRank | 0697c39b8625724df989f081cacb20751d33f88c | [
"MIT"
] | null | null | null | DataCollection/craigslist_scraper.ipynb | wyndwarrior/HouseRank | 0697c39b8625724df989f081cacb20751d33f88c | [
"MIT"
] | null | null | null | DataCollection/craigslist_scraper.ipynb | wyndwarrior/HouseRank | 0697c39b8625724df989f081cacb20751d33f88c | [
"MIT"
] | null | null | null | 29.508547 | 108 | 0.461694 | [
[
[
"import pandas as pd\nimport urllib\nfrom bs4 import BeautifulSoup as bs4\nimport requests\n%pylab inline",
"_____no_output_____"
]
],
[
[
"# Scraper 1\n\nFast craigslist scraper. Only gets price, size, title, city",
"_____no_output_____"
]
],
[
[
"url_base = 'http://sfbay.craigslist.org/search/eby/apa'\nparams = dict(search_distance=4, postal=94720)\nrsp = requests.get(url_base, params=params)\nhtml = bs4(rsp.text, 'html.parser')\napts = html.find_all('p', attrs={'class': 'row'})",
"http://sfbay.craigslist.org/search/eby/apa?postal=94720&search_distance=4\n"
],
[
"import time\ncl_data = []\nfor i in [0,100,200,300,400,500,600,700,800,900,1000,1100]:\n params = dict(search_distance=4, postal=94720,s=i)\n rsp = requests.get(url_base, params=params)\n html = bs4(rsp.text, 'html.parser')\n apts = html.find_all('p', attrs={'class': 'row'})\n for apt in apts:\n url = \"https://sfbay.craigslist.org\" + apt.find('a', attrs={'class': 'hdrlnk'})['href']\n try:\n size = apt.findAll(attrs={'class': 'housing'})[0].text\n except IndexError:\n size = \"Not Listed\"\n title = apt.find('a',attrs={'class': 'hdrlnk'}).text\n try:\n price = apt.findAll(attrs={'class': 'price'})[0].text\n except IndexError:\n price = \"Not Listed\"\n location = apt.findAll(attrs={'class': 'pnr'})[0].text\n #print url,size,title,price,location\n cl_string = url + \",\" + size + \",\" + title + \",\" + price + \",\" + location + \"\\n\"\n cl_data.append(cl_string)\n time.sleep(5)",
"_____no_output_____"
],
[
"f1=open('cl.csv', 'w+')\nf1.write('url,size,title,price,location\\n')\nfor data in cl_data:\n try:\n f1.write(data)\n except:\n pass\nf1.close()\n\nprint \"done\"",
"done\n"
]
],
[
[
"# Scraper 2\n\nMore thorough, grabs size, price, city, lat/long, features, open house, images",
"_____no_output_____"
]
],
[
[
"import time, json\ncl_data = []\nfor i in [400,500,600,700,800,900,1000]:\n time.sleep(3)\n url_base = 'http://sfbay.craigslist.org/search/eby/apa'\n params = dict(search_distance=4, postal=94720,s=i)\n rsp = requests.get(url_base, params=params)\n html = bs4(rsp.text, 'html.parser')\n apts = html.find_all('p', attrs={'class': 'row'})\n #for apt in apts:\n data = {}\n for apt in apts:\n time.sleep(1)\n url = \"https://sfbay.craigslist.org\" + apt.find('a', attrs={'class': 'hdrlnk'})['href']\n r = urllib.urlopen(url).read()\n soup = bs4(r)\n final_dict = {}\n title = soup.findAll(\"span\", {\"id\": \"titletextonly\"})[0].text\n try:\n size = soup.find(\"span\", {\"class\": \"housing\"}).text\n except:\n size = \"n/a\"\n try:\n price = soup.findAll(\"span\", {\"class\": \"price\"})[0].text\n except:\n price = \"n/a\"\n try:\n city = soup.findAll(\"small\")[0].text\n except:\n city = \"n/a\"\n try:\n longitude = soup.findAll(\"div\", {\"class\": \"viewposting\"})[0]['data-longitude']\n latitude = soup.findAll(\"div\", {\"class\": \"viewposting\"})[0]['data-latitude']\n except:\n longitude = \"n/a\"\n latitude = \"n/a\"\n try:\n features = soup.find(id='postingbody').text\n except:\n features = \"n/a\"\n try:\n open_house = soup.find(\"span\", {\"class\": \"otherpostings\"}).text\n except:\n open_house = \"n/a\"\n images = []\n gmap = \"n/a\"\n for a in soup.find_all('a', href=True):\n if \"images.craigslist.org\" in a['href']:\n images.append(a['href'])\n if \"maps.google.com\" in a['href']:\n gmap = a['href']\n final_dict['title'] = title\n final_dict['price'] = price\n final_dict['city'] = city\n final_dict['longitude'] = longitude\n final_dict['latitude'] = latitude\n final_dict['features'] = features\n final_dict['open_house'] = open_house\n final_dict['images'] = images\n final_dict['gmap'] = gmap\n final_dict['size'] = size\n\n data[url] = final_dict\n \n filename = \"data\" + str(i) + \".json\"\n with open(filename, 'w') as outfile:\n json.dump(data, outfile)\n\n \n ",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf823628346335a332142a9a75ee662bf829095 | 114,552 | ipynb | Jupyter Notebook | autoencoder2.ipynb | jeonjw25/kamp_AI_data_competition | e8a55a7513c2fe98160c12a413d7735f4279b0df | [
"MIT"
] | null | null | null | autoencoder2.ipynb | jeonjw25/kamp_AI_data_competition | e8a55a7513c2fe98160c12a413d7735f4279b0df | [
"MIT"
] | null | null | null | autoencoder2.ipynb | jeonjw25/kamp_AI_data_competition | e8a55a7513c2fe98160c12a413d7735f4279b0df | [
"MIT"
] | null | null | null | 136.371429 | 35,122 | 0.827511 | [
[
[
"import numpy as np\nimport pandas as pd\nfrom tensorflow.keras.layers import Dense, Dropout\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.callbacks import EarlyStopping\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler\nfrom sklearn.metrics import confusion_matrix\n",
"_____no_output_____"
],
[
"# 데이터 불러오기 및 중복제거\nlabeled_data = pd.read_csv('labeled.csv')\nunlabeled_data = pd.read_csv('unlabeled.csv')\nlabeled_data['PassOrFail'] = labeled_data['PassOrFail'].replace('Y', 0).replace('N', 1)\nlabeled_data.drop_duplicates(inplace=True)\n",
"_____no_output_____"
],
[
"# unlabeled data features 확인\nunlabeled_data.columns\n",
"_____no_output_____"
],
[
"# labeled data features 확인\nlabeled_data.columns\n",
"_____no_output_____"
],
[
"def make_input(data, machine_name, product_name):\n machine_ = data['EQUIP_NAME'] == machine_name\n product_ = data['PART_NAME'] == product_name\n data = data[machine_ & product_]\n data.drop(['_id', 'TimeStamp', 'PART_FACT_PLAN_DATE', 'PART_FACT_SERIAL', 'PART_NAME', 'EQUIP_CD', 'EQUIP_NAME', 'Mold_Temperature_1','Mold_Temperature_2','Mold_Temperature_5','Mold_Temperature_6','Mold_Temperature_7','Mold_Temperature_8','Mold_Temperature_9','Mold_Temperature_10','Mold_Temperature_11','Mold_Temperature_12'], axis=1, inplace=True)\n return data",
"_____no_output_____"
],
[
"# unlabeled에서 우진2호기만 뽑아내고 cn7, rg3, cn7+rg3 데이터구분\nmachine_name = '650톤-우진2호기'\nproduct_name = [\"CN7 W/S SIDE MLD'G LH\", \"CN7 W/S SIDE MLD'G RH\", \"RG3 MOLD'G W/SHLD, RH\", \"RG3 MOLD'G W/SHLD, LH\"]\ncn7lh = make_input(unlabeled_data, machine_name, product_name[0])\ncn7rh = make_input(unlabeled_data, machine_name, product_name[1])\n\nrg3lh = make_input(unlabeled_data, machine_name, product_name[2])\nrg3rh = make_input(unlabeled_data, machine_name, product_name[3])\n\ncn7_train = pd.concat([cn7lh, cn7rh], ignore_index=True) # unlabeled에서 cn7만 추출\nrg3_train = pd.concat([rg3lh, rg3rh], ignore_index=True) # unlabeled에서 rg3만 추출\ncn_rg_train = pd.concat([cn7lh, cn7rh, rg3lh, rg3rh], ignore_index=True) # unlabeled에 cn7 + rg3\n\n# 각각 필요없는 col 제거\ncn7_train.drop(['Unnamed: 0', 'Switch_Over_Position', 'Barrel_Temperature_7', 'PART_NO', 'ERR_FACT_QTY'], axis=1, inplace=True)\nrg3_train.drop(['Unnamed: 0', 'Plasticizing_Position', 'PART_NO', 'ERR_FACT_QTY'], axis=1, inplace=True)\ncn_rg_train.drop(['Unnamed: 0', 'PART_NO', 'ERR_FACT_QTY'], axis=1, inplace=True)\n\n",
"C:\\Users\\jeonj\\anaconda3\\lib\\site-packages\\pandas\\core\\frame.py:4906: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return super().drop(\n"
],
[
"for column in cn_rg_train.columns:\n if cn_rg_train['Injection_Time'][cn_rg_train[column]==0].count() == len(cn_rg_train):\n print(column) ",
"_____no_output_____"
],
[
"# 트레이닝셋 shape 확인\nprint(\"CN7+RG3: \", cn_rg_train.shape)\nprint(\"CN7: \", cn7_train.shape)\nprint(\"RG3: \", rg3_train.shape)",
"CN7+RG3: (71180, 26)\nCN7: (35239, 24)\nRG3: (35941, 25)\n"
],
[
"# 테스트 데이터셋 구성\n# labeled 에서 cn7, rg3, cn7+rg3 데이터 구분\nproduct_name = [\"CN7 W/S SIDE MLD'G LH\", \"CN7 W/S SIDE MLD'G RH\", \"RG3 MOLD'G W/SHLD, RH\", \"RG3 MOLD'G W/SHLD, LH\"]\ncn7lh = make_input(labeled_data, machine_name, product_name[0])\ncn7rh = make_input(labeled_data, machine_name, product_name[1])\n\nrg3lh = make_input(labeled_data, machine_name, product_name[2])\nrg3rh = make_input(labeled_data, machine_name, product_name[3])\n\ncn7_test = pd.concat([cn7lh, cn7rh], ignore_index=True) # cn7\nrg3_test = pd.concat([rg3lh, rg3rh], ignore_index=True) # rg3\ncn_rg_test = pd.concat([cn7lh, cn7rh, rg3lh, rg3rh], ignore_index=True) # cn7 + rg3\n",
"_____no_output_____"
],
[
"#각 테스트셋 양품 불량품 갯수확인 \ncn_pdf = cn7_test.loc[cn7_test['PassOrFail'] == 0].copy()\ncn_ndf = cn7_test.loc[cn7_test['PassOrFail'] == 1].copy()\n\nrg_pdf = rg3_test.loc[rg3_test['PassOrFail'] == 0].copy()\nrg_ndf = rg3_test.loc[rg3_test['PassOrFail'] == 1].copy()\n\ncn_rg_pdf = cn_rg_test.loc[cn_rg_test['PassOrFail'] == 0].copy()\ncn_rg_ndf = cn_rg_test.loc[cn_rg_test['PassOrFail'] == 1].copy()\n\nprint(\"CN7 양품: \", cn_pdf.shape, \"CN7 불량품:\" , cn_ndf.shape)\nprint(\"RG3 양품: \", rg_pdf.shape, \"RG3 불량품:\" , rg_ndf.shape)\nprint(\"CN+RG 양품: \", cn_rg_pdf.shape, \"CN+RG 불량품:\" , cn_rg_ndf.shape)",
"CN7 양품: (3946, 28) CN7 불량품: (28, 28)\nRG3 양품: (1224, 28) RG3 불량품: (32, 28)\nCN+RG 양품: (5170, 28) CN+RG 불량품: (60, 28)\n"
],
[
"# 테스트셋 feature 제거\ncn_pdf.drop(['PassOrFail', 'Reason', 'Switch_Over_Position', 'Barrel_Temperature_7'], axis=1, inplace=True) #col = 24\ncn_ndf.drop(['PassOrFail', 'Reason', 'Switch_Over_Position', 'Barrel_Temperature_7'], axis=1, inplace=True) #col = 24\n\nrg_pdf.drop(['PassOrFail', 'Reason', 'Plasticizing_Position'], axis=1, inplace=True) #col = 25\nrg_ndf.drop(['PassOrFail', 'Reason', 'Plasticizing_Position'], axis=1, inplace=True) #col = 25\n\ncn_rg_pdf.drop(['PassOrFail', 'Reason'], axis=1, inplace=True) #col = 26\ncn_rg_ndf.drop(['PassOrFail', 'Reason'], axis=1, inplace=True) #col = 26\n\n\n",
"_____no_output_____"
],
[
"# shape 확인\nprint(\"######### 테스트셋 ###########\")\nprint(\"CN7 양품: \", cn_pdf.shape, \"CN7 불량품:\" , cn_ndf.shape)\nprint(\"RG3 양품: \", rg_pdf.shape, \"RG3 불량품:\" , rg_ndf.shape)\nprint(\"CN+RG 양품: \", cn_rg_pdf.shape, \"CN+RG 불량품:\" , cn_rg_ndf.shape)\nprint(\"\\n######### 트레이닝셋 ###########\")\nprint(\"CN7: \", cn7_train.shape)\nprint(\"RG3: \", rg3_train.shape)\nprint(\"CN7+RG3: \", cn_rg_train.shape)\n",
"######### 테스트셋 ###########\nCN7 양품: (3946, 24) CN7 불량품: (28, 24)\nRG3 양품: (1224, 25) RG3 불량품: (32, 25)\nCN+RG 양품: (5170, 26) CN+RG 불량품: (60, 26)\n\n######### 트레이닝셋 ###########\nCN7: (35239, 24)\nRG3: (35941, 25)\nCN7+RG3: (71180, 26)\n"
],
[
"# 데이터셋 정규화\nscaler = MinMaxScaler()\ncn7_X = scaler.fit_transform(cn7_train) # 언라벨데이터\ncn7_Y = scaler.transform(cn_pdf) # 라벨\ncn7_N = scaler.fit_transform(cn_ndf) # 라벨\n\nrg3_X = scaler.fit_transform(rg3_train) # 언라벨\nrg3_Y = scaler.transform(rg_pdf) # 라벨\nrg3_N = scaler.fit_transform(rg_ndf) # 라벨\n\ncnrg_X = scaler.fit_transform(cn_rg_train) # 언라벨\ncnrg_Y = scaler.transform(cn_rg_pdf) # 라벨\ncnrg_N = scaler.transform(cn_rg_ndf) # 라벨\n\n# sol2) CN7 정규화를 먼저 해버리고 트레이닝 테스트셋 나누기\n# cn7_df = pd.concat([cn7_train, cn_pdf, cn_ndf], ignore_index=True)\n# norm_cn7 = scaler.fit_transform(cn7_df)\n# cn7_X = norm_cn7[:35239]\n# cn7_Y = norm_cn7[35239:39185]\n# cn7_N = norm_cn7[39185:]\n\n",
"_____no_output_____"
],
[
"# 모델링\ndropout_encoder = Sequential([\n Dropout(0.3),\n Dense(15, activation='relu'),\n Dense(5, activation='relu')\n])\n\ndropout_decoder = Sequential([\n Dense(15, activation='relu', input_shape=[5]),\n Dense(cn7_X.shape[1], activation='relu')\n])\n\ndropout_AE = Sequential([dropout_encoder, dropout_decoder])",
"_____no_output_____"
],
[
"dropout_AE.compile(loss='mse', optimizer=Adam(lr=0.01), metrics=['accuracy'])\nhistory = dropout_AE.fit(cn7_X, cn7_X, batch_size=30, epochs=100, validation_split=0.2)\ncallbacks = [EarlyStopping(monitor='val_loss', patience=7, mode='min')]",
"Epoch 1/100\n940/940 [==============================] - 1s 903us/step - loss: 0.0865 - accuracy: 0.3479 - val_loss: 0.0844 - val_accuracy: 0.9729\nEpoch 2/100\n940/940 [==============================] - 1s 922us/step - loss: 0.0828 - accuracy: 0.3568 - val_loss: 0.0823 - val_accuracy: 0.9729\nEpoch 3/100\n940/940 [==============================] - 1s 978us/step - loss: 0.0824 - accuracy: 0.3588 - val_loss: 0.0818 - val_accuracy: 0.9729\nEpoch 4/100\n940/940 [==============================] - 1s 985us/step - loss: 0.0822 - accuracy: 0.3644 - val_loss: 0.0819 - val_accuracy: 0.9729\nEpoch 5/100\n940/940 [==============================] - 1s 967us/step - loss: 0.0821 - accuracy: 0.3616 - val_loss: 0.0817 - val_accuracy: 0.3253\nEpoch 6/100\n940/940 [==============================] - 1s 968us/step - loss: 0.0820 - accuracy: 0.3654 - val_loss: 0.0815 - val_accuracy: 0.9729\nEpoch 7/100\n940/940 [==============================] - 1s 921us/step - loss: 0.0818 - accuracy: 0.3514 - val_loss: 0.0816 - val_accuracy: 0.9729\nEpoch 8/100\n940/940 [==============================] - 1s 1ms/step - loss: 0.0818 - accuracy: 0.3571 - val_loss: 0.0815 - val_accuracy: 0.9729\nEpoch 9/100\n940/940 [==============================] - 1s 908us/step - loss: 0.0817 - accuracy: 0.3436 - val_loss: 0.0814 - val_accuracy: 0.9729\nEpoch 10/100\n940/940 [==============================] - 1s 840us/step - loss: 0.0822 - accuracy: 0.3558 - val_loss: 0.0837 - val_accuracy: 0.3253\nEpoch 11/100\n940/940 [==============================] - 1s 904us/step - loss: 0.0823 - accuracy: 0.3571 - val_loss: 0.0845 - val_accuracy: 0.9729\nEpoch 12/100\n940/940 [==============================] - 1s 931us/step - loss: 0.0733 - accuracy: 0.3495 - val_loss: 0.0637 - val_accuracy: 0.3253\nEpoch 13/100\n940/940 [==============================] - 1s 959us/step - loss: 0.0644 - accuracy: 0.3648 - val_loss: 0.0640 - val_accuracy: 0.9729\nEpoch 14/100\n940/940 [==============================] - 1s 988us/step - loss: 0.0644 - accuracy: 0.3498 - val_loss: 0.0641 - val_accuracy: 0.9729\nEpoch 15/100\n940/940 [==============================] - 1s 975us/step - loss: 0.0645 - accuracy: 0.3531 - val_loss: 0.0637 - val_accuracy: 0.3276\nEpoch 16/100\n940/940 [==============================] - 1s 827us/step - loss: 0.0528 - accuracy: 0.3147 - val_loss: 0.0361 - val_accuracy: 0.9729\nEpoch 17/100\n940/940 [==============================] - 1s 716us/step - loss: 0.0470 - accuracy: 0.3555 - val_loss: 0.0363 - val_accuracy: 0.3253\nEpoch 18/100\n940/940 [==============================] - 1s 717us/step - loss: 0.0470 - accuracy: 0.3521 - val_loss: 0.0361 - val_accuracy: 0.9729\nEpoch 19/100\n940/940 [==============================] - 1s 717us/step - loss: 0.0470 - accuracy: 0.3572 - val_loss: 0.0357 - val_accuracy: 0.9729\nEpoch 20/100\n940/940 [==============================] - 1s 709us/step - loss: 0.0468 - accuracy: 0.3501 - val_loss: 0.0357 - val_accuracy: 0.3253\nEpoch 21/100\n940/940 [==============================] - 1s 715us/step - loss: 0.0386 - accuracy: 0.3485 - val_loss: 0.0133 - val_accuracy: 0.9729\nEpoch 22/100\n940/940 [==============================] - 1s 708us/step - loss: 0.0237 - accuracy: 0.3404 - val_loss: 0.0129 - val_accuracy: 0.9729\nEpoch 23/100\n940/940 [==============================] - 1s 713us/step - loss: 0.0237 - accuracy: 0.3389 - val_loss: 0.0123 - val_accuracy: 0.9729\nEpoch 24/100\n940/940 [==============================] - 1s 724us/step - loss: 0.0238 - accuracy: 0.3400 - val_loss: 0.0130 - val_accuracy: 0.9729\nEpoch 
25/100\n940/940 [==============================] - 1s 716us/step - loss: 0.0238 - accuracy: 0.3435 - val_loss: 0.0133 - val_accuracy: 0.9729\nEpoch 26/100\n940/940 [==============================] - 1s 721us/step - loss: 0.0238 - accuracy: 0.3334 - val_loss: 0.0124 - val_accuracy: 0.9729\nEpoch 27/100\n940/940 [==============================] - 1s 862us/step - loss: 0.0237 - accuracy: 0.3392 - val_loss: 0.0124 - val_accuracy: 0.9729\nEpoch 28/100\n940/940 [==============================] - 1s 845us/step - loss: 0.0237 - accuracy: 0.3350 - val_loss: 0.0129 - val_accuracy: 0.9729\nEpoch 29/100\n940/940 [==============================] - 1s 898us/step - loss: 0.0237 - accuracy: 0.3429 - val_loss: 0.0127 - val_accuracy: 0.9729\nEpoch 30/100\n940/940 [==============================] - 1s 832us/step - loss: 0.0237 - accuracy: 0.3416 - val_loss: 0.0122 - val_accuracy: 0.9729\nEpoch 31/100\n940/940 [==============================] - 1s 887us/step - loss: 0.0237 - accuracy: 0.3294 - val_loss: 0.0125 - val_accuracy: 0.3253\nEpoch 32/100\n940/940 [==============================] - 1s 842us/step - loss: 0.0237 - accuracy: 0.3373 - val_loss: 0.0130 - val_accuracy: 0.9729\nEpoch 33/100\n940/940 [==============================] - 1s 929us/step - loss: 0.0237 - accuracy: 0.3347 - val_loss: 0.0129 - val_accuracy: 0.3253\nEpoch 34/100\n940/940 [==============================] - 1s 905us/step - loss: 0.0238 - accuracy: 0.3382 - val_loss: 0.0121 - val_accuracy: 0.3253\nEpoch 35/100\n940/940 [==============================] - 1s 905us/step - loss: 0.0237 - accuracy: 0.3427 - val_loss: 0.0124 - val_accuracy: 0.9729\nEpoch 36/100\n940/940 [==============================] - 1s 908us/step - loss: 0.0239 - accuracy: 0.3403 - val_loss: 0.0136 - val_accuracy: 0.9729\nEpoch 37/100\n940/940 [==============================] - 1s 924us/step - loss: 0.0238 - accuracy: 0.3428 - val_loss: 0.0127 - val_accuracy: 0.3253\nEpoch 38/100\n940/940 [==============================] - 1s 1ms/step - loss: 0.0239 - accuracy: 0.3548 - val_loss: 0.0128 - val_accuracy: 0.9729\nEpoch 39/100\n940/940 [==============================] - 1s 911us/step - loss: 0.0237 - accuracy: 0.3428 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 40/100\n940/940 [==============================] - 1s 808us/step - loss: 0.0237 - accuracy: 0.3370 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 41/100\n940/940 [==============================] - 1s 977us/step - loss: 0.0237 - accuracy: 0.3459 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 42/100\n940/940 [==============================] - 1s 1ms/step - loss: 0.0239 - accuracy: 0.3430 - val_loss: 0.0125 - val_accuracy: 0.3506\nEpoch 43/100\n940/940 [==============================] - 1s 981us/step - loss: 0.0240 - accuracy: 0.3333 - val_loss: 0.0128 - val_accuracy: 0.9729\nEpoch 44/100\n940/940 [==============================] - 1s 916us/step - loss: 0.0239 - accuracy: 0.3502 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 45/100\n940/940 [==============================] - 1s 947us/step - loss: 0.0238 - accuracy: 0.3416 - val_loss: 0.0135 - val_accuracy: 0.9729\nEpoch 46/100\n940/940 [==============================] - 1s 903us/step - loss: 0.0236 - accuracy: 0.3366 - val_loss: 0.0130 - val_accuracy: 0.9655\nEpoch 47/100\n940/940 [==============================] - 1s 1ms/step - loss: 0.0237 - accuracy: 0.3397 - val_loss: 0.0124 - val_accuracy: 0.3506\nEpoch 48/100\n940/940 [==============================] - 1s 897us/step - loss: 0.0237 - accuracy: 0.3475 - val_loss: 0.0128 - val_accuracy: 0.9729\nEpoch 
49/100\n940/940 [==============================] - 1s 946us/step - loss: 0.0237 - accuracy: 0.3521 - val_loss: 0.0134 - val_accuracy: 0.9729\nEpoch 50/100\n940/940 [==============================] - 1s 864us/step - loss: 0.0237 - accuracy: 0.3440 - val_loss: 0.0132 - val_accuracy: 0.9729\nEpoch 51/100\n940/940 [==============================] - 1s 861us/step - loss: 0.0237 - accuracy: 0.3392 - val_loss: 0.0127 - val_accuracy: 0.9729\nEpoch 52/100\n940/940 [==============================] - 1s 962us/step - loss: 0.0237 - accuracy: 0.3293 - val_loss: 0.0133 - val_accuracy: 0.9729\nEpoch 53/100\n940/940 [==============================] - 1s 925us/step - loss: 0.0237 - accuracy: 0.3411 - val_loss: 0.0126 - val_accuracy: 0.3253\nEpoch 54/100\n940/940 [==============================] - 1s 872us/step - loss: 0.0238 - accuracy: 0.3299 - val_loss: 0.0133 - val_accuracy: 0.9729\nEpoch 55/100\n940/940 [==============================] - 1s 901us/step - loss: 0.0238 - accuracy: 0.3378 - val_loss: 0.0123 - val_accuracy: 0.9729\nEpoch 56/100\n940/940 [==============================] - 1s 889us/step - loss: 0.0238 - accuracy: 0.3368 - val_loss: 0.0125 - val_accuracy: 0.9729\nEpoch 57/100\n940/940 [==============================] - 1s 925us/step - loss: 0.0237 - accuracy: 0.3481 - val_loss: 0.0127 - val_accuracy: 0.9729\nEpoch 58/100\n940/940 [==============================] - 1s 908us/step - loss: 0.0237 - accuracy: 0.3538 - val_loss: 0.0128 - val_accuracy: 0.9729\nEpoch 59/100\n940/940 [==============================] - 1s 889us/step - loss: 0.0237 - accuracy: 0.3518 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 60/100\n940/940 [==============================] - 1s 866us/step - loss: 0.0236 - accuracy: 0.3445 - val_loss: 0.0127 - val_accuracy: 0.9729\nEpoch 61/100\n940/940 [==============================] - 1s 943us/step - loss: 0.0238 - accuracy: 0.3334 - val_loss: 0.0126 - val_accuracy: 0.3253\nEpoch 62/100\n940/940 [==============================] - 1s 886us/step - loss: 0.0239 - accuracy: 0.3284 - val_loss: 0.0122 - val_accuracy: 0.9729\nEpoch 63/100\n940/940 [==============================] - 1s 886us/step - loss: 0.0239 - accuracy: 0.3429 - val_loss: 0.0125 - val_accuracy: 0.9729\nEpoch 64/100\n940/940 [==============================] - 1s 864us/step - loss: 0.0238 - accuracy: 0.3621 - val_loss: 0.0127 - val_accuracy: 0.9729\nEpoch 65/100\n940/940 [==============================] - 1s 870us/step - loss: 0.0237 - accuracy: 0.3586 - val_loss: 0.0125 - val_accuracy: 0.3253\nEpoch 66/100\n940/940 [==============================] - 1s 866us/step - loss: 0.0238 - accuracy: 0.3508 - val_loss: 0.0121 - val_accuracy: 0.9729\nEpoch 67/100\n940/940 [==============================] - 1s 862us/step - loss: 0.0238 - accuracy: 0.3416 - val_loss: 0.0124 - val_accuracy: 0.9729\nEpoch 68/100\n940/940 [==============================] - 1s 873us/step - loss: 0.0237 - accuracy: 0.3427 - val_loss: 0.0125 - val_accuracy: 0.9729\nEpoch 69/100\n940/940 [==============================] - 1s 992us/step - loss: 0.0239 - accuracy: 0.3360 - val_loss: 0.0127 - val_accuracy: 0.3253\nEpoch 70/100\n940/940 [==============================] - 1s 902us/step - loss: 0.0240 - accuracy: 0.3442 - val_loss: 0.0123 - val_accuracy: 0.3253\nEpoch 71/100\n940/940 [==============================] - 1s 789us/step - loss: 0.0239 - accuracy: 0.3506 - val_loss: 0.0127 - val_accuracy: 0.3253\nEpoch 72/100\n940/940 [==============================] - 1s 953us/step - loss: 0.0238 - accuracy: 0.3580 - val_loss: 0.0127 - val_accuracy: 0.3253\nEpoch 
73/100\n940/940 [==============================] - 1s 864us/step - loss: 0.0238 - accuracy: 0.3527 - val_loss: 0.0125 - val_accuracy: 0.9729\nEpoch 74/100\n940/940 [==============================] - 1s 774us/step - loss: 0.0238 - accuracy: 0.3585 - val_loss: 0.0121 - val_accuracy: 0.9729\nEpoch 75/100\n940/940 [==============================] - 1s 776us/step - loss: 0.0241 - accuracy: 0.3541 - val_loss: 0.0126 - val_accuracy: 0.3253\nEpoch 76/100\n940/940 [==============================] - 1s 779us/step - loss: 0.0238 - accuracy: 0.3599 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 77/100\n940/940 [==============================] - 1s 824us/step - loss: 0.0238 - accuracy: 0.3437 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 78/100\n940/940 [==============================] - 1s 793us/step - loss: 0.0239 - accuracy: 0.3472 - val_loss: 0.0123 - val_accuracy: 0.9729\nEpoch 79/100\n940/940 [==============================] - 1s 793us/step - loss: 0.0237 - accuracy: 0.3535 - val_loss: 0.0129 - val_accuracy: 0.3253\nEpoch 80/100\n940/940 [==============================] - 1s 803us/step - loss: 0.0241 - accuracy: 0.3369 - val_loss: 0.0124 - val_accuracy: 0.3253\nEpoch 81/100\n940/940 [==============================] - 1s 779us/step - loss: 0.0239 - accuracy: 0.3662 - val_loss: 0.0123 - val_accuracy: 0.9729\nEpoch 82/100\n940/940 [==============================] - 1s 776us/step - loss: 0.0238 - accuracy: 0.3507 - val_loss: 0.0128 - val_accuracy: 0.3253\nEpoch 83/100\n940/940 [==============================] - 1s 785us/step - loss: 0.0238 - accuracy: 0.3474 - val_loss: 0.0127 - val_accuracy: 0.3506\nEpoch 84/100\n940/940 [==============================] - 1s 838us/step - loss: 0.0239 - accuracy: 0.3553 - val_loss: 0.0126 - val_accuracy: 0.3253\nEpoch 85/100\n940/940 [==============================] - 1s 842us/step - loss: 0.0239 - accuracy: 0.3444 - val_loss: 0.0127 - val_accuracy: 0.9729\nEpoch 86/100\n940/940 [==============================] - 1s 975us/step - loss: 0.0239 - accuracy: 0.3565 - val_loss: 0.0129 - val_accuracy: 0.9729\nEpoch 87/100\n940/940 [==============================] - 1s 981us/step - loss: 0.0239 - accuracy: 0.3720 - val_loss: 0.0127 - val_accuracy: 0.9729\nEpoch 88/100\n940/940 [==============================] - 1s 863us/step - loss: 0.0238 - accuracy: 0.3538 - val_loss: 0.0123 - val_accuracy: 0.9729\nEpoch 89/100\n940/940 [==============================] - 1s 832us/step - loss: 0.0238 - accuracy: 0.3527 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 90/100\n940/940 [==============================] - 1s 825us/step - loss: 0.0238 - accuracy: 0.3583 - val_loss: 0.0127 - val_accuracy: 0.9729\nEpoch 91/100\n940/940 [==============================] - 1s 827us/step - loss: 0.0237 - accuracy: 0.3611 - val_loss: 0.0129 - val_accuracy: 0.3506\nEpoch 92/100\n940/940 [==============================] - 1s 830us/step - loss: 0.0238 - accuracy: 0.3685 - val_loss: 0.0125 - val_accuracy: 0.9729\nEpoch 93/100\n940/940 [==============================] - 1s 874us/step - loss: 0.0238 - accuracy: 0.3547 - val_loss: 0.0126 - val_accuracy: 0.9729\nEpoch 94/100\n940/940 [==============================] - 1s 826us/step - loss: 0.0239 - accuracy: 0.3558 - val_loss: 0.0128 - val_accuracy: 0.9729\nEpoch 95/100\n940/940 [==============================] - 1s 813us/step - loss: 0.0240 - accuracy: 0.3558 - val_loss: 0.0122 - val_accuracy: 0.9729\nEpoch 96/100\n940/940 [==============================] - 1s 840us/step - loss: 0.0239 - accuracy: 0.3636 - val_loss: 0.0121 - val_accuracy: 0.9729\nEpoch 
97/100\n940/940 [==============================] - 1s 808us/step - loss: 0.0238 - accuracy: 0.3544 - val_loss: 0.0126 - val_accuracy: 0.3253\nEpoch 98/100\n940/940 [==============================] - 1s 847us/step - loss: 0.0241 - accuracy: 0.3559 - val_loss: 0.0129 - val_accuracy: 0.3253\nEpoch 99/100\n940/940 [==============================] - 1s 907us/step - loss: 0.0239 - accuracy: 0.3650 - val_loss: 0.0130 - val_accuracy: 0.9729\nEpoch 100/100\n940/940 [==============================] - 1s 810us/step - loss: 0.0238 - accuracy: 0.3460 - val_loss: 0.0133 - val_accuracy: 0.9729\n"
],
[
"# dropout_AE.save('saved_model.pb')\nplt.plot(history.history['loss'], label='Training Loss')\nplt.plot(history.history['val_loss'], label='Validation Loss')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(history.history['accuracy'], label='Training Acc')\nplt.plot(history.history['val_accuracy'], label='Validation Acc')\nplt.legend()\nplt.show() ",
"_____no_output_____"
],
[
"cn7_train_pred = dropout_AE.predict(cn7_Y)\ncn7_train_loss = np.mean(np.square(cn7_train_pred - cn7_Y), axis=1)\n\nthreshold = np.mean(cn7_train_loss) + 5 * np.std(cn7_train_loss)\n\nprint(\"복원 오류 임계치: \", threshold)",
"복원 오류 임계치: 0.09380687052688988\n"
],
[
"cn7_predict_Y = dropout_AE.predict(cn7_Y)\ncn7_test_Y_mse = np.mean(np.square(cn7_predict_Y - cn7_Y), axis=1)\n\nplt.hist(cn7_test_Y_mse, bins=50)\nplt.xlabel(\"test MSE loss\")\nplt.ylabel(\"No of samples\")\nplt.show()\n\ncn7_test_Y_anomalies = cn7_test_Y_mse > threshold\nprint(\" 불량 개수: \", np.sum(cn7_test_Y_anomalies))",
"_____no_output_____"
],
[
"cn7_predict_N = dropout_AE.predict(cn7_N)\ncn7_test_N_mse = np.mean(np.square(cn7_predict_N - cn7_N), axis=1)\n\nplt.hist(cn7_test_N_mse, bins=50)\nplt.xlabel(\"test MSE loss\")\nplt.ylabel(\"No of samples\")\nplt.show()\n\ncn7_test_N_anomalies = cn7_test_N_mse > threshold\nprint(\" 불량 개수: \", np.sum(cn7_test_N_anomalies))",
"_____no_output_____"
],
[
"cn7_true = np.concatenate([np.zeros(len(cn7_test_Y_anomalies)), np.ones(len(cn7_test_N_anomalies))])",
"_____no_output_____"
],
[
"cn7_prediction = np.concatenate([cn7_test_Y_anomalies, cn7_test_N_anomalies])",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\nconfusion_matrix(cn7_prediction, cn7_true)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score\nprint(\"정확도: \", accuracy_score(cn7_true, cn7_prediction))\nprint(\"정밀도: \", precision_score(cn7_true, cn7_prediction))\nprint(\"재현율: \", recall_score(cn7_true, cn7_prediction))\nprint(\"F1: \", f1_score(cn7_true, cn7_prediction))",
"정확도: 0.9954705586311021\n정밀도: 0.6086956521739131\n재현율: 1.0\nF1: 0.7567567567567568\n"
],
[
"# unlabeled 데이터 predict\ncn7_predict_Y = dropout_AE.predict(cn7_X)\ncn7_test_Y_mse = np.mean(np.square(cn7_predict_Y - cn7_X), axis=1)\n\nplt.hist(cn7_test_Y_mse, bins=50)\nplt.xlabel(\"test MSE loss\")\nplt.ylabel(\"No of samples\")\nplt.show()\n\ncn7_test_Y_anomalies = cn7_test_Y_mse > threshold\nprint(\"불량 개수: \", np.sum(cn7_test_Y_anomalies))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf82c5c8a98b26f1b8ae93d4fac8d8bc005082d | 3,610 | ipynb | Jupyter Notebook | Business_Buzzword_Generator_STUDENTS.ipynb | Parshav-Shah/ISYS5002_portfolio | cb873369d70d8f5a90f129d0364eeb359092c4ef | [
"MIT"
] | null | null | null | Business_Buzzword_Generator_STUDENTS.ipynb | Parshav-Shah/ISYS5002_portfolio | cb873369d70d8f5a90f129d0364eeb359092c4ef | [
"MIT"
] | null | null | null | Business_Buzzword_Generator_STUDENTS.ipynb | Parshav-Shah/ISYS5002_portfolio | cb873369d70d8f5a90f129d0364eeb359092c4ef | [
"MIT"
] | null | null | null | 35.392157 | 297 | 0.561773 | [
[
[
"<a href=\"https://colab.research.google.com/github/Parshav-Shah/ISYS5002_portfolio/blob/main/Business_Buzzword_Generator_STUDENTS.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Business Buzzword Generator\n\nLets write a real-world business application using Python. Want to write a program to genrate business phrases. The program takes three lists of words, randomly picks one word from each list, combines the words into a phrase, and then prints out the phrase. Here is the psuedocode:\n\n importing the random module\n make three lists, one of buzzword, one of actions, and one of outcomes\n randomly choose one buzzword, action, and outcome from each list\n now build the phrase by \"adding\" the words together\n output the phrase\n \nA good use of pseudocode is each line becomes comments in the code. Each line of pseudo code has been pasted into a cell below. Try to implement each as python statement or statements.",
"_____no_output_____"
]
],
[
[
"# importing the random module\nimport random as rn\n\n# make three lists, one of buzzwords, one of actions, and one of outcomes\nverbs_list=['benchmark','brand','build','cloudify','communicate']\nadjectives_list=['accurate','adaptive','agile','alternative']\nnouns_list=['architectures','bandwidth','benefits','best practices']\n\n# randomly choose one buzzword, action, and outcome from each list\nverb= rn.choice(verbs_list)\nadjective= rn.choice(adjectives_list)\nnoun= rn.choice(nouns_list)\n\n# build the phrase by \"adding\" the words together\nphrase= verb+' '+adjective+' '+noun\n\n# output the phrase\nprint(phrase)",
"communicate adaptive bandwidth\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
]
] |
ecf82e7e442c68f911058e6f3c7fc6550042ce87 | 5,270 | ipynb | Jupyter Notebook | notebooks/AUP110_W9_Notebook.ipynb | htchu/AU110Programming | 6c22e3d202afb0ba90ef02378900270ab7ab657b | [
"MIT"
] | 1 | 2021-09-13T11:49:58.000Z | 2021-09-13T11:49:58.000Z | notebooks/AUP110_W9_Notebook.ipynb | htchu/AU110Programming | 6c22e3d202afb0ba90ef02378900270ab7ab657b | [
"MIT"
] | null | null | null | notebooks/AUP110_W9_Notebook.ipynb | htchu/AU110Programming | 6c22e3d202afb0ba90ef02378900270ab7ab657b | [
"MIT"
] | null | null | null | 18.362369 | 82 | 0.460531 | [
[
[
"# AU Fundamentals of Python Programming-W7",
"_____no_output_____"
],
[
"## Topic 1(主題1)-function open() and read",
"_____no_output_____"
],
[
"## 從score.txt輸入資料,印出來",
"_____no_output_____"
]
],
[
[
"fin=open('score.txt')\nfor line in fin:\n print(line.strip(), end=\",\")\nfin.close()",
"_____no_output_____"
]
],
[
[
"## 從score.txt輸入資料,算人數",
"_____no_output_____"
]
],
[
[
"count=0\nfin=open('score.txt')\nfor line in fin:\n count +=1\nfin.close()\nprint(\"The number is\",count)",
"_____no_output_____"
]
],
[
[
"## 計算score.txt檔案中及格人數",
"_____no_output_____"
]
],
[
[
"count=0\npassed = 0\nfin=open('score.txt')\nfor line in fin:\n count +=1\n score = int(line)\n if score >= 60:\n passed +=1\nfin.close()\nprint(\"The number of the passed is\",passed, \"in\", count, \"students\")",
"_____no_output_____"
]
],
[
[
"## write()",
"_____no_output_____"
]
],
[
[
"count=0\npassed = 0\nfin=open('score.txt')\nfout=open('passed.txt','w')\nfor line in fin:\n count +=1\n score = int(line)\n if score >= 60:\n passed +=1\n fout.write(line)\nfin.close()\nfout.close()\nprint(\"The number of the passed is\",passed, \"in\", count, \"students\")\n",
"_____no_output_____"
]
],
[
[
"##用with",
"_____no_output_____"
]
],
[
[
"#改用with來讀檔案\ncount = 0\ntotal = 0\nwith open('score.txt') as fin:\n for line in fin:\n count += 1\n total += int(line)\nprint(\"Average score=\", total/count)",
"_____no_output_____"
],
[
"count=0\npassed = 0\nwith open('score.txt') as fin, open('passed2.txt','w') as fout:\n for line in fin:\n count +=1\n score = int(line)\n if score >= 60:\n passed +=1\n fout.write(line)\nprint(\"The number of the passed is\",passed, \"in\", count, \"students\")",
"_____no_output_____"
]
],
[
[
"## Topic 2(主題2)-while-loop",
"_____no_output_____"
],
[
"## while",
"_____no_output_____"
]
],
[
[
"i = 1\nwhile i < 6:\n print(i)\n i += 1",
"_____no_output_____"
]
],
[
[
"## break-statement: stop the loop",
"_____no_output_____"
]
],
[
[
"i = 1\nwhile i < 6:\n print(i)\n if i == 3:\n break\n i += 1",
"_____no_output_____"
]
],
[
[
"continue-statement: stop the current iteration",
"_____no_output_____"
]
],
[
[
"i = 0\nwhile i < 6:\n i += 1\n if i == 3:\n continue\n print(i)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf82f18b8b071c56baaf6a3e806734ac54d61eb | 18,271 | ipynb | Jupyter Notebook | sessions/multi-modal-pe/Multimodal Fusion for PE Detection (Clean).ipynb | italati/AI-Deep-Learning-Lab-2021 | 1820a104b87bedf10099f1bf1bffbe77e0bf5039 | [
"MIT"
] | null | null | null | sessions/multi-modal-pe/Multimodal Fusion for PE Detection (Clean).ipynb | italati/AI-Deep-Learning-Lab-2021 | 1820a104b87bedf10099f1bf1bffbe77e0bf5039 | [
"MIT"
] | 1 | 2021-11-27T14:27:55.000Z | 2021-11-27T14:27:55.000Z | sessions/multi-modal-pe/Multimodal Fusion for PE Detection (Clean).ipynb | italati/AI-Deep-Learning-Lab-2021 | 1820a104b87bedf10099f1bf1bffbe77e0bf5039 | [
"MIT"
] | null | null | null | 29.140351 | 580 | 0.573149 | [
[
[
"# Multimodal Fusion for Pulmonary Embolism Classification\n\nIn this demonstration, we will build a multimodal fusion model (late fusion) that combines information from both CT scans and Electronic Medical Record (EMR) to automatically diagnose the presence of PE. \n\n### Motivation\n\nPulmonary Embolism (PE) is a serious medical condition that hospitalizes 300,000 people in the United States every year. The gold standard diagnostic modality for PE is Computed Tomography Pulmonary Angiography (CTPA) which is interpreted by radiologists. Studies have shown that prompt diagnosis and treatment can greatly reduce morbidity and mortality. Strategies to automate accurate interpretation and timely reporting of CTPA examinations may successfully triage urgent cases of PE to the immediate attention of physicians, improving time to diagnosis and treatment.\n\nRecent advancements in deep learning have led to a resurgence of medical imaging and Electronic Medical Record (EMR) models for a variety of applications, including clinical decision support, automated workflow triage, clinical prediction and more. However, very few models have been developed to integrate both clinical and imaging data, despite that in routine practice clinicians rely on EMR to provide context in medical imaging interpretation. \n\n### Data\nWe will use RadFusion, a large-scale multimodal pulmonary embolism detection dataset consisting of 1837 CT imaging studies (comprising 600,000+ 2D slices) for 1794 patients and their corresponding EHR summary data. \nhttps://stanfordmedicine.app.box.com/folder/144231260472?s=q6lm1iwauyspyuicq4rlz35bqsnrwle0\n\n### References\n- Huang, Shih-Cheng, et al. \"PENet—a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging.\" NPJ digital medicine 3.1 (2020): 1-9.\n- Huang, Shih-Cheng, et al. \"Multimodal fusion with deep neural networks for leveraging CT imaging and electronic health record: a case-study in pulmonary embolism detection.\" Scientific reports 10.1 (2020): 1-9.",
"_____no_output_____"
],
[
"## Fusion Strategies\n",
"_____no_output_____"
],
[
"## System Setup & Downloading the Data",
"_____no_output_____"
]
],
[
[
"!pip install numpy pandas scikit-learn matplotlib\n!gdown --id 1w0ocK3br8oqVwn6zK5qgtRaj9Ql37dtd # /content/Demographics.csv\n!gdown --id 1MEhVZ87J2IwFmkgxOi8WjdVKTdwOpDDY # /content/INP_MED.csv\n!gdown --id 1PRgFvQjqEUudeJ0FLR3DbtvqmI7t7sCT # /content/OUT_MED.csv\n!gdown --id 1EDZOYmWrvv6D3XaZrjVous95c9HdiBEx # /content/Vitals.csv\n!gdown --id 1Nlm1ZgibRv6kJBIJkQHkRh8oPqUpELnK # /content/ICD.csv\n!gdown --id 17Y9DJsolaRPyMkk_Xm3w-iCgSOxkQOyf # /content/LABS.csv\n!gdown --id 1dp_L_YxYgxUHVV1F50vIlNTX1m7FBSqW # /content/Vision.csv",
"_____no_output_____"
]
],
[
[
"After downloading the data, you should be able to find the following files in your directory: \n \n- **Demographics.csv**: one-hot encoded gender, race and smoking habits and the age as a numeric variable.\n- **INP_MED.csv**: 641 unique classes of drugs represented as both the frequency within the 12-month window and a binary label of whether the drug was prescribed to the patient. \n- **OUT_MED.csv**: similar to (INPT_MED) inpatient medications, but for out patients\n- **Vitals.csv**: including systolic and diastolic blood pressure, height, weight, body mass index (BMI), temperature, respiration rate, pulse oximetry (spO2) and heart rate.\n- **ICD.csv**: 141 diagnosis groups presented as binary presence/absence as well as a frequency.\n- **LABS.csv**: 22 lab tests represented as binary presence/absence as well as the latest value\n- **Vision.csv**: PE labels, PE type, Data split for PENet, PENet prediction probablity",
"_____no_output_____"
],
[
"## Explore Data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"# Patient Demographics\ndemo_df = pd.read_csv('/content/Demographics.csv')\nprint(demo_df.shape)\ndemo_df.head(5)",
"_____no_output_____"
],
[
"out_med_df = pd.read_csv('/content/OUT_MED.csv')\nprint(out_med_df.shape)\nout_med_df.head(5)",
"_____no_output_____"
],
[
"in_med_df = pd.read_csv('/content/INP_MED.csv')\nprint(in_med_df.shape)\nin_med_df.head(5)",
"_____no_output_____"
],
[
"icd_df = pd.read_csv('/content/ICD.csv')\nprint(icd_df.shape)\nicd_df.head(5)",
"_____no_output_____"
],
[
"lab_df = pd.read_csv('/content/LABS.csv')\nprint(lab_df.shape)\nlab_df.head(5)",
"_____no_output_____"
],
[
"vitals_df = pd.read_csv('/content/Vitals.csv')\nvitals_df.head(5)",
"_____no_output_____"
],
[
"vision_df = pd.read_csv('/content/Vision.csv')\nvision_df.head(5)",
"_____no_output_____"
]
],
[
[
"## Process Data",
"_____no_output_____"
]
],
[
[
"processed_emr_dfs = []\nfor df in [demo_df, out_med_df, in_med_df, icd_df, lab_df, vitals_df]:\n # remove zero variance featurs\n df = df.loc[:,df.apply(pd.Series.nunique) != 1]\n \n # set index \n df = df.set_index('idx')\n\n # normalize features\n df = df.apply(lambda x: (x - x.mean())/(x.std()))\n \n processed_emr_dfs.append(df)\n\nemr_df = pd.concat(processed_emr_dfs, axis=1)\nemr_df.head(5)",
"_____no_output_____"
],
[
"# Define columns\nEMR_FEATURE_COLS = emr_df.columns.tolist()\nPE_TYPE_COL = 'pe_type'\nSPLIT_COL = 'split'\nVISION_PRED_COL = 'pred'\nEMR_PRED_COL = 'emr_pred'\nFUSION_PRED_COL = 'late_fusion_pred'\nLABEL_COL = 'label'",
"_____no_output_____"
],
[
"# Join vision information with emr dataframe\nvision_df = vision_df.set_index('idx')\ndf = pd.concat([vision_df, emr_df], axis=1)",
"_____no_output_____"
],
[
"# Create data splits\ndf_dev = df[(df.split == 'train') | (df.split == 'val')] # for gridsearch CV\ndf_train = df[df.split == 'train']\ndf_val = df[df.split == 'val']\ndf_test = df[df.split == 'test']",
"_____no_output_____"
]
],
[
[
"## Train EMR Model\n",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import GridSearchCV\n\n# Uncomment and run grid search if time permits\n\"\"\"\n# define model\nclf = LogisticRegression(\n penalty='elasticnet', solver='saga', random_state=0\n)\n\n# define grid search\nparam_grid = {\n \"C\": [0.01, 0.1, 1.0, 100], \n \"class_weight\": ['balanced'],\n \"max_iter\": [1000],\n \"l1_ratio\": [0.01, 0.25, 0.5, 0.75, 0.99]\n}\ngsc = GridSearchCV(\n estimator=clf,\n param_grid=param_grid,\n scoring='roc_auc',\n n_jobs=-1,\n verbose=10\n)\n\n# run grid search\ngsc.fit(df_dev[EMR_FEATURE_COLS], df_dev[LABEL_COL])\nprint(f\"Best parameters: {gsc.best_params_}\")\nclf = gsc.best_estimator_\n\"\"\"\n\nclf = LogisticRegression(\n penalty='elasticnet', solver='saga', random_state=0,\n C= 0.1, class_weight='balanced', l1_ratio= 0.99, max_iter= 1000\n)\nclf.fit(df_train[EMR_FEATURE_COLS], df_train[LABEL_COL])",
"_____no_output_____"
]
],
[
[
"## Test EMR Model",
"_____no_output_____"
]
],
[
[
"# test with best model\nemr_prob = clf.predict_proba(df_test[EMR_FEATURE_COLS])\n\n# take probability of positive class \nemr_prob = [p[1] for p in emr_prob]\n\ndf_test = df_test.assign(emr_pred = emr_prob)",
"_____no_output_____"
]
],
[
[
"## Late Fusion",
"_____no_output_____"
]
],
[
[
"# Late fusion by taking the average prediction probability from vision model and emr model\nlate_fusion_pred = np.mean(\n [df_test[EMR_PRED_COL], df_test[VISION_PRED_COL]], \n axis=0\n)\ndf_test = df_test.assign(late_fusion_pred = late_fusion_pred)",
"_____no_output_____"
]
],
[
[
"## Evaluate Performance",
"_____no_output_____"
]
],
[
[
"from sklearn import metrics\nimport matplotlib.pyplot as plt\n\nplt.style.use('ggplot')\nplt.figure(figsize=(20, 20))\nlw = 2\n\ndef plot_auc(df, label):\n # PENet performance\n fpr_v, tpr_v, _ = metrics.roc_curve(\n df[LABEL_COL], \n df[VISION_PRED_COL])\n roc_auc_v = metrics.auc(fpr_v, tpr_v)\n plt.plot(\n fpr_v, \n tpr_v, \n color='darkorange',\n lw=lw, \n label='PENet ROC curve (area = %0.2f)' % roc_auc_v)\n\n # EMR model performance\n fpr_emr, tpr_emr, _ = metrics.roc_curve(\n df[LABEL_COL], \n df[EMR_PRED_COL])\n roc_auc_emr = metrics.auc(fpr_emr, tpr_emr)\n plt.plot(\n fpr_emr, \n tpr_emr,\n lw=lw, \n label='EMR Model ROC curve (area = %0.2f)' % roc_auc_emr)\n\n # Fusion model performance\n fpr_fusion, tpr_fusion, _ = metrics.roc_curve(\n df[LABEL_COL], \n df[FUSION_PRED_COL])\n roc_auc_fusion = metrics.auc(fpr_fusion, tpr_fusion)\n plt.plot(\n fpr_fusion, \n tpr_fusion,\n lw=lw, \n label='Fusion Model ROC curve (area = %0.2f)' % roc_auc_fusion)\n\n plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')\n plt.xlim([0.0, 0.95])\n plt.ylim([0.0, 1.05])\n plt.axes().set_aspect('equal', 'datalim')\n\n plt.xlabel('False Positive Rate')\n plt.ylabel('True Positive Rate')\n plt.title(f'Receiver operating characteristic ({label})')\n plt.legend(loc=\"lower right\")\n\n plt.show()",
"_____no_output_____"
],
[
"# Performance for all cases\nplot_auc(df_test, 'All Cases')",
"_____no_output_____"
],
[
"# Performance for non-subsegmental cases\ndf_test_no_subseg = df_test[df_test[PE_TYPE_COL] != 'subsegmental']\nplot_auc(df_test_no_subseg, 'No Subsegmental')",
"_____no_output_____"
],
[
"# Visualize histogram of Predicted Probs\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import figure\n\n# style\nplt.clf()\nplt.style.use('ggplot')\nmatplotlib.rc('xtick', labelsize=15) \nmatplotlib.rc('ytick', labelsize=15) \nf, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(21,6))\nbins = np.linspace(0, 1, 30)\n\n# seperate cases into positive and negative\npositive_cases = df_test_no_subseg[df_test_no_subseg[LABEL_COL] == 1]\nnegative_cases = df_test_no_subseg[df_test_no_subseg[LABEL_COL] == 0]\n\n# PENet\nax1.hist(\n [positive_cases[VISION_PRED_COL], negative_cases[VISION_PRED_COL]], \n bins, \n label=['positive','negative'], \n width=0.01)\n\n# EMR\nax2.hist(\n [positive_cases[EMR_PRED_COL], negative_cases[EMR_PRED_COL]], \n bins, \n label=['positive', 'negative'], \n width=0.01)\n\n# Fusion\nax3.hist(\n [positive_cases[FUSION_PRED_COL], negative_cases[FUSION_PRED_COL]], \n bins, \n label=['positive','negative'], \n width=0.01)\n\nf.tight_layout(pad=0.5)\nplt.legend(loc='upper right')\nax2.set_xlabel(\"Predicted Probabilities\", fontsize = 25)\nax1.set_ylabel(\"Count\", fontsize = 25)\nax1.set_title('Vision Only', fontsize = 25)\nax2.set_title('EMR Only', fontsize = 25)\nax3.set_title('Fusion', fontsize = 25)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecf83963f9343975cc3469d2728aba2f34647531 | 28,271 | ipynb | Jupyter Notebook | zufall/demo/ipynb/d_abi_bayern_B2.ipynb | HBOMAT/AglaUndZufall | 3976fecf024a5e4e771d37a6b8056ca4f7eb0da1 | [
"Apache-2.0"
] | null | null | null | zufall/demo/ipynb/d_abi_bayern_B2.ipynb | HBOMAT/AglaUndZufall | 3976fecf024a5e4e771d37a6b8056ca4f7eb0da1 | [
"Apache-2.0"
] | null | null | null | zufall/demo/ipynb/d_abi_bayern_B2.ipynb | HBOMAT/AglaUndZufall | 3976fecf024a5e4e771d37a6b8056ca4f7eb0da1 | [
"Apache-2.0"
] | null | null | null | 40.502865 | 11,498 | 0.689293 | [
[
[
"<br>\n# Einblick in die Arbeit mit <i>zufall</i>\n\nvon Holger Böttcher - [email protected]\n<br><br>\nDiese Arbeit steht unter der freien Lizenz [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.de) \n<br><br>\n### Abitur Bayern 2019\n \n### Stochastik Teil B, Aufgabengruppe 2\n<br>\nQuelle: [serlo.org](https://de.serlo.org/mathe/deutschland/bayern/gymnasium/abiturpr%C3%BCfungen-l%C3%B6sung/mathematik-abitur-bayern-2019/stochastik,-teil-b,-aufgabengruppe-2) ",
"_____no_output_____"
],
[
"<br>\n<b>1.</b> Jeder sechste Besucher eines Volksfestes trägt ein Lebkuchenherz um den Hals. <br> \nWährend der Dauer des Volksfests wird 25-mal ein Besucher zufällig ausgewählt. <br>\nDie Zufallsgröße $X$ beschreibt die Anzahl der ausgewählten Besucher, die ein <br>\nLebkuchenherz tragen.\n\n$\\quad$<b>a)</b> Bestimmen Sie die Wahrscheinlichkeit dafür, dass unter den ausgewählten<br>\n$\\quad\\:\\:\\:\\:$Besuchern höchstens ein Besucher ein Lebkuchenherz trägt.\n\n$\\quad$<b>b)</b> Beschreiben Sie im Sachzusammenhang ein Ereignis, dessen <br>\n$\\quad\\:\\:\\:\\:$Wahrscheinlichkeit mit dem Term $\\sum_{i=5}^8 B(25;16;\\,i)$ \nberechnet werden kann.\n\n$\\quad$<b>c)</b> Bestimmen Sie die Wahrscheinlichkeit dafür, dass der Wert der Zufallsgröße<br>\n$\\quad\\:\\:\\:\\:$$X$ höchstens um eine Standardabweichung vom Erwartungswert der<br>\n$\\quad\\:\\:\\:\\:$Zufallsgröße abweicht.\n\n<b>2.</b> Bei einer Losbude wird damit geworben, dass jedes Los gewinnt. Die Lose und die <br>\nzugehörigen Sachpreise können drei Kategorien zugeordnet werden, die mit \"Donau\", <br>\n\"Main\" und \"Lech\" bezeichnet werden. Im Lostopf befinden sich viermal so viele Lose <br>\nder Kategorie \"Main\" wie Lose der Kategorie \"Donau\". Ein Los kostet 1 Euro. Die <br>\nInhaberin der Losbude bezahlt im Einkauf für einen Sachpreis in der Kategorie <br>\n\"Donau\" 8 Euro, in der Kategorie \"Main\" 2 Euro und in der Kategorie \"Lech\" 20 Cent. <br>\nErmitteln Sie, wie groß der Anteil der Lose Kategorie \"Donau\" sein muss, wenn die <br>\nInhaberin im Mittel einen Gewinn von 35 Cent pro Los erzielen will.\n\n<b>3.</b> Die Inhaberin der Losbude beschäftigt einen Angestellten, der Besucher des <br>\nVolksfests anspricht, um diese zum Kauf von Losen zu animieren. Sie ist mit der <br>\nErfolgsquote des Angestellten unzufrieden.\n\n$\\quad$<b>a)</b> Die Inhaberin möchte dem Angestellten das Gehalt kürzen, wenn weniger als<br>\n$\\quad\\:\\:\\:\\:$15% der angesprochenen Besucher Lose kaufen. Die Entscheidung über die <br>\n$\\quad\\:\\:\\:\\:$Gehaltskürzung soll mithilfe eines Signifikanztests auf der Grundlage von 100 <br>\n$\\quad\\:\\:\\:\\:$angesprochenen Besuchern getroffen werden. Dabei soll möglichst vermieden <br>\n$\\quad\\:\\:\\:\\:$werden, dem Angestellten das Gehalt zu Unrecht zu kürzen. Geben Sie die <br>\n$\\quad\\:\\:\\:\\:$entsprechende Nullhypothese an und ermitteln Sie die zugehörige <br>\n$\\quad\\:\\:\\:\\:$Entscheidungsregel auf dem Signifikanzniveau von 10%.\n\n$\\quad$<b>b)</b> Der Angestellte konnte bei der Durchführung des Tests zehn von 100 <br>\n$\\quad\\:\\:\\:\\:$erwachsenen Besuchern dazu animieren, Lose zu kaufen. Er behauptet, dass er <br>\n$\\quad\\:\\:\\:\\:$zumindest bei Personen mit Kind eine Erfolgsquote größer als 10% habe. Unter <br>\n$\\quad\\:\\:\\:\\:$den 100 angesprochenen Besuchern befanden sich 40 Personen mit Kind. Von <br>\n$\\quad\\:\\:\\:\\:$den Personen ohne Kind zogen 54 kein Los. Überprüfen Sie, ob das Ergebnis <br>\n$\\quad\\:\\:\\:\\:$der Stichprobe die Behauptung des Angestellten stützt.\n<br><br>",
"_____no_output_____"
]
],
[
[
"%run zufall/start",
"_____no_output_____"
]
],
[
[
"<br>\n### Zu 1.",
"_____no_output_____"
],
[
"Es liegt eine Bernoullikette vor, $X$ ist binomialverteilt mit $n=25$ und der <br>\nTrefferwahrscheinlichkeit $\\frac{1}{6}$",
"_____no_output_____"
]
],
[
[
"bk = BK(25, 1/6) # BK - BernoulliKette",
"Erzeugung eines BernoulliKette-BinomialVerteilung-Objektes\n"
]
],
[
[
"### a)",
"_____no_output_____"
]
],
[
[
"P = bk.P # temporäre Umbenennng der P-Methode von bk",
"_____no_output_____"
],
[
"P(X <= 1)",
"_____no_output_____"
],
[
" P(X <= 1, p=ja) # in Prozent",
"_____no_output_____"
]
],
[
[
"### b)",
"_____no_output_____"
],
[
"Mit der Formel wird die Wahrscheinlichkeit dafür berechnet, dass $X$ Werte aus dem <br>\nIntervall $[5,\\;8]$ annimmt <br>\n\n(\"mindestens 5 und höchstens 8 tragen ein Lebkuchenherz\")",
"_____no_output_____"
]
],
[
[
"P('5 <= X <= 8', g=ja)",
" \n"
],
[
"P('5 <= X <= 8'), P('5 <= X <= 8').n()",
"_____no_output_____"
]
],
[
[
"### c)",
"_____no_output_____"
]
],
[
[
"e = bk.erw; si = bk.sigma\ne, e.n(3)",
"_____no_output_____"
],
[
"si, si.n(3)",
"_____no_output_____"
],
[
"P('-1.86 + 25/6 < X < 1.86 + 25/6', p=ja)",
"_____no_output_____"
]
],
[
[
"<br>\n### Zu 2.",
"_____no_output_____"
],
[
"Für die Zufallsgröße Gewinn = \"Reingewinn für ein Los\" (in Euro) ergibt sich mit <br>\n$p$ = Wahrscheinlichkeit dafür, dass ein Los \"Donau\" gezogen wird, die <br>\nWahrscheinlichkeitsverteilung\n\n$\\{ \\text{Donau}: p, \\text{Main}: 4p, \\text{Lech}: 1\\,-\\,5p \\}$\n\nReingewinn ist der Lospreis minus dem Preis für den Sachpreis (für jedes Los); <br>das ergibt die Tabelle\n\n$\\{ \\text{Donau}: -7, \\text{Main}: -1, \\text{Lech}: 0.8 \\}$\n\nNach der Formel für den Erwartungswert muss die Summe der Produkte der <br>\nentsprechenden Elemente der beiden Tabellen gebildet und dem vorgegebenen <br>\nWert von $0.35$ gleichgesetzt werden\n\nZur Lösung ist die entstehende Gleichung so umzuformen, dass ihre rechte Seite <br>\ngleich Null ist; die linke Seite wird dann benutzt",
"_____no_output_____"
]
],
[
[
"gl = -7*p -4*p +0.8*(1-5*p) - 0.35\ngl",
"_____no_output_____"
],
[
"löse(gl)",
"_____no_output_____"
]
],
[
[
"Als Lösung ergibt sich $p = 3\\%$",
"_____no_output_____"
],
[
"<br>\n### Zu 3.",
"_____no_output_____"
],
[
"### a)",
"_____no_output_____"
],
[
"Die Nullhypothese ergibt sich zu $H_0: p \\ge 0.15$, der Test ist",
"_____no_output_____"
]
],
[
[
"test = STP(0.15, 0.1, 'links', 100, 'B')",
"_____no_output_____"
],
[
"test.regel",
"\n"
]
],
[
[
"Die Entscheidungsregel für die Inhaberin ist\n\nWenn weniger als 10 angesprochene Besucher ein Los kaufen, wird sie das Gehalt <br>kürzen, sonst wird sie das Gehalt nicht kürzen",
"_____no_output_____"
],
[
"### b)",
"_____no_output_____"
],
[
"Die Erfolgsquote des Angestellten bei den 100 angesprochenen erwachsenen <br>\nBesuchern ist 10% (er hat an 100 Besucher 10 Lose verkauft).\n\n40 Personen von allen 100 davon haben ein Kind. Damit haben 60 kein Kind.<br>\nVon diesen zogen 54 kein Los. Also 6 ein Los.\n\nDie verbleibenden 4 Lose wurden also von den 40 Persoen mit Kind gekauft.<br>\nDamit ist der Anteil der von Besuchern mit Kind gekauften Lose ebenfalls (nur) 10%.\n\nErgebnis: \n\nDas Ergebnis der Stichprobe stützt die Behauptung des Angestellten nicht.\n<br><br>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
ecf84074350fab3f85d83170eebeed49f0c31c2f | 2,286 | ipynb | Jupyter Notebook | Task3/ABHISHEK_KUMAR TASK3/Task3.ipynb | Muskan346/ML | e4d1d0ba2e8afff26de53f6ea9c596611af04111 | [
"MIT"
] | 1 | 2020-10-15T03:51:05.000Z | 2020-10-15T03:51:05.000Z | Task3/ABHISHEK_KUMAR TASK3/Task3.ipynb | Muskan346/ML | e4d1d0ba2e8afff26de53f6ea9c596611af04111 | [
"MIT"
] | 2 | 2020-10-15T03:43:46.000Z | 2020-10-15T05:02:43.000Z | Task3/ABHISHEK_KUMAR TASK3/Task3.ipynb | Muskan346/ML | e4d1d0ba2e8afff26de53f6ea9c596611af04111 | [
"MIT"
] | 9 | 2020-10-09T06:34:13.000Z | 2020-10-15T16:46:54.000Z | 25.4 | 117 | 0.510936 | [
[
[
"#importing libraries cv2,numpy\nimport cv2 \nimport numpy as np",
"_____no_output_____"
],
[
"catch = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')",
"_____no_output_____"
],
[
"video=cv2.VideoCapture(0) # 0 (argument of VideoCapture) is the camera number if we are using webcam\n#incase of video file we use video=cv2.VideoCapture(\"name_file.mp4\")",
"_____no_output_____"
],
[
"while(video.isOpened()):\n ret,frame=video.read() #reading of video\n faces = catch.detectMultiScale(frame, 1.1, 4) \n for (x, y, l, h) in faces:\n cv2.rectangle(frame, (x, y), (x+l, y+h), (255, 0, 0), 2) #framing of face in rectangle\n cv2.imshow('frame', frame) #imshow used for showing the image\n k = cv2.waitKey(30) & 0xff #click escape key for closing face detection\n if k==27:\n break",
"_____no_output_____"
],
[
"video.release() #releasing the video from memory\ncv2.destroyAllWindows() #close all windows that are opened through upper instruction.",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecf85328e62d3508c7f99b7a29584822fcc903a6 | 8,829 | ipynb | Jupyter Notebook | CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch09-word-play.ipynb | snucsne/CSNE-Course-Source-Code | 2020c1245698096a964888a9a3a573a12a6a0861 | [
"MIT"
] | 53 | 2022-01-25T15:00:16.000Z | 2022-03-24T15:27:08.000Z | CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch09-word-play.ipynb | snucsne/CSNE-Course-Source-Code | 2020c1245698096a964888a9a3a573a12a6a0861 | [
"MIT"
] | null | null | null | CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch09-word-play.ipynb | snucsne/CSNE-Course-Source-Code | 2020c1245698096a964888a9a3a573a12a6a0861 | [
"MIT"
] | 11 | 2022-01-25T15:21:01.000Z | 2022-02-23T10:29:01.000Z | 33.698473 | 266 | 0.56405 | [
[
[
"# Chapter 9: Word play\n***\n__Contents__\n- [Reading word lists](#Reading-word-lists)\n- [Search](#Search)\n- [Looping with indices](#Looping-with-indices)\n- [Debugging](#Debugging)\n- [Exercises](#Exercises)\n***\n\n_This notebook is based on \"Think Python, 2Ed\" by Allen B. Downey_ <br>\n[https://greenteapress.com/wp/think-python-2e/](https://greenteapress.com/wp/think-python-2e/)\n***",
"_____no_output_____"
],
[
"## Reading word lists\n- The built-in function ``open`` opens a file (specified as the argument) and returns a __file object__",
"_____no_output_____"
]
],
[
[
"input_file = open( 'data/short-words.txt' )\nprint( input_file )",
"<_io.TextIOWrapper name='data/short-words.txt' mode='r' encoding='UTF-8'>\n"
]
],
[
[
"- The book says ``fin`` is an acceptable name, but I opt for a more descriptive name\n- There are a number of methods for reading and writing files, including:\n - ``read( size )`` Reads ``size`` bytes of data. If ``size`` is omitted or negative, the entire file is readn and return. Returns an empty string if the end of the file (``EOF``) is reached.\n - ``readline()`` Reads a single line from the file\n - ``write( a_string )`` Writes a string to the file\n - ``close()`` Closes the file object and frees up any system resources\n- You can also use a ``for`` loop to read each line of the file",
"_____no_output_____"
]
],
[
[
"for line in input_file:\n word = line.strip()\n print( word )",
"abroad\nbattlefield\nchapter\ndeliver\nglockenspiel\ninstitutional\n"
]
],
[
[
"- The ``strip`` method removes whitespace at the beginning and end of a string",
"_____no_output_____"
],
[
"## Search\n- Most of the exercises in this chapter have something in common\n- They all involve searching a string for specific characters",
"_____no_output_____"
]
],
[
[
"def has_no_e( word ):\n result = True\n for letter in word:\n if( 'e' == letter ):\n result = False\n return result\n\ninput_file = open( 'data/short-words.txt' )\nfor line in input_file:\n word = line.strip()\n if( has_no_e( word ) ):\n print( 'No `e`: ', word )",
"No `e`: abroad\nNo `e`: institutional\n"
]
],
[
[
"- The ``for`` loop traverses each letter in the word looking for an ``e``\n- In fact, if you paid very good attention, you will see that the ``uses_all`` and ``uses_only`` functions in the book are the same\n- In computer science, we frequently encounter problems that are essentially the same as ones we have already solved, but are just worded differently\n- When you find one (called __problem recognition__), you can apply a previously developed solution\n- How much work you need to do to apply it is dependent on how general your solution is\n- This is an essential skill for problem-solving in general and not just programming",
"_____no_output_____"
],
[
"## Looping with indices\n- The previous code didn't have a need to use the indices of characters so the simple ``for ... in`` loop was used\n- There are a number of ways to traverse a string while maintaining a current index\n 1. Use a ``for`` loop across the range of the length of the string\n 2. Use recursion\n 3. Use a ``while`` loop and maintain the current index\n- I recommend the first option as it lets the ``for`` loop maintain the index\n- Recursion is more complex than necessary for this problem\n- A ``while`` loop can be used, but isn't as well suited since we know exactly how many times we need to run through the loop\n- Examples of all three options are below",
"_____no_output_____"
]
],
[
[
"fruit = 'banana'\n\n# For loop\nfor i in range( len( fruit ) ):\n print( 'For: [',i,']=[',fruit[i],']' )\n\n# Recursive function\ndef recurse_through_string( word, i ):\n print( 'Recursive: [',i,']=[',fruit[i],']' )\n if( (i + 1) < len( word ) ):\n recurse_through_string( word, i + 1 )\n\nrecurse_through_string( fruit, 0 )\n \n# While loop\ni = 0\nwhile( i < len( fruit ) ):\n print( 'While: [',i,']=[',fruit[i],']' )\n i = i + 1",
"For: [ 0 ]=[ b ]\nFor: [ 1 ]=[ a ]\nFor: [ 2 ]=[ n ]\nFor: [ 3 ]=[ a ]\nFor: [ 4 ]=[ n ]\nFor: [ 5 ]=[ a ]\nRecursive: [ 0 ]=[ b ]\nRecursive: [ 1 ]=[ a ]\nRecursive: [ 2 ]=[ n ]\nRecursive: [ 3 ]=[ a ]\nRecursive: [ 4 ]=[ n ]\nRecursive: [ 5 ]=[ a ]\nWhile: [ 0 ]=[ b ]\nWhile: [ 1 ]=[ a ]\nWhile: [ 2 ]=[ n ]\nWhile: [ 3 ]=[ a ]\nWhile: [ 4 ]=[ n ]\nWhile: [ 5 ]=[ a ]\n"
]
],
[
[
"## Debugging\n- Testing is _hard_\n- The programs discussed in this chapter are relatively easy to test since you can check the results by hand\n- There are ways to make testing easier and more effective\n- One is to ensure you have different variations of a test\n- For example, for the words with an ``e`` function, test using words that have an ``e`` at the beginning, middle and end. Test long and short words (including the empty string).\n- Often you will come across __special cases__ (like the empty string) that can throw your program off if you don't have a robust solution\n- Another option is finding large sets of data (like the words list file) against which you can test your program\n- However, if your program requires you to manually inspect the tests for correctness, you are always at risk of missing something\n- The best option is automated testing\n- For example, wrapping your tests in conditionals that only print out if the test fails is a good start\n- In later courses, I will discuss libraries that make automated testing easier\n- Remember that although it feels like more work to write tests, it saves quite a bit of time in the long run",
"_____no_output_____"
],
[
"## Exercises\n- Write a program that reads ``words.txt`` and prints only the words with more than 20 characters (not counting whitespace). _(Ex. 9.1 on pg. 84)_\n- Generalize the ``has_no_e`` function to a function called ``avoids`` that takes a word and a string of forbidden letters. It should return ``True`` if the word does not contain any of the forbidden letters and ``False`` if it does. _(Ex. 9.3 on pg. 84)_\n- Write a function called ``uses_only`` that takes a word and a string of letters, and returns ``True`` if the word contains only letters in the list. _(Ex. 9.4 on pg. 84)_",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecf85a71a10e38bb86b334ff315dd6e948ad8de8 | 907 | ipynb | Jupyter Notebook | template.ipynb | re114/re114.github.io | 0230f666d823618ccb04b429490c4189391205f9 | [
"MIT"
] | null | null | null | template.ipynb | re114/re114.github.io | 0230f666d823618ccb04b429490c4189391205f9 | [
"MIT"
] | null | null | null | template.ipynb | re114/re114.github.io | 0230f666d823618ccb04b429490c4189391205f9 | [
"MIT"
] | null | null | null | 23.25641 | 226 | 0.504961 | [
[
[
"<a href=\"https://colab.research.google.com/github/re114/re114.github.io/blob/main/template.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
ecf87a0d82684494fb696e86a86cceec8a96dc2e | 109,296 | ipynb | Jupyter Notebook | Deploy_PrototypingTFServingForSequenceClassification_Hugging_Face.ipynb | daquarti/AI | a5f8030299e9d394986ed16de5b0e12e54527f3f | [
"MIT"
] | null | null | null | Deploy_PrototypingTFServingForSequenceClassification_Hugging_Face.ipynb | daquarti/AI | a5f8030299e9d394986ed16de5b0e12e54527f3f | [
"MIT"
] | null | null | null | Deploy_PrototypingTFServingForSequenceClassification_Hugging_Face.ipynb | daquarti/AI | a5f8030299e9d394986ed16de5b0e12e54527f3f | [
"MIT"
] | null | null | null | 65.880651 | 35,950 | 0.669576 | [
[
[
"<a href=\"https://colab.research.google.com/github/daquarti/AI/blob/main/Deploy_PrototypingTFServingForSequenceClassification_Hugging_Face.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"!pip install transformers\n!pip install requests",
"Collecting transformers\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/9c/34/fb092588df61bf33f113ade030d1cbe74fb73a0353648f8dd938a223dce7/transformers-3.5.0-py3-none-any.whl (1.3MB)\n\u001b[K |████████████████████████████████| 1.3MB 14.4MB/s \n\u001b[?25hCollecting sentencepiece==0.1.91\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)\n\u001b[K |████████████████████████████████| 1.1MB 48.4MB/s \n\u001b[?25hRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: protobuf in /usr/local/lib/python3.6/dist-packages (from transformers) (3.12.4)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)\nCollecting sacremoses\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz (883kB)\n\u001b[K |████████████████████████████████| 890kB 48.5MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)\nCollecting tokenizers==0.9.3\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/4c/34/b39eb9994bc3c999270b69c9eea40ecc6f0e97991dba28282b9fd32d44ee/tokenizers-0.9.3-cp36-cp36m-manylinux1_x86_64.whl (2.9MB)\n\u001b[K |████████████████████████████████| 2.9MB 46.0MB/s \n\u001b[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.4)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.18.5)\nRequirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)\nRequirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.7)\nRequirement already satisfied: six>=1.9 in /usr/local/lib/python3.6/dist-packages (from protobuf->transformers) (1.15.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->transformers) (50.3.2)\nRequirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (0.17.0)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.6.20)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)\nBuilding wheels for collected packages: sacremoses\n Building wheel for sacremoses (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893257 sha256=3bd605f4fd52066ddfc4aed937d778ebefcfa5bdc9dffced8bf3a6d410459c19\n Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45\nSuccessfully built sacremoses\nInstalling collected packages: sentencepiece, sacremoses, tokenizers, transformers\nSuccessfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.9.3 transformers-3.5.0\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (2.23.0)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests) (2020.6.20)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests) (3.0.4)\n"
],
[
"",
"_____no_output_____"
],
[
"# Load the Drive helper and mount\nfrom google.colab import drive\n\n# This will prompt for authorization.menos de 48hs de evolución\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"path = \"/content/drive/My Drive/Emergencias/Consultas/\"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"PRETRAINED_MODEL = path + \"destil_v4\"",
"_____no_output_____"
],
[
"saved_model_path = \"./tfx_saved_model/distilbert-sst2-english/3\"",
"_____no_output_____"
],
[
"saved_model_path",
"_____no_output_____"
],
[
"import tensorflow as tf\nfrom transformers import *\n\nclass WrappedModel(tf.Module):\n\tdef __init__(self):\n\t\tsuper(WrappedModel, self).__init__()\n\t\tconfig = AutoConfig.from_pretrained(PRETRAINED_MODEL)\n\t\tself.model = TFAutoModelForSequenceClassification.from_pretrained(PRETRAINED_MODEL, config=config)\n\[email protected]\n\tdef __call__(self, x):\n\t\treturn self.model(x)\n\nmodel = WrappedModel()\n\n# call = model.__call__.get_concrete_function(tf.TensorSpec([None, None], tf.int32, name='input_ids'))\ncall = model.__call__.get_concrete_function((tf.TensorSpec([None, None], tf.int32, \n name='input_ids'), tf.TensorSpec([None, None], tf.int32, name='attention_mask')))\ntf.saved_model.save(model, saved_model_path, signatures=call, )",
"All model checkpoint layers were used when initializing TFDistilBertForSequenceClassification.\n\nAll the layers of TFDistilBertForSequenceClassification were initialized from the model checkpoint at /content/drive/My Drive/Emergencias/Consultas/destil_v4.\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertForSequenceClassification for predictions without further training.\n"
],
[
"!saved_model_cli show --dir ./tfx_saved_model/distilbert-sst2-english/3 --tag_set serve --signature_def serving_default",
"The given SavedModel SignatureDef contains the following input(s):\n inputs['attention_mask'] tensor_info:\n dtype: DT_INT32\n shape: (-1, -1)\n name: serving_default_attention_mask:0\n inputs['input_ids'] tensor_info:\n dtype: DT_INT32\n shape: (-1, -1)\n name: serving_default_input_ids:0\nThe given SavedModel SignatureDef contains the following output(s):\n outputs['output_0'] tensor_info:\n dtype: DT_FLOAT\n shape: (-1, 95)\n name: StatefulPartitionedCall:0\nMethod name is: tensorflow/serving/predict\n"
],
[
"!saved_model_cli show --dir ./tfx_saved_model/distilbert-sst2-english/3 --all",
"\nMetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:\n\nsignature_def['__saved_model_init_op']:\n The given SavedModel SignatureDef contains the following input(s):\n The given SavedModel SignatureDef contains the following output(s):\n outputs['__saved_model_init_op'] tensor_info:\n dtype: DT_INVALID\n shape: unknown_rank\n name: NoOp\n Method name is: \n\nsignature_def['serving_default']:\n The given SavedModel SignatureDef contains the following input(s):\n inputs['attention_mask'] tensor_info:\n dtype: DT_INT32\n shape: (-1, -1)\n name: serving_default_attention_mask:0\n inputs['input_ids'] tensor_info:\n dtype: DT_INT32\n shape: (-1, -1)\n name: serving_default_input_ids:0\n The given SavedModel SignatureDef contains the following output(s):\n outputs['output_0'] tensor_info:\n dtype: DT_FLOAT\n shape: (-1, 95)\n name: StatefulPartitionedCall:0\n Method name is: tensorflow/serving/predict\nWARNING: Logging before flag parsing goes to stderr.\nW1113 15:24:40.903388 140473372800896 deprecation.py:506] From /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling __init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\n\nDefined Functions:\n Function Name: '__call__'\n Option #1\n Callable with:\n Argument #1\n DType: tuple\n Value: [TensorSpec(shape=(None, None), dtype=tf.int32, name=u'input_ids'), TensorSpec(shape=(None, None), dtype=tf.int32, name=u'attention_mask'), \b\b]\n"
],
[
"!echo \"deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal\" | tee /etc/apt/sources.list.d/tensorflow-serving.list && \\\ncurl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -\n!apt update",
"deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 2943 100 2943 0 0 6511 0 --:--:-- --:--:-- --:--:-- 6511\nOK\nGet:1 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB]\nHit:2 http://archive.ubuntu.com/ubuntu bionic InRelease\nGet:3 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB]\nGet:4 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B]\nGet:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]\nIgn:6 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease\nGet:7 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,688 kB]\nGet:8 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [864 kB]\nGet:9 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]\nIgn:10 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease\nGet:11 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release [697 B]\nGet:12 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release [564 B]\nGet:13 http://storage.googleapis.com/tensorflow-serving-apt stable InRelease [3,012 B]\nGet:14 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release.gpg [836 B]\nGet:15 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release.gpg [833 B]\nGet:16 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]\nGet:17 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [46.6 kB]\nGet:18 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1,365 kB]\nIgn:19 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages\nGet:20 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Packages [58.5 kB]\nGet:19 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages [407 kB]\nGet:21 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [15.8 kB]\nGet:22 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [222 kB]\nGet:23 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [1,781 kB]\nGet:24 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server-universal amd64 Packages [349 B]\nGet:25 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 Packages [341 B]\nGet:26 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [247 kB]\nGet:27 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2,129 kB]\nGet:28 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [46.3 kB]\nGet:29 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,198 kB]\nFetched 11.4 MB in 4s (2,982 kB/s)\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\n40 packages can be upgraded. Run 'apt list --upgradable' to see them.\n"
],
[
"!apt-get install tensorflow-model-server",
"Reading package lists... Done\nBuilding dependency tree \nReading state information... Done\nThe following NEW packages will be installed:\n tensorflow-model-server\n0 upgraded, 1 newly installed, 0 to remove and 40 not upgraded.\nNeed to get 210 MB of archives.\nAfter this operation, 0 B of additional disk space will be used.\nGet:1 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 tensorflow-model-server all 2.3.0 [210 MB]\nFetched 210 MB in 4s (49.5 MB/s)\nSelecting previously unselected package tensorflow-model-server.\n(Reading database ... 144786 files and directories currently installed.)\nPreparing to unpack .../tensorflow-model-server_2.3.0_all.deb ...\nUnpacking tensorflow-model-server (2.3.0) ...\nSetting up tensorflow-model-server (2.3.0) ...\n"
],
[
"%%bash --bg \nnohup tensorflow_model_server \\\n --rest_api_port=8501 \\\n --model_name=sentiment_analysis_distilbert_sst2 \\\n --model_base_path=\"/content/tfx_saved_model/distilbert-sst2-english\" >server.log 2>&1",
"Starting job # 0 in a separate thread.\n"
],
[
"!tail server.log",
"2020-11-13 15:25:00.137611: I tensorflow_serving/model_servers/server.cc:87] Building single TensorFlow model file config: model_name: sentiment_analysis_distilbert_sst2 model_base_path: /content/tfx_saved_model/distilbert-sst2-english\n2020-11-13 15:25:00.137942: I tensorflow_serving/model_servers/server_core.cc:464] Adding/updating models.\n2020-11-13 15:25:00.137982: I tensorflow_serving/model_servers/server_core.cc:575] (Re-)adding model: sentiment_analysis_distilbert_sst2\n2020-11-13 15:25:00.139597: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: sentiment_analysis_distilbert_sst2 version: 3}\n2020-11-13 15:25:00.139629: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: sentiment_analysis_distilbert_sst2 version: 3}\n2020-11-13 15:25:00.139644: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: sentiment_analysis_distilbert_sst2 version: 3}\n2020-11-13 15:25:00.139694: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /content/tfx_saved_model/distilbert-sst2-english/3\n"
],
[
"post_request_distilbert = \"http://localhost:8501/v1/models/sentiment_analysis_distilbert_sst2:predict\"",
"_____no_output_____"
],
[
"import json\nimport numpy\nimport numpy as np\nimport requests\n\ntokenizer = AutoTokenizer.from_pretrained('distilbert-base-multilingual-cased')\nmy_features = {\"input_ids\": tokenizer.encode(\"A man inspects the uniform of a figure in some East Asian country.\", add_special_tokens=True)}",
"_____no_output_____"
],
[
"my_features",
"_____no_output_____"
],
[
"import json\nimport numpy as np\nimport requests\n\nsentences = [\n \"this is absolutely great\",\n \"this is bad\"\n]\n\nmy_features = [{\"input_ids\": tokenizer.encode(sentence, add_special_tokens=True)} for sentence in sentences]\nprint(my_features)\n\ndata = json.dumps({\"signature_name\": \"serving_default\",\n \"instances\": my_features})\nheaders = {\"content-type\": \"application/json\"}\njson_response = requests.post(post_request_distilbert,\n data=data, headers=headers)\nprint(json_response)\npredictions = numpy.array(json.loads(json_response.text)[\"predictions\"])\nprint(predictions)\nfor prediction in predictions:\n np.argmax(prediction)",
"[{'input_ids': [101, 10531, 10124, 48573, 10454, 14772, 102]}, {'input_ids': [101, 10531, 10124, 15838, 102]}]\n<Response [400]>\n"
],
[
"import json\nimport numpy as np\nimport requests\n\nsentences = [\n \"this is absolutely great\"\n]\n\nmy_features = [{\"input_ids\": tokenizer.encode(sentence, add_special_tokens=True)} for sentence in sentences]\nprint(my_features)\n\nfeatures = [{'input_ids': [101, 2023, 2003, 7078, 2307, 102]}]\n\ndata = json.dumps({\"signature_name\": \"serving_default\",\n \"instances\": features})\nheaders = {\"content-type\": \"application/json\"}\njson_response = requests.post(post_request_distilbert,\n data=data, headers=headers)\nprint(json_response)\npredictions = numpy.array(json.loads(json_response.text)[\"predictions\"])\nprint(predictions)\nfor prediction in predictions:\n np.argmax(prediction)",
"[{'input_ids': [101, 10531, 10124, 48573, 10454, 14772, 102]}]\n<Response [400]>\n"
],
[
"sentences = [\n \"ME FALTA EL AIRE, PACIENTE CON DISNEA\"\n]\n\nexamples = ({'idx': tf.constant(i, dtype=tf.int64), 'label': tf.constant(0, dtype=tf.int64) ,\n 'sentence': tf.constant(sentence, dtype=tf.string)} for i, sentence in enumerate(sentences))\n\ndef gen():\n for i, sentence in enumerate(sentences):\n yield(\n {'idx': tf.constant(i, dtype=tf.int64), 'label': tf.constant(0, dtype=tf.int64) ,\n 'sentence': tf.constant(sentence, dtype=tf.string)}\n )\nds = tf.data.Dataset.from_generator(gen, \n ({\"sentence\": tf.string, \"idx\": tf.int64, \"label\": tf.int64})\n )\n\nfeature_ds = glue_convert_examples_to_features(ds, tokenizer, max_length=512, task='sst-2')\nfeature_dataset = feature_ds.batch(1)\n\ninstances = [{\"input_ids\": feature_batch[0][\"input_ids\"].numpy().tolist()[0], \n \"attention_mask\": feature_batch[0][\"attention_mask\"].numpy().tolist()[0]} for feature_batch in feature_dataset.take(-1)]\nprint(instances)\n\ndata = json.dumps({\"signature_name\": \"serving_default\",\n \"instances\": instances})\nheaders = {\"content-type\": \"application/json\"}\njson_response = requests.post(post_request_distilbert,\n data=data, headers=headers)\nprint(json_response)\npredictions = numpy.array(json.loads(json_response.text)[\"predictions\"])\nprint(predictions[0])",
"/usr/local/lib/python3.6/dist-packages/transformers/data/processors/glue.py:67: FutureWarning: This function will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py\n warnings.warn(DEPRECATION_WARNING.format(\"function\"), FutureWarning)\n/usr/local/lib/python3.6/dist-packages/transformers/data/processors/glue.py:331: FutureWarning: This processor will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py\n warnings.warn(DEPRECATION_WARNING.format(\"processor\"), FutureWarning)\n"
],
[
"predictions[0]",
"_____no_output_____"
],
[
"tf_prediction = tf.nn.softmax(predictions, axis=1).numpy()[0]\nprint(tf_prediction)",
"[4.15969869e-03 2.21892164e-03 4.60406577e-04 7.28365137e-05\n 4.56118877e-04 2.86093861e-04 2.67656424e-03 4.36102930e-05\n 5.86551100e-04 5.37240275e-05 4.11344443e-05 4.17352849e-05\n 2.44676019e-05 9.71473972e-06 6.54721439e-04 2.52373620e-03\n 5.20323951e-04 6.26384321e-05 7.87520076e-05 3.16035921e-04\n 5.39696471e-04 1.29101804e-04 1.35493151e-04 7.94333907e-04\n 5.84717237e-05 4.41009052e-04 8.22092460e-04 3.00811746e-04\n 1.21919707e-04 8.98901106e-05 1.34344788e-04 1.10941472e-05\n 1.25430959e-04 9.85952229e-05 1.29733540e-04 1.54577857e-04\n 1.82512190e-05 2.24221130e-05 1.02404006e-02 3.24600063e-03\n 3.30726288e-03 2.92325807e-03 1.03055886e-03 1.27513300e-04\n 1.21591947e-03 1.11750868e-03 1.79847358e-04 4.18303430e-03\n 1.99052904e-03 1.40243142e-02 6.07668871e-05 1.02649092e-03\n 5.10168739e-04 1.13077910e-03 5.28803470e-04 1.70858398e-04\n 1.40489408e-04 2.83539550e-03 1.13990389e-03 1.40208233e-03\n 2.65663494e-03 7.00376743e-05 1.06483175e-03 2.60529188e-04\n 2.66426421e-04 2.97530693e-04 4.78153237e-04 1.49090241e-03\n 3.15854328e-02 4.68576590e-03 7.60095580e-03 1.89934897e-02\n 1.14074858e-04 2.72447221e-02 9.19440560e-03 3.46861466e-03\n 7.59441626e-04 1.35949980e-02 1.87753234e-03 1.62343637e-01\n 5.29267223e-01 8.05712526e-04 3.24854233e-03 3.82293565e-03\n 1.13084407e-03 7.74915768e-02 9.92830563e-03 1.33930789e-02\n 1.43483732e-04 1.01420854e-04 1.15766950e-06 4.86102648e-04\n 7.71712296e-06 9.63595927e-05 8.04810917e-05]\n"
],
[
"import numpy as np\nle.inverse_transform([np.argmax(tf_prediction)])",
"_____no_output_____"
],
[
"array_invertido = np.argsort(tf_prediction)[::-1]\ndiagnosticos = le.inverse_transform(array_invertido)\nprobabilidad = np.sort (tf_prediction)[::-1]\nlist_of_tuples = list(zip(diagnosticos, probabilidad))[:10]\ndf = pd.DataFrame(list_of_tuples, columns = ['diagnostico', 'probabilidad']) \ndf.head(10).plot.barh(x='diagnostico')",
"_____no_output_____"
],
[
"import pandas as pd\npd.set_option('display.max_colwidth', -1)\ndf = pd.read_csv (path + 'input_distil_v1')\ndf.info()",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:2: FutureWarning: Passing a negative integer is deprecated in version 1.0 and will not be supported in future version. Instead, use None to not limit the column width.\n \n"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 99329 entries, 0 to 99328\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 resumen 99329 non-null object\n 1 diagnostico 99329 non-null object\ndtypes: object(2)\nmemory usage: 1.5+ MB\n"
],
[
"y = df.diagnostico.to_list()\nX = df.resumen.to_list()",
"_____no_output_____"
],
[
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit(y)\n\ny_encoder = le.transform(y)\n#len(y_encoder)\n#len(le.classes_)\ny = le.transform(y)\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf8822c5357694538ded8094f7afdabccc9f92b | 648,074 | ipynb | Jupyter Notebook | MTGAN.ipynb | iolucas/mtgan | 8039c59b5fd3a845d94442600c577e986589cfca | [
"MIT"
] | null | null | null | MTGAN.ipynb | iolucas/mtgan | 8039c59b5fd3a845d94442600c577e986589cfca | [
"MIT"
] | null | null | null | MTGAN.ipynb | iolucas/mtgan | 8039c59b5fd3a845d94442600c577e986589cfca | [
"MIT"
] | null | null | null | 948.863836 | 283,640 | 0.938481 | [
[
[
"# Deep Convolutional GANs\n\nIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the [original paper here](https://arxiv.org/pdf/1511.06434.pdf).\n\nYou'll be training DCGAN on the [Street View House Numbers](http://ufldl.stanford.edu/housenumbers/) (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. \n\n\n\nSo, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what [you saw previously](https://github.com/udacity/deep-learning/tree/master/gan_mnist) are in the generator and discriminator, otherwise the rest of the implementation is the same.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport pickle as pkl\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n#from scipy.io import loadmat\nimport tensorflow as tf",
"_____no_output_____"
],
[
"!mkdir data",
"_____no_output_____"
]
],
[
[
"## Getting the data\n\nHere you can download the SVHN dataset. Run the cell above and it'll download to your machine.",
"_____no_output_____"
],
[
"These SVHN files are `.mat` files typically used with Matlab. However, we can load them in with `scipy.io.loadmat` which we imported above.",
"_____no_output_____"
]
],
[
[
"#trainset = loadmat(data_dir + 'train_32x32.mat')\n#testset = loadmat(data_dir + 'test_32x32.mat')",
"_____no_output_____"
],
[
"dataset = np.load(\"imgs.npy\")\ntrainset = dataset[:int(len(dataset)*0.8)]\ntestset = dataset[int(len(dataset)*0.8):]",
"_____no_output_____"
]
],
[
[
"Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.",
"_____no_output_____"
]
],
[
[
"idx = np.random.randint(0, trainset['X'].shape[3], size=36)\nfig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)\nfor ii, ax in zip(idx, axes.flatten()):\n ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\nplt.subplots_adjust(wspace=0, hspace=0)",
"_____no_output_____"
]
],
[
[
"Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.",
"_____no_output_____"
]
],
[
[
"def scale(x, feature_range=(-1, 1)):\n # scale to (0, 1)\n x = ((x - x.min())/(255 - x.min()))\n \n # scale to feature_range\n min, max = feature_range\n x = x * (max - min) + min\n return x",
"_____no_output_____"
],
[
"class Dataset:\n def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):\n split_idx = int(len(test['y'])*(1 - val_frac))\n self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]\n self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]\n self.train_x, self.train_y = train['X'], train['y']\n \n self.train_x = np.rollaxis(self.train_x, 3)\n self.valid_x = np.rollaxis(self.valid_x, 3)\n self.test_x = np.rollaxis(self.test_x, 3)\n \n if scale_func is None:\n self.scaler = scale\n else:\n self.scaler = scale_func\n self.shuffle = shuffle\n \n def batches(self, batch_size):\n if self.shuffle:\n idx = np.arange(len(dataset.train_x))\n np.random.shuffle(idx)\n self.train_x = self.train_x[idx]\n self.train_y = self.train_y[idx]\n \n n_batches = len(self.train_y)//batch_size\n for ii in range(0, len(self.train_y), batch_size):\n x = self.train_x[ii:ii+batch_size]\n y = self.train_y[ii:ii+batch_size]\n \n yield self.scaler(x), y",
"_____no_output_____"
]
],
[
[
"## Network Inputs\n\nHere, just creating some placeholders like normal.",
"_____no_output_____"
]
],
[
[
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z",
"_____no_output_____"
]
],
[
[
"## Generator\n\nHere you'll build the generator network. The input will be our noise vector `z` as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.\n\nWhat's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.\n\nYou keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper:\n\n\n\nNote that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.",
"_____no_output_____"
]
],
[
[
"def generator(z, output_dim, reuse=False, alpha=0.2, training=True):\n with tf.variable_scope('generator', reuse=reuse):\n # First fully connected layer\n x1 = tf.layers.dense(z, 4*4*512)\n # Reshape it to start the convolutional stack\n x1 = tf.reshape(x1, (-1, 4, 4, 512))\n x1 = tf.layers.batch_normalization(x1, training=training)\n x1 = tf.maximum(alpha * x1, x1)\n # 4x4x512 now\n \n x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')\n x2 = tf.layers.batch_normalization(x2, training=training)\n x2 = tf.maximum(alpha * x2, x2)\n # 8x8x256 now\n \n x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')\n x3 = tf.layers.batch_normalization(x3, training=training)\n x3 = tf.maximum(alpha * x3, x3)\n # 16x16x128 now\n \n # Output layer\n logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')\n # 32x32x3 now\n \n out = tf.tanh(logits)\n \n return out",
"_____no_output_____"
]
],
[
[
"## Discriminator\n\nHere you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.\n\nYou'll also want to use batch normalization with `tf.layers.batch_normalization` on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. \n\nNote: in this project, your batch normalization layers will always use batch statistics. (That is, always set `training` to `True`.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the `training` parameter appropriately.",
"_____no_output_____"
]
],
[
[
"def discriminator(x, reuse=False, alpha=0.2):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Input layer is 32x32x3\n x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')\n relu1 = tf.maximum(alpha * x1, x1)\n # 16x16x64\n \n x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')\n bn2 = tf.layers.batch_normalization(x2, training=True)\n relu2 = tf.maximum(alpha * bn2, bn2)\n # 8x8x128\n \n x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')\n bn3 = tf.layers.batch_normalization(x3, training=True)\n relu3 = tf.maximum(alpha * bn3, bn3)\n # 4x4x256\n\n # Flatten it\n flat = tf.reshape(relu3, (-1, 4*4*256))\n logits = tf.layers.dense(flat, 1)\n out = tf.sigmoid(logits)\n \n return out, logits",
"_____no_output_____"
]
],
[
[
"## Model Loss\n\nCalculating the loss like before, nothing new here.",
"_____no_output_____"
]
],
[
[
"def model_loss(input_real, input_z, output_dim, alpha=0.2):\n \"\"\"\n Get the loss for the discriminator and generator\n :param input_real: Images from the real dataset\n :param input_z: Z input\n :param out_channel_dim: The number of channels in the output image\n :return: A tuple of (discriminator loss, generator loss)\n \"\"\"\n g_model = generator(input_z, output_dim, alpha=alpha)\n d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)\n d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)\n\n d_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))\n d_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))\n g_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))\n\n d_loss = d_loss_real + d_loss_fake\n\n return d_loss, g_loss",
"_____no_output_____"
]
],
[
[
"## Optimizers\n\nNot much new here, but notice how the train operations are wrapped in a `with tf.control_dependencies` block so the batch normalization layers can update their population statistics.",
"_____no_output_____"
]
],
[
[
"def model_opt(d_loss, g_loss, learning_rate, beta1):\n \"\"\"\n Get optimization operations\n :param d_loss: Discriminator loss Tensor\n :param g_loss: Generator loss Tensor\n :param learning_rate: Learning Rate Placeholder\n :param beta1: The exponential decay rate for the 1st moment in the optimizer\n :return: A tuple of (discriminator training operation, generator training operation)\n \"\"\"\n # Get weights and bias to update\n t_vars = tf.trainable_variables()\n d_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n g_vars = [var for var in t_vars if var.name.startswith('generator')]\n\n # Optimize\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)\n g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)\n\n return d_train_opt, g_train_opt",
"_____no_output_____"
]
],
[
[
"## Building the model\n\nHere we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.",
"_____no_output_____"
]
],
[
[
"class GAN:\n def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):\n tf.reset_default_graph()\n \n self.input_real, self.input_z = model_inputs(real_size, z_size)\n \n self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,\n real_size[2], alpha=alpha)\n \n self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)",
"_____no_output_____"
]
],
[
[
"Here is a function for displaying generated images.",
"_____no_output_____"
]
],
[
[
"def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):\n fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, \n sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.axis('off')\n img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)\n ax.set_adjustable('box-forced')\n im = ax.imshow(img, aspect='equal')\n \n plt.subplots_adjust(wspace=0, hspace=0)\n return fig, axes",
"_____no_output_____"
]
],
[
[
"And another function we can use to train our network. Notice when we call `generator` to create the samples to display, we set `training` to `False`. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the `net.input_real` placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the `tf.control_dependencies` block we created in `model_opt`. ",
"_____no_output_____"
]
],
[
[
"def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):\n saver = tf.train.Saver()\n sample_z = np.random.uniform(-1, 1, size=(72, z_size))\n\n samples, losses = [], []\n steps = 0\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in dataset.batches(batch_size):\n steps += 1\n\n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n\n # Run optimizers\n _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})\n _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})\n\n if steps % print_every == 0:\n # At the end of each epoch, get the losses and print them out\n train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})\n train_loss_g = net.g_loss.eval({net.input_z: batch_z})\n\n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g))\n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n\n if steps % show_every == 0:\n gen_samples = sess.run(\n generator(net.input_z, 3, reuse=True, training=False),\n feed_dict={net.input_z: sample_z})\n samples.append(gen_samples)\n _ = view_samples(-1, samples, 6, 12, figsize=figsize)\n plt.show()\n\n saver.save(sess, './checkpoints/generator.ckpt')\n\n with open('samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n return losses, samples",
"_____no_output_____"
]
],
[
[
"## Hyperparameters\n\nGANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for them.",
"_____no_output_____"
]
],
[
[
"real_size = (32,32,3)\nz_size = 100\nlearning_rate = 0.0002\nbatch_size = 128\nepochs = 25\nalpha = 0.2\nbeta1 = 0.5\n\n# Create the network\nnet = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)",
"_____no_output_____"
],
[
"dataset = Dataset(trainset, testset)\n\nlosses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()",
"_____no_output_____"
],
[
"_ = view_samples(-1, samples, 6, 12, figsize=(10,5))",
"_____no_output_____"
],
[
"_ = view_samples(-1, samples, 6, 12, figsize=(10,5))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf890b3cb3ae685ad220984ecdccc3fba1ea274 | 22,206 | ipynb | Jupyter Notebook | pymks/fmks/tests/non_periodic.ipynb | wd15/pymks | 8ea501da3496885d7d365f124e3807d3ca87c600 | [
"MIT"
] | 2 | 2015-03-26T14:23:48.000Z | 2018-10-18T20:46:45.000Z | pymks/fmks/tests/non_periodic.ipynb | wd15/pymks | 8ea501da3496885d7d365f124e3807d3ca87c600 | [
"MIT"
] | null | null | null | pymks/fmks/tests/non_periodic.ipynb | wd15/pymks | 8ea501da3496885d7d365f124e3807d3ca87c600 | [
"MIT"
] | 1 | 2015-06-05T17:35:54.000Z | 2015-06-05T17:35:54.000Z | 56.647959 | 5,296 | 0.741421 | [
[
[
"# Implement Masking and Test Issue 517\n\nTesting for weighted masks and fix [#517](https://github.com/materialsinnovation/pymks/issues/517).",
"_____no_output_____"
]
],
[
[
"import dask.array as da\nimport numpy as np\nfrom pymks.fmks import correlations\nfrom pymks import plot_microstructures",
"_____no_output_____"
],
[
"A = da.from_array(np.array([\n [\n [1, 0, 0],\n [0, 1, 1],\n [1, 1, 0]\n ],\n [\n [0, 0, 1],\n [1, 0, 0],\n [0, 0, 1]\n ]\n]))\nmask = np.ones((2,3,3))\nmask[:,2,1:] = 0\nmask = da.from_array(mask)",
"_____no_output_____"
],
[
"plot_microstructures(A[0], A[1],\n titles=['Structure[0]', 'Structure[1]'],\n cmap='gray', figsize_weight=2.5)\nplot_microstructures(mask[0], mask[1],\n titles=['Mask[0]', 'Mask[1]'],\n cmap='viridis', figsize_weight=2.5)\n",
"_____no_output_____"
]
],
[
[
"## Check that periodic still works\n\nThe normalization occurs in the two_point_stats function and the auto-correlation/cross-correlation occur in the cross_correlation function. Checking that the normalization is properly calculated.\n\nFirst is the auto-correlation. Second is the cross-correlation.",
"_____no_output_____"
]
],
[
[
"correct = (correlations.cross_correlation(A, A).compute() / 9).round(3).astype(np.float64)\ntested = correlations.two_point_stats(A, A).compute().round(3).astype(np.float64)\nassert (correct == tested).all()",
"_____no_output_____"
],
[
"correct = (correlations.cross_correlation(A, 1-A).compute() / 9).round(3).astype(np.float64)\ntested = correlations.two_point_stats(A, 1-A).compute().round(3).astype(np.float64)\nassert (correct == tested).all()",
"_____no_output_____"
]
],
[
[
"## Check that masked periodic works\n\nTwo point statistics are part correlation and part normalization. The correlation sums up the number of possible 2-point states. In masked periodic, we assume that vectors going across the boundary of the structure come back on the other side. However, a vector landing in the masked area is discarded (ie not included in the correlation sum).\n\nBelow, are the hand computed correlation and normalization. The correct 2point stats are the correlation divided by the normalization. First, is the auto-correlation and second is the cross-correlation.",
"_____no_output_____"
]
],
[
[
"correct_periodic_mask_auto = np.array([\n [\n [2,1,2],\n [1,4,1],\n [2,1,2]\n ],\n [\n [1,0,0],\n [0,2,0],\n [0,0,1]\n ]\n])\n\ncorrect_periodic_mask_cross = np.array([\n [\n [1,3,1],\n [2,0,2],\n [1,1,1]\n ],\n [\n [0,1,2],\n [2,0,2],\n [1,2,0]\n ]\n])\n\nnorm_periodic_mask = np.array([\n [5,5,5],\n [6,7,6],\n [5,5,5]\n])\n\n# Auto-Correlation\ncorrect = (correct_periodic_mask_auto / norm_periodic_mask).round(3).astype(np.float64)\ntested = correlations.two_point_stats(A, A, mask=mask, periodic_boundary=True).compute().round(3).astype(np.float64)\n\nassert (correct == tested).all()\n\n# Cross-Correlation\ncorrect = (correct_periodic_mask_cross / norm_periodic_mask).round(3).astype(np.float64)\ntested = correlations.two_point_stats(A, 1-A, mask=mask, periodic_boundary=True).compute().round(3).astype(np.float64)\n\nassert (correct == tested).all()",
"_____no_output_____"
]
],
[
[
"## Test that non-periodic works\n\nTwo point statistics are part correlation and part normalization. The correlation sums up the number of possible 2-point states. In non-periodic, we assume that a vector used to count up 2 point states can only connect two states in the structure. A vector going outside of the bounds of the structure is not counted.\n\nBelow, are the hand computed correlation and normalization. The correct 2point stats are the correlation divided by the normalization. First, is the auto-correlation and second is the cross-correlation.",
"_____no_output_____"
]
],
[
[
"correct_nonperiodic_auto = np.array([\n [\n [1,1,2],\n [2,5,2],\n [2,1,1]\n ],\n [\n [0,0,0],\n [0,3,0],\n [0,0,0]\n ]\n])\n\ncorrect_nonperiodic_cross = np.array([\n [\n [2,3,1],\n [1,0,2],\n [0,2,1]\n ],\n [\n [1,2,1],\n [2,0,1],\n [1,2,1]\n ]\n])\n\nnorm_nonperiodic = np.array([\n [4,6,4],\n [6,9,6],\n [4,6,4]\n])\n\n# Auto-Correlation\ncorrect = (correct_nonperiodic_auto / norm_nonperiodic).round(3).astype(np.float64)\ntested = correlations.two_point_stats(A, A, periodic_boundary=False).compute().round(3).astype(np.float64)\n\nassert (correct == tested).all()\n\n# Cross-Correlation\ncorrect = (correct_nonperiodic_cross / norm_nonperiodic).round(3).astype(np.float64)\ntested = correlations.two_point_stats(A, 1-A, periodic_boundary=False).compute().round(3).astype(np.float64)\n\nassert (correct == tested).all()",
"_____no_output_____"
]
],
[
[
"## Check that non-periodic masking works\n\nIn non-periodic masking, vectors that go across the boundary or land in a mask are not included in the sum.",
"_____no_output_____"
]
],
[
[
"correct_nonperiodic_mask_auto = np.array([\n [\n [1,0,1],\n [1,4,1],\n [1,0,1]\n ],\n [\n [0,0,0],\n [0,2,0],\n [0,0,0]\n ]\n])\n\ncorrect_nonperiodic_mask_cross = np.array([\n [\n [1,3,1],\n [1,0,1],\n [0,1,0]\n ],\n [\n [0,1,1],\n [1,0,1],\n [1,2,0]\n ]\n])\n\nnorm_nonperiodic_mask = np.array([\n [2,4,3],\n [4,7,4],\n [3,4,2]\n])\n\n# Auto-Correlation\ncorrect = (correct_nonperiodic_mask_auto / norm_nonperiodic_mask).round(3).astype(np.float64)\ntested = correlations.two_point_stats(A, A, mask=mask, periodic_boundary=False).compute().round(3).astype(np.float64)\nassert (correct == tested).all()\n\n# Cross-Correlation\ncorrect = (correct_nonperiodic_mask_cross / norm_nonperiodic_mask).round(3).astype(np.float64)\ntested = correlations.two_point_stats(A, 1-A, mask=mask, periodic_boundary=False).compute().round(3).astype(np.float64)\nassert (correct == tested).all()",
"_____no_output_____"
]
],
[
[
"## Check that different sized dask arrays are valid masks.\n\nWe want to be able to specify the same mask for each sample. We also want to be able to specify a different mask for each sample. This validates that both are possible.",
"_____no_output_____"
]
],
[
[
"A = da.random.random([1000,3,3])\n\nmask_same4all = da.random.randint(0,2,[3,3])\nmask_same4some = da.random.randint(0,2,[100,3,3])\nmask_diff4all = da.random.randint(0,2,[1000,3,3])\n\ncorrelations.two_point_stats(A, A, mask=mask_same4all)\n# The following check fails. Therefore, the current implementation\n# only works for one mask for all or different mask for all, which\n# is feature rich enough for me.\n# correlations.two_point_stats(A, A, mask=mask_same4some)\ncorrelations.two_point_stats(A, A, mask=mask_diff4all);",
"_____no_output_____"
]
],
[
[
"## Some check that boolean and integers are valid masks\n\nA mask could be true and false specifying where there is a microstructure. However, it could also be any value in the range $[0,1]$ which specifies the probability a value is correctly assigned. The mask right now only implements confidence in a single phase, although idealy it should represent the confidence in all phases. However, for the use cases where there are 2 phases, a mask with a probability for one phase also completely describes the confidence in the other phase. Therefore, this implementation is complete for 2 phases.",
"_____no_output_____"
]
],
[
[
"mask_int = da.random.randint(0,2,[1000,3,3])\nmask_bool = mask_int.copy().astype(bool)\n\nprint(mask_int.dtype, mask_bool.dtype)\n\ncorrelations.two_point_stats(A, A, mask=mask_int)\ncorrelations.two_point_stats(A, A, mask=mask_bool);",
"int64 bool\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf8948994df1bd68d4bf8c35b03eb20003c4c6c | 14,441 | ipynb | Jupyter Notebook | ChecklistForPrecisionDifferentialPhotometry.ipynb | zkbt/rockymountaintransits | c9dbe9624281319ce77440e5fb19a45485136525 | [
"MIT"
] | null | null | null | ChecklistForPrecisionDifferentialPhotometry.ipynb | zkbt/rockymountaintransits | c9dbe9624281319ce77440e5fb19a45485136525 | [
"MIT"
] | null | null | null | ChecklistForPrecisionDifferentialPhotometry.ipynb | zkbt/rockymountaintransits | c9dbe9624281319ce77440e5fb19a45485136525 | [
"MIT"
] | null | null | null | 79.78453 | 1,015 | 0.734714 | [
[
[
"# Checklist for Precision Differential Photometry\nHere is a list of important questions to consider when planning and executing transiting exoplanet observations. These are in a jupyter notebook, so you can sketch out the calculations you need to answer them right below! (Answering some of these may take a lot of code, so you may want to organize your responses into separate modules and import function from them into this notebook.)",
"_____no_output_____"
],
[
"## What is my photon noise limit?\nIf the only noise source were Poisson noise from the fact that we're counting a finite number of photons from a star, what would the uncertainties in my measurement be? The key ingredient needed to answer this question is \"how many photons do I detect from my star, in a given amount of time?\", which can be determined either (a) from estimates of the brightness of your star, your telescope collecting area, and the throughput of your camera, or (b) by scaling from the number of photons you detected from some star of a known magnitude in an exposure of a given exposure time. The answer should look something like \"based on photon noise from the star alone, the best precision to which we can measure relative brightness changes is 1%, 0.1%, 0.01%\" (or some such). Of course, this answer depends how long you spend collecting photons for you measurement; this might be the length of a single exposure, or it might be the total time you were collecting photons if you bin together multiple exposures.\n\n*(Hint: there ought to be a heck of a lot of $\\sqrt{N}$ involved in this calculation!)*",
"_____no_output_____"
],
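[
"As a starting point, here is a minimal back-of-the-envelope sketch, assuming a V=12 target, a 0.5 m telescope, 30% total system throughput, and a 60 s exposure; the V=0 photon zero point is an assumed round number, so swap in measured counts from your own camera wherever you can.",
"_____no_output_____"
],
[
"import numpy as np\n\n# assumed zero point: roughly 1e10 photons/s/m^2 from a V=0 star over a broad optical band\nphotons_per_s_m2_V0 = 1e10\nV, diameter_m, throughput, t_exp = 12.0, 0.5, 0.3, 60.0\n\ncollecting_area = np.pi * (diameter_m / 2)**2\nphoton_rate = photons_per_s_m2_V0 * 10**(-0.4 * V) * collecting_area * throughput\nN = photon_rate * t_exp  # photons detected from the star in one exposure\n\n# Poisson counting noise: the fractional precision floor is 1/sqrt(N)\nprint(f'N = {N:.2e} photons -> photon-noise limit = {1 / np.sqrt(N):.1e} per exposure')",
"_____no_output_____"
],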
[
"## What other noise sources do I expect?\nThere are terms besides photon noise from the source that contribute uncertainty to a measurement. For ground-based differential photometry, these include at least the following ingredients. Based on the size of the photometric aperture you're using in your analysis, you can quantify all of these, and how they compare to the photon noise from the star itself.\n\n+ **Sky noise.** There are also photons coming from the diffuse sky that will be entering your photometric aperture. Although you subtract an estimate of the sky flux from your photometry, these photons still contribute noise. The sky noise will depend on whether or not the Moon is up, your local light pollution conditions, and instantaneous weather conditions (aka cloudiness).\n+ **Comparison star photon noise.** Because we need comparison stars to correct for transmission variations in Earth's atmosphere, in some cases, you might be limited by the number of photons you detect from comparison stars (see below).\n+ **Readout noise.** The act of reading out the pixel values of a CCD introduces some noise, typically a equivalent to a few photoelectrons per pixel. \n+ **Dark current.** If your detector has substantial dark current accumulate during an exposure, the Poisson noise from these thermally generated electrons may contribute to the total noise.\n+ **Scintillation.** Also known as twinkling, scintillation noise is caused by more or less light from your star to be lost from your telescope's line of sight. It matters more for bright stars (where the intrinsic photon noise is lower.) In most cases, photon noise from your star will dominate over all other noise sources, except for very bright stars, where scintillation will dominate!",
"_____no_output_____"
],
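[
"To put a rough number on that last ingredient, here is a sketch using one commonly quoted approximation for scintillation (Young's scaling with aperture, airmass, altitude, and exposure time); the telescope diameter, airmass, site altitude, and exposure time below are all assumed example values.",
"_____no_output_____"
],
[
"import numpy as np\n\n# assumed example values: 50 cm telescope, airmass 1.2, 1600 m site, 60 s exposures\nD_cm, airmass, altitude_m, t_exp = 50.0, 1.2, 1600.0, 60.0\n\n# approximate fractional scintillation noise (Young-style scaling)\nsigma_scint = 0.09 * D_cm**(-2/3) * airmass**1.75 * np.exp(-altitude_m / 8000.0) / np.sqrt(2 * t_exp)\nprint(f'approximate scintillation noise: {sigma_scint:.1e} (fractional)')",
"_____no_output_____"
],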
[
"## What is my total expected per-point uncertainty?\nCombining photon noise with these ingredients leads to an estimate of the uncertainty to which you can measure fractional changes in the brightness of your star. You can write down a noise model that puts all these ingredients together, typically by adding the individual noise contributions together in quadrature.",
"_____no_output_____"
],
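[
"Here is a minimal sketch of such a noise model, adding the terms in quadrature; every number below is an assumed placeholder, so replace them with the counts, aperture size, and detector properties of your own setup.",
"_____no_output_____"
],
[
"import numpy as np\n\n# assumed placeholder values for a single exposure\nN_star = 5.6e5         # photons detected from the target inside the aperture\nsky_per_pixel = 200.0  # sky photons per pixel\nn_pixels = 150.0       # pixels inside the photometric aperture\nread_noise = 10.0      # readout noise per pixel, in electrons\ndark_per_pixel = 5.0   # dark-current electrons accumulated per pixel\nsigma_scint = 7e-4     # fractional scintillation estimate (e.g. from the cell above)\n\n# photon-counting terms add as variances; scintillation folds in as a fractional term\nvariance_counts = N_star + n_pixels * (sky_per_pixel + dark_per_pixel + read_noise**2)\nsigma_total = np.sqrt(variance_counts / N_star**2 + sigma_scint**2)\nprint(f'predicted fractional uncertainty per exposure: {sigma_total:.1e}')",
"_____no_output_____"
],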
[
"## What photometric precision do I actually achieve?\nIf you observe a light curve, you have an actual *measurement* of the photometric precision you achieve in each exposure. After subtracting the best-fit transit model from your light curve, calculate the standard deviation of the residuals. How does the 1$\\sigma$ standard deviation of your residuals compare to the 1$\\sigma$ uncertainty predicted in your above calculations? If your achieved standard deviation is much greater than the prediction, there might be big systematic problems that you need to characterize. If your achieved standard deviation is close to the prediction, you might be doing about as well as you can with your telescope and camera. If your achieved standard deviation is much lower than the prediction, then something has gone horribly wrong!\n\nYou should also compare the photometric precision on multiple timescales:\n+ Is the standard deviation of each exposure consistent with your expectation for the noise in a single exposure?\n+ If you bin many datapoints together, the precision of the binned light curve should be $\\sqrt{N}$ better than the unbinned light curve. Is it? If not, you may have significant time-correlated noise in your dataset (\"red noise\"), which could be your ultimate limit to precise measurements of the depth of a transit.",
"_____no_output_____"
],
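[
"A minimal sketch of that comparison, assuming you already have an array of residuals (data minus the best-fit transit model); the simulated Gaussian residuals below are just a stand-in so the cell runs on its own.",
"_____no_output_____"
],
[
"import numpy as np\n\n# stand-in residuals; replace with your own (data - model) array\nresiduals = np.random.normal(0, 0.002, 600)\nper_point_rms = np.std(residuals)\n\n# bin the residuals and compare to the white-noise (sqrt(N)) expectation\nnbin = 10\ntrimmed = residuals[:len(residuals) // nbin * nbin]\nbinned = trimmed.reshape(-1, nbin).mean(axis=1)\n\nprint(f'unbinned RMS = {per_point_rms:.5f}')\nprint(f'binned RMS = {np.std(binned):.5f} vs white-noise expectation {per_point_rms / np.sqrt(nbin):.5f}')",
"_____no_output_____"
],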
[
"## What is the size of the signal I am looking for?\nIt might be the detection of the transit of a planet, it might be a precise measurement of the depth of the transit at a specific wavelength, or it might be something else. What is the size of that signal? It should be a number in units that you can compare directly to your estimate of the fractional precision to which you can measure the brightness of the star over a given time. If your signal is larger than your predicted uncertainty, you're on the right track, but you're still not necessarily done... ",
"_____no_output_____"
],
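[
"For a transit detection, the signal is roughly the transit depth, (R_planet / R_star)^2; the sketch below assumes an Earth-sized planet around a 0.2 R_sun M dwarf purely to illustrate the scaling, so substitute the radii for your own target.",
"_____no_output_____"
],
[
"# assumed example: Earth-sized planet (radius 6371 km) around a 0.2 R_sun star (R_sun = 695700 km)\nr_planet_km = 6371.0\nr_star_km = 0.2 * 695700.0\n\ndepth = (r_planet_km / r_star_km)**2\nprint(f'expected transit depth ~ {depth:.2e} ({depth * 100:.2f}%)')\nprint('compare this to your predicted and achieved per-point (and binned) precision')",
"_____no_output_____"
],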
[
"## What are other systematic noise sources to worry about?\nWe often use telescopes, cameras, or systems that are not explicitly designed for high-precision transiting exoplanet photometry. All the noise sources described above would exist even for *perfect* telescopes with *perfect* cameras that were *perfectly* calibrated. In the real world, we often encounter imperfections; in some cases these imperfections can lead to effective uncertainties in your measurements that can be orders of magnitude larger than your predicted uncertainties. Common ones of these include:\n+ **Imperfect flat-fielding:** If you don't know the perfectly how the sensitivity of your camera and/or CCD changes from pixel-to-pixel across the field, then even tiny motions from one pixel to the next pixel can lead to major systematic noise. Therefore, you want to determine as good of a flat-field as you possibly can, but also be aware that it will never be perfect! \n+ **Telescope jitter or drifts:** If your stars drift across the field, they may encounter sensitivity variations that are not accounted for by the flat-field. Even a *really good* flat-field is still uncertain to about 1%, so if you're trying to achieve a relative precision of 0.01% in some cases, that can be a serious problem! Therefore, you want to prevent your stars from moving by more than a pixel during the entire transit observation. \n+ **Big focus or seeing *changes*:** If the concentration of each star's light on the detector varies significantly over the course of the night, you may be capturing a larger or smaller fraction of that light within your photometric aperture. Additionally, you may be casting more or less light onto pixels whose sensitivity you don't know perfectly. Therefore, its best to try to keep size of your point-spread-focus fairly constant over your entire observations. One common way to do this is to defocus the telescope significantly, which has multiple effects: the size of your PSF will no longer bounce around with the seeing, you spread the light from each star across more pixels to average down some uncertainties in the flat-field, and you can take longer exposures without saturating (see below).\n+ **Non-linearity that is either uncorrected or poorly corrected:** All detectors are non-linear, to some extent. In a linear detector, if twice as many photons hit a pixel, that pixel will record twice as many photons. In a non-linear detector, that's not necessarily the case. As you're using variations in the brightness of your comparison stars to determine the variations in atmospheric transmission affecting your target star, deviations from this linear response (particularly when comparing stars of different brightness) can really mess you up. Therefore, you want to know the limits of linearity of your detector and avoid exposing any stars you want to use to levels where your detector is significantly nonlinear.\n+ **Anything else non-repeatable between exposures:** The traditional exoplanet photometry lore says that the best transit observation will be one where you set up your observations before the transit and then *don't change anything* during the observation, to minimize any opportunity for calibrations to break down. 
For example, if you swap filters between exposures and the filters don't come back to exactly the same place every time (down to the sub-pixel level), then you a fleck of dust on that filter might affect one of your stars of interest more or less in some exposures than others; you'd be undoing all the hard work you did to get determine the flat-field in the first place. Therefore, if you want to be changing anything between exposures, be sure to validate you're not introducing unnecessary scatter into your measurements.\n\n",
"_____no_output_____"
],
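[
"A minimal sketch of the correlation check mentioned above, assuming you have measured per-exposure arrays of light-curve residuals, centroid positions, and FWHM; the random placeholders below just stand in for your own measurements.",
"_____no_output_____"
],
[
"import numpy as np\n\n# placeholders; swap in your measured residuals, centroids, and FWHM for each exposure\nresiduals = np.random.normal(0, 0.002, 600)\nx_centroid = np.random.normal(512.0, 0.3, 600)\ny_centroid = np.random.normal(512.0, 0.3, 600)\nfwhm = np.random.normal(4.0, 0.2, 600)\n\nfor name, series in [('x centroid', x_centroid), ('y centroid', y_centroid), ('FWHM', fwhm)]:\n    r = np.corrcoef(residuals, series)[0, 1]\n    print(f'correlation of residuals with {name}: {r:+.2f}')",
"_____no_output_____"
],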
[
"## How do I test/improve these systematic challenges?\n+ **Imperfect flat-fielding:** Take flats in many different ways, and see how they compare. Take dome flats and twilight flats; how are they different? Take flats using the same setup at different times; do you get the same answer in flats taken one day apart from each other? Take flats in one filter, switch to a different filter, and then switch back and take more flats in the original filter; are the flats measured before and after moving the filter wheel *exactly* the same? If you drift your stars across the detector at night on the sky, can you see differences in the photometry between different flats you use in the analysis? \n\n+ **Telescope jitter or drifts:** Measure the `x` and `y` centroids of your star on the detector, for every exposure within a transit observation. By how many pixels does your star move? Do you see trends in the photometry that correlate with these drifts? If your star moves by more than a pixel, consider trying to implement some kind of real-time guiding on your telescope, to actively nudge the telescope pointing to keep your stars on their same pixels throughout the night.\n\n+ **Big focus or seeing *changes*:** Measure the FWHM and ellipticity of your stars throughout a transit observation. Do they vary significantly? Do trends in these measurements correlate with trends in your photometry? Make a movie of the pixels around one of your stars over the whole observation: does the shape of the point-spread-function vary significantly? You can't do much about seeing variations, but if you defocus a bit more you may be able to minimize the effect of seeing fluctations, as your point-spread-function will instead be limited by the defocus.\n\n+ **Non-linearity that is either uncorrected or poorly corrected:** Take flat field exposures with different exposure times. Does the flux recorded in pixel increase by *exactly* 2X or 10X or 100X in exposures that are 2X or 10X or 100X longer? (Most lamps aren't stable enough on their own to do this test, so either need an independent check on the flux levels or a more stable light source: cracking the dome during a cloud-free day can be pretty stable.) Are there existing non-linearity estimates for your camera? \n\n+ **Anything else non-repeatable between exposures:** Take some light curves with keeping everything as steady as possible. Don't change filters during the night, don't change exposure times, don't change focus; control everything as precisely as you can. Compare the noise you achieve in your stable configuration to that you get when you try to change things between exposures. Are they the same?\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecf89dd3386acb0fd34b1e3775c3a6c5630045fb | 37,403 | ipynb | Jupyter Notebook | extras/solutions/05_pytorch_going_modular_exercise_solutions.ipynb | beneyal/pytorch-deep-learning | 87e2c49f9ce6d5392df0bc7e7c216b9f0cd355b3 | [
"MIT"
] | 88 | 2021-10-19T01:24:35.000Z | 2022-03-23T11:53:31.000Z | extras/solutions/05_pytorch_going_modular_exercise_solutions.ipynb | beneyal/pytorch-deep-learning | 87e2c49f9ce6d5392df0bc7e7c216b9f0cd355b3 | [
"MIT"
] | 36 | 2022-01-30T23:13:43.000Z | 2022-03-29T00:27:35.000Z | extras/solutions/05_pytorch_going_modular_exercise_solutions.ipynb | beneyal/pytorch-deep-learning | 87e2c49f9ce6d5392df0bc7e7c216b9f0cd355b3 | [
"MIT"
] | 19 | 2021-11-04T17:26:17.000Z | 2022-03-24T02:18:06.000Z | 41.651448 | 368 | 0.498623 | [
[
[
"<a href=\"https://colab.research.google.com/github/mrdbourke/pytorch-deep-learning/blob/main/extras/solutions/05_pytorch_going_modular_exercise_solutions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# 05. PyTorch Going Modular Exercise Solutions\n\nWelcome to the 05. PyTorch Going Modular exercise solutions notebook.\n\n> **Note:** There may be more than one solution to each of the exercises. This notebook only shows one possible example.\n\n## Resources\n\n1. These exercises/solutions are based on [section 05. PyTorch Going Modular](https://www.learnpytorch.io/05_pytorch_going_modular/) of the Learn PyTorch for Deep Learning course by Zero to Mastery.\n2. See a live [walkthrough of the solutions (errors and all) on YouTube](https://youtu.be/ijgFhMK3pp4).\n3. See [other solutions on the course GitHub](https://github.com/mrdbourke/pytorch-deep-learning/tree/main/extras/solutions).",
"_____no_output_____"
],
[
"## 1. Turn the code to get the data (from section 1. Get Data) into a Python script, such as `get_data.py`.\n\n* When you run the script using `python get_data.py` it should check if the data already exists and skip downloading if it does.\n* If the data download is successful, you should be able to access the `pizza_steak_sushi` images from the `data` directory.",
"_____no_output_____"
]
],
[
[
"%%writefile get_data.py\nimport os\nimport zipfile\n\nfrom pathlib import Path\n\nimport requests\n\n# Setup path to data folder\ndata_path = Path(\"data/\")\nimage_path = data_path / \"pizza_steak_sushi\"\n\n# If the image folder doesn't exist, download it and prepare it... \nif image_path.is_dir():\n print(f\"{image_path} directory exists.\")\nelse:\n print(f\"Did not find {image_path} directory, creating one...\")\n image_path.mkdir(parents=True, exist_ok=True)\n \n# Download pizza, steak, sushi data\nwith open(data_path / \"pizza_steak_sushi.zip\", \"wb\") as f:\n request = requests.get(\"https://github.com/mrdbourke/pytorch-deep-learning/raw/main/data/pizza_steak_sushi.zip\")\n print(\"Downloading pizza, steak, sushi data...\")\n f.write(request.content)\n\n# Unzip pizza, steak, sushi data\nwith zipfile.ZipFile(data_path / \"pizza_steak_sushi.zip\", \"r\") as zip_ref:\n print(\"Unzipping pizza, steak, sushi data...\") \n zip_ref.extractall(image_path)\n\n# Remove zip file\nos.remove(data_path / \"pizza_steak_sushi.zip\")",
"Writing get_data.py\n"
],
[
"!python get_data.py",
"Did not find data/pizza_steak_sushi directory, creating one...\nDownloading pizza, steak, sushi data...\nUnzipping pizza, steak, sushi data...\n"
]
],
[
[
"## 2. Use [Python's `argparse` module](https://docs.python.org/3/library/argparse.html) to be able to send the `train.py` custom hyperparameter values for training procedures.\n* Add an argument flag for using a different:\n * Training/testing directory\n * Learning rate\n * Batch size\n * Number of epochs to train for\n * Number of hidden units in the TinyVGG model\n * Keep the default values for each of the above arguments as what they already are (as in notebook 05).\n* For example, you should be able to run something similar to the following line to train a TinyVGG model with a learning rate of 0.003 and a batch size of 64 for 20 epochs: `python train.py --learning_rate 0.003 batch_size 64 num_epochs 20`.\n* **Note:** Since `train.py` leverages the other scripts we created in section 05, such as, `model_builder.py`, `utils.py` and `engine.py`, you'll have to make sure they're available to use too. You can find these in the [`going_modular` folder on the course GitHub](https://github.com/mrdbourke/pytorch-deep-learning/tree/main/going_modular/going_modular). ",
"_____no_output_____"
]
],
[
[
"%%writefile data_setup.py\n\"\"\"\nContains functionality for creating PyTorch DataLoaders for \nimage classification data.\n\"\"\"\nimport os\n\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\n\nNUM_WORKERS = os.cpu_count()\n\ndef create_dataloaders(\n train_dir: str, \n test_dir: str, \n transform: transforms.Compose, \n batch_size: int, \n num_workers: int=NUM_WORKERS\n):\n \"\"\"Creates training and testing DataLoaders.\n Takes in a training directory and testing directory path and turns\n them into PyTorch Datasets and then into PyTorch DataLoaders.\n Args:\n train_dir: Path to training directory.\n test_dir: Path to testing directory.\n transform: torchvision transforms to perform on training and testing data.\n batch_size: Number of samples per batch in each of the DataLoaders.\n num_workers: An integer for number of workers per DataLoader.\n Returns:\n A tuple of (train_dataloader, test_dataloader, class_names).\n Where class_names is a list of the target classes.\n Example usage:\n train_dataloader, test_dataloader, class_names = \\\n = create_dataloaders(train_dir=path/to/train_dir,\n test_dir=path/to/test_dir,\n transform=some_transform,\n batch_size=32,\n num_workers=4)\n \"\"\"\n # Use ImageFolder to create dataset(s)\n train_data = datasets.ImageFolder(train_dir, transform=transform)\n test_data = datasets.ImageFolder(test_dir, transform=transform)\n\n # Get class names\n class_names = train_data.classes\n\n # Turn images into data loaders\n train_dataloader = DataLoader(\n train_data,\n batch_size=batch_size,\n shuffle=True,\n num_workers=num_workers,\n pin_memory=True,\n )\n test_dataloader = DataLoader(\n test_data,\n batch_size=batch_size,\n shuffle=False,\n num_workers=num_workers,\n pin_memory=True,\n )\n\n return train_dataloader, test_dataloader, class_names",
"Writing data_setup.py\n"
],
[
"%%writefile engine.py\n\"\"\"\nContains functions for training and testing a PyTorch model.\n\"\"\"\nimport torch\n\nfrom tqdm.auto import tqdm\nfrom typing import Dict, List, Tuple\n\ndef train_step(model: torch.nn.Module, \n dataloader: torch.utils.data.DataLoader, \n loss_fn: torch.nn.Module, \n optimizer: torch.optim.Optimizer,\n device: torch.device) -> Tuple[float, float]:\n \"\"\"Trains a PyTorch model for a single epoch.\n Turns a target PyTorch model to training mode and then\n runs through all of the required training steps (forward\n pass, loss calculation, optimizer step).\n Args:\n model: A PyTorch model to be trained.\n dataloader: A DataLoader instance for the model to be trained on.\n loss_fn: A PyTorch loss function to minimize.\n optimizer: A PyTorch optimizer to help minimize the loss function.\n device: A target device to compute on (e.g. \"cuda\" or \"cpu\").\n Returns:\n A tuple of training loss and training accuracy metrics.\n In the form (train_loss, train_accuracy). For example:\n (0.1112, 0.8743)\n \"\"\"\n # Put model in train mode\n model.train()\n\n # Setup train loss and train accuracy values\n train_loss, train_acc = 0, 0\n\n # Loop through data loader data batches\n for batch, (X, y) in enumerate(dataloader):\n # Send data to target device\n X, y = X.to(device), y.to(device)\n\n # 1. Forward pass\n y_pred = model(X)\n\n # 2. Calculate and accumulate loss\n loss = loss_fn(y_pred, y)\n train_loss += loss.item() \n\n # 3. Optimizer zero grad\n optimizer.zero_grad()\n\n # 4. Loss backward\n loss.backward()\n\n # 5. Optimizer step\n optimizer.step()\n\n # Calculate and accumulate accuracy metric across all batches\n y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)\n train_acc += (y_pred_class == y).sum().item()/len(y_pred)\n\n # Adjust metrics to get average loss and accuracy per batch \n train_loss = train_loss / len(dataloader)\n train_acc = train_acc / len(dataloader)\n return train_loss, train_acc\n\ndef test_step(model: torch.nn.Module, \n dataloader: torch.utils.data.DataLoader, \n loss_fn: torch.nn.Module,\n device: torch.device) -> Tuple[float, float]:\n \"\"\"Tests a PyTorch model for a single epoch.\n Turns a target PyTorch model to \"eval\" mode and then performs\n a forward pass on a testing dataset.\n Args:\n model: A PyTorch model to be tested.\n dataloader: A DataLoader instance for the model to be tested on.\n loss_fn: A PyTorch loss function to calculate loss on the test data.\n device: A target device to compute on (e.g. \"cuda\" or \"cpu\").\n Returns:\n A tuple of testing loss and testing accuracy metrics.\n In the form (test_loss, test_accuracy). For example:\n (0.0223, 0.8985)\n \"\"\"\n # Put model in eval mode\n model.eval() \n\n # Setup test loss and test accuracy values\n test_loss, test_acc = 0, 0\n\n # Turn on inference context manager\n with torch.inference_mode():\n # Loop through DataLoader batches\n for batch, (X, y) in enumerate(dataloader):\n # Send data to target device\n X, y = X.to(device), y.to(device)\n\n # 1. Forward pass\n test_pred_logits = model(X)\n\n # 2. 
Calculate and accumulate loss\n loss = loss_fn(test_pred_logits, y)\n test_loss += loss.item()\n\n # Calculate and accumulate accuracy\n test_pred_labels = test_pred_logits.argmax(dim=1)\n test_acc += ((test_pred_labels == y).sum().item()/len(test_pred_labels))\n\n # Adjust metrics to get average loss and accuracy per batch \n test_loss = test_loss / len(dataloader)\n test_acc = test_acc / len(dataloader)\n return test_loss, test_acc\n\ndef train(model: torch.nn.Module, \n train_dataloader: torch.utils.data.DataLoader, \n test_dataloader: torch.utils.data.DataLoader, \n optimizer: torch.optim.Optimizer,\n loss_fn: torch.nn.Module,\n epochs: int,\n device: torch.device) -> Dict[str, List]:\n \"\"\"Trains and tests a PyTorch model.\n Passes a target PyTorch models through train_step() and test_step()\n functions for a number of epochs, training and testing the model\n in the same epoch loop.\n Calculates, prints and stores evaluation metrics throughout.\n Args:\n model: A PyTorch model to be trained and tested.\n train_dataloader: A DataLoader instance for the model to be trained on.\n test_dataloader: A DataLoader instance for the model to be tested on.\n optimizer: A PyTorch optimizer to help minimize the loss function.\n loss_fn: A PyTorch loss function to calculate loss on both datasets.\n epochs: An integer indicating how many epochs to train for.\n device: A target device to compute on (e.g. \"cuda\" or \"cpu\").\n Returns:\n A dictionary of training and testing loss as well as training and\n testing accuracy metrics. Each metric has a value in a list for \n each epoch.\n In the form: {train_loss: [...],\n train_acc: [...],\n test_loss: [...],\n test_acc: [...]} \n For example if training for epochs=2: \n {train_loss: [2.0616, 1.0537],\n train_acc: [0.3945, 0.3945],\n test_loss: [1.2641, 1.5706],\n test_acc: [0.3400, 0.2973]} \n \"\"\"\n # Create empty results dictionary\n results = {\"train_loss\": [],\n \"train_acc\": [],\n \"test_loss\": [],\n \"test_acc\": []\n }\n\n # Loop through training and testing steps for a number of epochs\n for epoch in tqdm(range(epochs)):\n train_loss, train_acc = train_step(model=model,\n dataloader=train_dataloader,\n loss_fn=loss_fn,\n optimizer=optimizer,\n device=device)\n test_loss, test_acc = test_step(model=model,\n dataloader=test_dataloader,\n loss_fn=loss_fn,\n device=device)\n\n # Print out what's happening\n print(\n f\"Epoch: {epoch+1} | \"\n f\"train_loss: {train_loss:.4f} | \"\n f\"train_acc: {train_acc:.4f} | \"\n f\"test_loss: {test_loss:.4f} | \"\n f\"test_acc: {test_acc:.4f}\"\n )\n\n # Update results dictionary\n results[\"train_loss\"].append(train_loss)\n results[\"train_acc\"].append(train_acc)\n results[\"test_loss\"].append(test_loss)\n results[\"test_acc\"].append(test_acc)\n\n # Return the filled results at the end of the epochs\n return results",
"Writing engine.py\n"
],
[
"%%writefile model_builder.py\n\"\"\"\nContains PyTorch model code to instantiate a TinyVGG model.\n\"\"\"\nimport torch\nfrom torch import nn \n\nclass TinyVGG(nn.Module):\n \"\"\"Creates the TinyVGG architecture.\n Replicates the TinyVGG architecture from the CNN explainer website in PyTorch.\n See the original architecture here: https://poloclub.github.io/cnn-explainer/\n Args:\n input_shape: An integer indicating number of input channels.\n hidden_units: An integer indicating number of hidden units between layers.\n output_shape: An integer indicating number of output units.\n \"\"\"\n def __init__(self, input_shape: int, hidden_units: int, output_shape: int) -> None:\n super().__init__()\n self.conv_block_1 = nn.Sequential(\n nn.Conv2d(in_channels=input_shape, \n out_channels=hidden_units, \n kernel_size=3, \n stride=1, \n padding=0), \n nn.ReLU(),\n nn.Conv2d(in_channels=hidden_units, \n out_channels=hidden_units,\n kernel_size=3,\n stride=1,\n padding=0),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2,\n stride=2)\n )\n self.conv_block_2 = nn.Sequential(\n nn.Conv2d(hidden_units, hidden_units, kernel_size=3, padding=0),\n nn.ReLU(),\n nn.Conv2d(hidden_units, hidden_units, kernel_size=3, padding=0),\n nn.ReLU(),\n nn.MaxPool2d(2)\n )\n self.classifier = nn.Sequential(\n nn.Flatten(),\n # Where did this in_features shape come from? \n # It's because each layer of our network compresses and changes the shape of our inputs data.\n nn.Linear(in_features=hidden_units*13*13,\n out_features=output_shape)\n )\n \n def forward(self, x: torch.Tensor):\n x = self.conv_block_1(x)\n x = self.conv_block_2(x)\n x = self.classifier(x)\n return x\n # return self.classifier(self.block_2(self.block_1(x))) # <- leverage the benefits of operator fusion",
"Writing model_builder.py\n"
],
[
"%%writefile utils.py\n\"\"\"\nContains various utility functions for PyTorch model training and saving.\n\"\"\"\nimport torch\nfrom pathlib import Path\n\ndef save_model(model: torch.nn.Module,\n target_dir: str,\n model_name: str):\n \"\"\"Saves a PyTorch model to a target directory.\n Args:\n model: A target PyTorch model to save.\n target_dir: A directory for saving the model to.\n model_name: A filename for the saved model. Should include\n either \".pth\" or \".pt\" as the file extension.\n Example usage:\n save_model(model=model_0,\n target_dir=\"models\",\n model_name=\"05_going_modular_tingvgg_model.pth\")\n \"\"\"\n # Create target directory\n target_dir_path = Path(target_dir)\n target_dir_path.mkdir(parents=True,\n exist_ok=True)\n\n # Create model save path\n assert model_name.endswith(\".pth\") or model_name.endswith(\".pt\"), \"model_name should end with '.pt' or '.pth'\"\n model_save_path = target_dir_path / model_name\n\n # Save the model state_dict()\n print(f\"[INFO] Saving model to: {model_save_path}\")\n torch.save(obj=model.state_dict(),\n f=model_save_path)",
"Writing utils.py\n"
],
[
"%%writefile train.py\n\"\"\"\nTrains a PyTorch image classification model using device-agnostic code.\n\"\"\"\n\nimport os\nimport argparse\n\nimport torch\n\nfrom torchvision import transforms\n\nimport data_setup, engine, model_builder, utils\n\n# Create a parser\nparser = argparse.ArgumentParser(description=\"Get some hyperparameters.\")\n\n# Get an arg for num_epochs\nparser.add_argument(\"--num_epochs\", \n default=10, \n type=int, \n help=\"the number of epochs to train for\")\n\n# Get an arg for batch_size\nparser.add_argument(\"--batch_size\",\n default=32,\n type=int,\n help=\"number of samples per batch\")\n\n# Get an arg for hidden_units\nparser.add_argument(\"--hidden_units\",\n default=10,\n type=int,\n help=\"number of hidden units in hidden layers\")\n\n# Get an arg for learning_rate\nparser.add_argument(\"--learning_rate\",\n default=0.001,\n type=float,\n help=\"learning rate to use for model\")\n\n# Create an arg for training directory \nparser.add_argument(\"--train_dir\",\n default=\"data/pizza_steak_sushi/train\",\n type=str,\n help=\"directory file path to training data in standard image classification format\")\n\n# Create an arg for test directory \nparser.add_argument(\"--test_dir\",\n default=\"data/pizza_steak_sushi/test\",\n type=str,\n help=\"directory file path to testing data in standard image classification format\")\n\n# Get our arguments from the parser\nargs = parser.parse_args()\n\n# Setup hyperparameters\nNUM_EPOCHS = args.num_epochs\nBATCH_SIZE = args.batch_size\nHIDDEN_UNITS = args.hidden_units\nLEARNING_RATE = args.learning_rate\nprint(f\"[INFO] Training a model for {NUM_EPOCHS} epochs with batch size {BATCH_SIZE} using {HIDDEN_UNITS} hidden units and a learning rate of {LEARNING_RATE}\")\n\n# Setup directories\ntrain_dir = args.train_dir\ntest_dir = args.test_dir\nprint(f\"[INFO] Training data file: {train_dir}\")\nprint(f\"[INFO] Testing data file: {test_dir}\")\n\n# Setup target device\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\n# Create transforms\ndata_transform = transforms.Compose([\n transforms.Resize((64, 64)),\n transforms.ToTensor()\n])\n\n# Create DataLoaders with help from data_setup.py\ntrain_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(\n train_dir=train_dir,\n test_dir=test_dir,\n transform=data_transform,\n batch_size=BATCH_SIZE\n)\n\n# Create model with help from model_builder.py\nmodel = model_builder.TinyVGG(\n input_shape=3,\n hidden_units=HIDDEN_UNITS,\n output_shape=len(class_names)\n).to(device)\n\n# Set loss and optimizer\nloss_fn = torch.nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters(),\n lr=LEARNING_RATE)\n\n# Start training with help from engine.py\nengine.train(model=model,\n train_dataloader=train_dataloader,\n test_dataloader=test_dataloader,\n loss_fn=loss_fn,\n optimizer=optimizer,\n epochs=NUM_EPOCHS,\n device=device)\n\n# Save the model with help from utils.py\nutils.save_model(model=model,\n target_dir=\"models\",\n model_name=\"05_going_modular_script_mode_tinyvgg_model.pth\")",
"Writing train.py\n"
],
[
"!python train.py --num_epochs 5 --batch_size 128 --hidden_units 128 --learning_rate 0.0003",
"[INFO] Training a model for 5 epochs with batch size 128 using 128 hidden units and a learning rate of 0.0003\n[INFO] Training data file: data/pizza_steak_sushi/train\n[INFO] Testing data file: data/pizza_steak_sushi/test\n 0% 0/5 [00:00<?, ?it/s]Epoch: 1 | train_loss: 1.0994 | train_acc: 0.3082 | test_loss: 1.0933 | test_acc: 0.3333\n 20% 1/5 [00:01<00:07, 1.94s/it]Epoch: 2 | train_loss: 1.0897 | train_acc: 0.3859 | test_loss: 1.0803 | test_acc: 0.3733\n 40% 2/5 [00:03<00:04, 1.65s/it]Epoch: 3 | train_loss: 1.0683 | train_acc: 0.3926 | test_loss: 1.0466 | test_acc: 0.4800\n 60% 3/5 [00:04<00:03, 1.56s/it]Epoch: 4 | train_loss: 1.0318 | train_acc: 0.4898 | test_loss: 1.0320 | test_acc: 0.4267\n 80% 4/5 [00:06<00:01, 1.55s/it]Epoch: 5 | train_loss: 0.9779 | train_acc: 0.5560 | test_loss: 1.0151 | test_acc: 0.4000\n100% 5/5 [00:07<00:00, 1.57s/it]\n[INFO] Saving model to: models/05_going_modular_script_mode_tinyvgg_model.pth\n"
]
],
[
[
"## 3. Create a Python script to predict (such as `predict.py`) on a target image given a file path with a saved model.\n\n* For example, you should be able to run the command `python predict.py some_image.jpeg` and have a trained PyTorch model predict on the image and return its prediction.\n* To see example prediction code, check out the [predicting on a custom image section in notebook 04](https://www.learnpytorch.io/04_pytorch_custom_datasets/#113-putting-custom-image-prediction-together-building-a-function). \n* You may also have to write code to load in a trained model.",
"_____no_output_____"
]
],
[
[
"%%writefile predict.py\nimport torch\nimport torchvision\nimport argparse\n\nimport model_builder\n\n# Creating a parser\nparser = argparse.ArgumentParser()\n\n# Get an image path\nparser.add_argument(\"--image\",\n help=\"target image filepath to predict on\")\n\n# Get a model path\nparser.add_argument(\"--model_path\",\n default=\"models/05_going_modular_script_mode_tinyvgg_model.pth\",\n type=str,\n help=\"target model to use for prediction filepath\")\n\nargs = parser.parse_args()\n\n# Setup class names\nclass_names = [\"pizza\", \"steak\", \"sushi\"]\n\n# Setup device\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\n# Get the image path\nIMG_PATH = args.image\nprint(f\"[INFO] Predicting on {IMG_PATH}\")\n\n# Function to load in the model\ndef load_model(filepath=args.model_path):\n # Need to use same hyperparameters as saved model \n model = model_builder.TinyVGG(input_shape=3,\n hidden_units=128,\n output_shape=3).to(device)\n\n print(f\"[INFO] Loading in model from: {filepath}\")\n # Load in the saved model state dictionary from file \n model.load_state_dict(torch.load(filepath))\n\n return model\n\n# Function to load in model + predict on select image\ndef predict_on_image(image_path=IMG_PATH, filepath=args.model_path):\n # Load the model\n model = load_model(filepath)\n\n # Load in the image and turn it into torch.float32 (same type as model)\n image = torchvision.io.read_image(str(IMG_PATH)).type(torch.float32)\n\n # Preprocess the image to get it between 0 and 1\n image = image / 255.\n\n # Resize the image to be the same size as the model\n transform = torchvision.transforms.Resize(size=(64, 64))\n image = transform(image) \n\n # Predict on image\n model.eval()\n with torch.inference_mode():\n # Put image to target device\n image = image.to(device)\n\n # Get pred logits\n pred_logits = model(image.unsqueeze(dim=0)) # make sure image has batch dimension (shape: [batch_size, height, width, color_channels])\n\n # Get pred probs\n pred_prob = torch.softmax(pred_logits, dim=1)\n\n # Get pred labels\n pred_label = torch.argmax(pred_prob, dim=1)\n pred_label_class = class_names[pred_label]\n\n print(f\"[INFO] Pred class: {pred_label_class}, Pred prob: {pred_prob.max():.3f}\")\n\nif __name__ == \"__main__\":\n predict_on_image()",
"Writing predict.py\n"
],
[
"!python predict.py --image data/pizza_steak_sushi/test/sushi/175783.jpg",
"[INFO] Predicting on data/pizza_steak_sushi/test/sushi/175783.jpg\n[INFO] Loading in model from: models/05_going_modular_script_mode_tinyvgg_model.pth\n[INFO] Pred class: steak, Pred prob: 0.376\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecf8a0d33e867a44fe874950044ac55b28cb5bfb | 178,939 | ipynb | Jupyter Notebook | clean_code/plots/hubble.ipynb | emilleishida/resspect_metric | 92f0b5d9de9cd6a031ec67fd76f8d302be0efef8 | [
"MIT"
] | 1 | 2021-08-11T13:04:38.000Z | 2021-08-11T13:04:38.000Z | clean_code/plots/.ipynb_checkpoints/hubble-checkpoint.ipynb | emilleishida/resspect_metric | 92f0b5d9de9cd6a031ec67fd76f8d302be0efef8 | [
"MIT"
] | 1 | 2021-08-16T10:40:45.000Z | 2021-08-16T10:40:45.000Z | clean_code/plots/hubble.ipynb | emilleishida/resspect_metric | 92f0b5d9de9cd6a031ec67fd76f8d302be0efef8 | [
"MIT"
] | null | null | null | 311.198261 | 71,104 | 0.906549 | [
[
[
"import matplotlib.pylab as plt\nimport pandas as pd\nimport numpy as np\nfrom collections import OrderedDict\n\nfrom astropy.cosmology import wCDM\nfrom astropy.cosmology import FlatLambdaCDM, FlatwCDM",
"_____no_output_____"
],
[
"# get correct cosmology\n\nh0 = 70\ncosmo = wCDM(H0=h0, Om0=0.3, w0=-1, Ode0=0.7)\n\nxaxis = np.arange(0.001,1.5,0.015)\ntheor_dist = np.array([cosmo.distmod(z).value for z in xaxis])",
"_____no_output_____"
],
[
"# read SALT2mu results\ncases_ddf = ['perfect3000', 'random3000', 'fiducial3000', '99SNIa1SNII', '99SNIa1SNIbc', \n '99SNIa1SNIax', '99.8SNIa0.2SNIa-91bg', '99.1SNIa0.9CART', '99.9SNIa0.1AGN']\n\ncolors_ddf = ['black', 'tab:red', 'tab:blue', 'orange', 'brown', 'green', 'purple', 'cyan', 'pink']\n\ndata_ddf = {}\n\nfor name in cases_ddf:\n fname = '/media/RESSPECT/data/PLAsTiCC/for_metrics/final_data/DDF/v0/fitres/'+ \\\n 'test_salt2mu_lowz_withbias_' + name + '.fitres'\n\n data_temp = pd.read_csv(fname, delim_whitespace=True, comment='#')\n \n if 'perfect' not in name:\n fitres_temp = pd.read_csv(fname, delim_whitespace=True, comment='#')\n \n # 11 are Ia and 1 are lowz, different sims\n mask = np.logical_and(fitres_temp['SIM_TYPE_INDEX'].values != 11,\n fitres_temp['SIM_TYPE_INDEX'].values != 1)\n fitres_temp3 = fitres_temp[mask]\n\n # remove duplicated redshift\n z_all = []\n for j in range(fitres_temp3.shape[0]):\n \n z = fitres_temp3.iloc[j]['SIM_ZCMB']\n z_new = z\n if z in z_all:\n while z_new in z_all:\n z_new = z + np.random.normal(loc=0, scale=0.001)\n \n fitres_temp3.loc[j, 'SIM_ZCMB'] = z_new\n z_all.append(z_new)\n \n indx = np.argsort(z_all)\n fitres_temp2 = fitres_temp3[['SIM_ZCMB', 'MU', 'MUERR', 'MURES']].iloc[indx]\n data_ddf[name] = fitres_temp2\n else:\n data_ddf[name] = data_temp",
"/media/emille/git/COIN/RESSPECT_repo/venv/resspect_main/lib/python3.7/site-packages/ipykernel_launcher.py:33: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n"
],
[
"all_shapes = {'SNIa-91bg': 'o',\n 'SNIax': 's',\n 'SNII': 'd',\n 'SNIbc': 'X',\n 'AGN': '^',\n 'CART': 'v',\n 'TDE': '*',\n 'random': '<',\n 'fiducial': '>'}\n\nremap_dict_ddf = OrderedDict({\n 'perfect3000': 'Perfect', \n 'fiducial3000': 'Fiducial', \n 'random3000': 'Random',\n '72SNIa28SNII': 'SN-II 28',\n '75SNIa25SNII': 'SN-II 25', \n '90SNIa10SNII': 'SN-II 10',\n '95SNIa5SNII': 'SN-II 5',\n '98SNIa2SNII': 'SN-II 2',\n '99SNIa1SNII': 'SN-II 1',\n '95SNIa5SNIbc': 'SN-Ibc 5',\n '98SNIa2SNIbc': 'SN-Ibc 2',\n '99SNIa1SNIbc': 'SN-Ibc 1',\n '86SNIa14SNIax': 'SN-Iax 14',\n '90SNIa10SNIax': 'SN-Iax 10',\n '95SNIa5SNIax': 'SN-Iax 5',\n '98SNIa2SNIax': 'SN-Iax 2',\n '99SNIa1SNIax': 'SN-Iax 1',\n '99.1SNIa0.9CART': 'CART 0.9',\n '99.8SNIa0.2SNIa-91bg': 'SN-Ia-91bg 0.2',\n '99.9SNIa0.1AGN': 'AGN 0.1',\n })\n\nremap_dict_wfd = OrderedDict({\n 'perfect3000': 'Perfect', \n 'fiducial3000': 'Fiducial', \n 'random3000': 'Random',\n '72SNIa28SNII': 'SN-II 28',\n '75SNIa25SNII': 'SN-II 25', \n '90SNIa10SNII': 'SN-II 10',\n '95SNIa5SNII': 'SN-II 5',\n '98SNIa2SNII': 'SN-II 2',\n '99SNIa1SNII': 'SN-II 1',\n '90SNIa10SNIbc': 'SN-Ibc 10',\n '95SNIa5SNIbc': 'SN-Ibc 5',\n '98SNIa2SNIbc': 'SN-Ibc 2',\n '99SNIa1SNIbc': 'SN-Ibc 1',\n '75SNIa25SNIax': 'SN-Iax 25',\n '90SNIa10SNIax': 'SN-Iax 10',\n '95SNIa5SNIax': 'SN-Iax 5',\n '98SNIa2SNIax': 'SN-Iax 2',\n '99SNIa1SNIax': 'SN-Iax 1',\n '95SNIa5SNIa-91bg': 'SN-Ia-91bg 5',\n '98SNIa2SNIa-91bg': 'SN-Ia-91bg 2',\n '99SNIa1SNIa-91bg': 'SN-Ia-91bg 1',\n '95SNIa5AGN': 'AGN 5',\n '98SNIa2AGN': 'AGN 2',\n '99SNIa1AGN': 'AGN 1',\n '99.6SNIa0.4TDE': 'TDE 0.4',\n '99.7SNIa0.3CART': 'CART 0.3',\n })",
"_____no_output_____"
],
[
"# read SALT2mu results\ncases_wfd = ['perfect3000', 'random3000', 'fiducial3000', '99SNIa1SNII', '99SNIa1SNIbc', \n '99SNIa1SNIax', '99SNIa1SNIa-91bg', '99.7SNIa0.3CART', '99SNIa1AGN','99.6SNIa0.4TDE']\n\ncolors_wfd = ['black', 'tab:red', 'tab:blue', 'orange', 'brown', 'green', 'purple', 'cyan', 'pink', \n 'magenta']\n\ndata_wfd = {}\n\nfor name in cases_wfd:\n fname = '/media/RESSPECT/data/PLAsTiCC/for_metrics/final_data/WFD/v0/fitres/'+ \\\n 'test_salt2mu_lowz_withbias_' + name + '.fitres'\n\n data_temp = pd.read_csv(fname, delim_whitespace=True, comment='#')\n \n if 'perfect' not in name:\n fitres_temp = pd.read_csv(fname, delim_whitespace=True, comment='#')\n \n # 11 are Ia and 1 are lowz, different sims\n mask = np.logical_and(fitres_temp['SIM_TYPE_INDEX'].values != 11,\n fitres_temp['SIM_TYPE_INDEX'].values != 1)\n fitres_temp3 = fitres_temp[mask]\n\n # remove duplicated redshift\n z_all = []\n for j in range(fitres_temp3.shape[0]):\n \n z = fitres_temp3.iloc[j]['SIM_ZCMB']\n z_new = z\n if z in z_all:\n while z_new in z_all:\n z_new = z + np.random.normal(loc=0, scale=0.001)\n \n fitres_temp3.at[j, 'SIM_ZCMB'] = z_new\n z_all.append(z_new)\n \n indx = np.argsort(z_all)\n fitres_temp2 = fitres_temp3[['SIM_ZCMB', 'MU', 'MUERR','MURES']].iloc[indx]\n data_wfd[name] = fitres_temp2\n else:\n data_wfd[name] = data_temp\n",
"/media/emille/git/COIN/RESSPECT_repo/venv/resspect_main/lib/python3.7/site-packages/pandas/core/frame.py:3041: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self.loc[index, col] = value\n"
],
[
"plt.figure(figsize=(8,10))\n \nax1 = plt.subplot(2,1,1)\nplt.plot(xaxis, theor_dist, lw=2, color='black', label=r'$\\Lambda$CDM')\n\nfor i in range(3, len(cases_ddf)):\n for key in all_shapes:\n if key in cases_ddf[i]:\n cont = key\n \n plt.scatter(data_ddf[cases_ddf[i]]['SIM_ZCMB'], data_ddf[cases_ddf[i]]['MU'],\n marker=all_shapes[cont], s=40, \n color=colors_ddf[i], label=remap_dict_ddf[cases_ddf[i]], alpha=0.5)\n \n plt.errorbar(data_ddf[cases_ddf[i]]['SIM_ZCMB'], data_ddf[cases_ddf[i]]['MU'], \n yerr=data_ddf[cases_ddf[i]]['MUERR'], fmt=' ',\n color=colors_ddf[i], alpha=0.85)\n\n#plt.xlabel('z', fontsize=16)\nplt.ylabel('mu', fontsize=16)\nplt.legend(loc='lower right', title='DDF', frameon=False)\nplt.xticks(ax1.get_xticks(), [])\n\nax2 = plt.subplot(2,1,2, sharex=ax1, sharey=ax1)\nplt.plot(xaxis, theor_dist, lw=2, color='black', label=r'$\\Lambda$CDM')\n\nfor i in range(3, len(cases_wfd)):\n for key in all_shapes:\n if key in cases_wfd[i]:\n cont = key\n \n plt.scatter(data_wfd[cases_wfd[i]]['SIM_ZCMB'], data_wfd[cases_wfd[i]]['MU'],\n marker=all_shapes[cont], s=40, \n color=colors_wfd[i], label=remap_dict_wfd[cases_wfd[i]], alpha=0.5)\n \n plt.errorbar(data_wfd[cases_wfd[i]]['SIM_ZCMB'], data_wfd[cases_wfd[i]]['MU'], \n yerr=data_wfd[cases_wfd[i]]['MUERR'], fmt=' ',\n color=colors_wfd[i], alpha=0.85)\n\nplt.xlabel('z', fontsize=16)\nplt.ylabel('mu', fontsize=16)\nplt.legend(loc='lower right', title='WFD', frameon=False)\n\nplt.subplots_adjust(hspace=0.001)\n\nplt.show()",
"_____no_output_____"
],
[
"# simulated model\nxaxis = np.arange(0.001,1.5, 0.05)\ncosmo = FlatLambdaCDM(H0=70, Om0=0.3)\ntheor_dist = np.array([cosmo.distmod(z).value for z in xaxis])\n\nroot_dir = {}\nroot_dir['DDF'] = '/media/RESSPECT/data/PLAsTiCC/for_metrics/final_data/DDF/v1/'\nroot_dir['WFD'] = '/media/RESSPECT/data/PLAsTiCC/for_metrics/final_data/WFD/v1/'\n\n# read summary table\nsumm = {}\nfor field in root_dir.keys():\n fname_summ = root_dir[field] + 'summary_stats.csv'\n summ[field] = pd.read_csv(fname_summ)\n \ndist = {}\n\nfor field in root_dir.keys():\n \n dist[field] = {}\n for i in range(summ[field].shape[0]):\n \n case = summ[field]['case'].iloc[i]\n \n dist[field][case] = {}\n \n cosmo_wfit = FlatwCDM(H0=70, Om0=summ[field]['wfit_om_lowz'].iloc[i], \n w0=summ[field]['wfit_w_lowz'].iloc[i])\n dist[field][case]['wfit'] = [cosmo_wfit.distmod(z).value for z in xaxis]\n \n cosmo_stan = FlatwCDM(H0=70, Om0=summ[field]['stan_om_lowz'].iloc[i], \n w0=summ[field]['stan_w_lowz'].iloc[i])\n dist[field][case]['stan'] = [cosmo_stan.distmod(z).value for z in xaxis]\n ",
"_____no_output_____"
],
[
"dist['WFD']['perfect3000']",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,10))\n \nax1 = plt.subplot(2,1,1)\nplt.plot(xaxis, np.full((xaxis.shape[0]),0), lw=2, color='black', label=r'$\\Lambda$CDM')\n\nfor i in range(3, len(data_ddf)):\n for key in all_shapes:\n if key in cases_ddf[i]:\n cont = key\n \n plt.scatter(data_ddf[cases_ddf[i]]['SIM_ZCMB'], data_ddf[cases_ddf[i]]['MURES'],\n marker=all_shapes[cont], s=40, \n color=colors_ddf[i], label=remap_dict_ddf[cases_ddf[i]], alpha=0.5)\n \n plt.errorbar(data_ddf[cases_ddf[i]]['SIM_ZCMB'], data_ddf[cases_ddf[i]]['MURES'], \n yerr=data_ddf[cases_ddf[i]]['MUERR'], fmt=' ',\n color=colors_ddf[i], alpha=0.85)\n #plt.plot(xaxis, dist['DDF'][key]['stan'] - theor_dist, \n # label='stan - ' + remap_dict[key], color=cs11[i])\n\n#plt.xlabel('z', fontsize=16)\nplt.ylabel('mu', fontsize=16)\nplt.legend(loc='lower right', title='DDF', frameon=False)\nplt.xticks(ax1.get_xticks(), [])\n\nax2 = plt.subplot(2,1,2, sharex=ax1, sharey=ax1)\nplt.plot(xaxis, np.full((xaxis.shape[0]),0), lw=2, color='black', label=r'$\\Lambda$CDM')\n\nfor i in range(3, len(data_wfd)):\n for key in all_shapes:\n if key in cases_wfd[i]:\n cont = key\n\n plt.scatter(data_wfd[cases_wfd[i]]['SIM_ZCMB'], data_wfd[cases_wfd[i]]['MURES'],\n marker=all_shapes[cont], s=40, \n color=colors_wfd[i], label=remap_dict_wfd[cases_wfd[i]], alpha=0.5)\n \n plt.errorbar(data_wfd[cases_wfd[i]]['SIM_ZCMB'], data_wfd[cases_wfd[i]]['MURES'], \n yerr=data_wfd[cases_wfd[i]]['MUERR'], fmt=' ',\n color=colors_wfd[i], alpha=0.85)\n\nplt.xlabel('z', fontsize=16)\nplt.ylabel('mu', fontsize=16)\nplt.legend(loc='lower right', title='WFD', frameon=False)\n\nplt.subplots_adjust(hspace=0.001)\n\nplt.show()",
"_____no_output_____"
],
[
"fname = '/media/RESSPECT/data/PLAsTiCC/for_metrics/final_data/WFD/v1/M0DIF/test_salt2mu_lowz_withbias_perfect3000.M0DIF'\ndata = pd.read_csv(fname, delim_whitespace=True, comment='#')",
"_____no_output_____"
],
[
"plt.errorbar(data['z'], data['MUDIF'], yerr=data['MUDIFERR'], fmt='o')\nplt.plot(xaxis, dist['WFD']['perfect3000']['stan']-theor_dist , \n label='stan', color='green')\nplt.plot(xaxis, dist['WFD']['perfect3000']['wfit']-theor_dist , \n label='wfit', color='red')\nplt.plot([0,1.5], [0,0])\nplt.ylim(-0.3,0.2)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(xaxis, theor_dist)\nplt.errorbar(data['z'], data['MUREF'], yerr=data['MUDIFERR'], fmt='o')\nplt.plot(xaxis, dist['WFD']['perfect3000']['stan'], \n label='perfect', color='green')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf8a6a437716192431e6075a8d1cfaabc6c8684 | 150,565 | ipynb | Jupyter Notebook | jupyter_book/basic_ml/16_visualzation for decision tree.ipynb | jsqwert/ML-Class | 276c052bbe37914c289391d0d2d1be03e471086d | [
"Apache-2.0"
] | null | null | null | jupyter_book/basic_ml/16_visualzation for decision tree.ipynb | jsqwert/ML-Class | 276c052bbe37914c289391d0d2d1be03e471086d | [
"Apache-2.0"
] | null | null | null | jupyter_book/basic_ml/16_visualzation for decision tree.ipynb | jsqwert/ML-Class | 276c052bbe37914c289391d0d2d1be03e471086d | [
"Apache-2.0"
] | null | null | null | 312.375519 | 138,490 | 0.918002 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport warnings\nfrom sklearn import tree #决策树\nfrom sklearn.tree import DecisionTreeClassifier #分类树\nfrom sklearn.model_selection import train_test_split#测试集和训练集\nfrom sklearn.pipeline import Pipeline #管道\nfrom sklearn.feature_selection import SelectKBest #特征选择\nfrom sklearn.feature_selection import chi2 #卡方统计量\n\nfrom sklearn.preprocessing import MinMaxScaler #数据归一化\nfrom sklearn.decomposition import PCA #主成分分析\nfrom sklearn.model_selection import GridSearchCV #网格搜索交叉验证,用于选择最优的参数",
"_____no_output_____"
],
[
"## 设置属性防止中文乱码\nmpl.rcParams['font.sans-serif'] = [u'SimHei']\nmpl.rcParams['axes.unicode_minus'] = False",
"_____no_output_____"
],
[
"warnings.filterwarnings('ignore', category=FutureWarning)",
"_____no_output_____"
],
[
"iris_feature_E = 'sepal length', 'sepal width', 'petal length', 'petal width'\niris_feature_C = '花萼长度', '花萼宽度', '花瓣长度', '花瓣宽度'\niris_class = 'Iris-setosa', 'Iris-versicolor', 'Iris-virginica'",
"_____no_output_____"
],
[
"#读取数据\npath = './datas/iris.data' \ndata = pd.read_csv(path, header=None)\nx=data[list(range(4))]#获取X变量\ny=pd.Categorical(data[4]).codes#把Y转换成分类型的0,1,2\nprint(\"总样本数目:%d;特征属性数目:%d\" % x.shape)",
"总样本数目:150;特征属性数目:4\n"
],
[
"data.head(5)",
"_____no_output_____"
],
[
"#数据进行分割(训练数据和测试数据)\nx_train1, x_test1, y_train1, y_test1 = train_test_split(x, y, train_size=0.8, random_state=14)",
"_____no_output_____"
],
[
"x_train, x_test, y_train, y_test = x_train1, x_test1, y_train1, y_test1\nprint (\"训练数据集样本数目:%d, 测试数据集样本数目:%d\" % (x_train.shape[0], x_test.shape[0]))\ny_train = y_train.astype(np.int)\ny_test = y_test.astype(np.int)",
"训练数据集样本数目:120, 测试数据集样本数目:30\n"
],
[
"y_train",
"_____no_output_____"
],
[
"#数据标准化\n#StandardScaler (基于特征矩阵的列,将属性值转换至服从正态分布)\n#标准化是依照特征矩阵的列处理数据,其通过求z-score的方法,将样本的特征值转换到同一量纲下\n#常用与基于正态分布的算法,比如回归\n\n#数据归一化\n#MinMaxScaler (区间缩放,基于最大最小值,将数据转换到0,1区间上的)\n#提升模型收敛速度,提升模型精度\n#常见用于神经网络\n\n#Normalizer (基于矩阵的行,将样本向量转换为单位向量)\n#其目的在于样本向量在点乘运算或其他核函数计算相似性时,拥有统一的标准\n#常见用于文本分类和聚类、logistic回归中也会使用,有效防止过拟合\n\nss = MinMaxScaler ()\n#用标准化方法对数据进行处理并转换\nx_train = ss.fit_transform(x_train)\nx_test = ss.transform(x_test)\nprint (\"原始数据各个特征属性的调整最小值:\",ss.min_)\nprint (\"原始数据各个特征属性的缩放数据值:\",ss.scale_)",
"原始数据各个特征属性的调整最小值: [-1.19444444 -0.83333333 -0.18965517 -0.04166667]\n原始数据各个特征属性的缩放数据值: [ 0.27777778 0.41666667 0.17241379 0.41666667]\n"
],
[
"#特征选择:从已有的特征中选择出影响目标值最大的特征属性\n#常用方法:{ 分类:F统计量、卡方系数,互信息mutual_info_classif\n #{ 连续:皮尔逊相关系数 F统计量 互信息mutual_info_classif\n#SelectKBest(卡方系数)\n\nch2 = SelectKBest(chi2,k=3)#在当前的案例中,使用SelectKBest这个方法从4个原始的特征属性,选择出来3个\n#K默认为10\n#如果指定了,那么就会返回你所想要的特征的个数\nx_train = ch2.fit_transform(x_train, y_train)#训练并转换\nx_test = ch2.transform(x_test)#转换\n\nselect_name_index = ch2.get_support(indices=True)\nprint (\"对类别判断影响最大的三个特征属性分布是:\",ch2.get_support(indices=False))\nprint(select_name_index)",
"对类别判断影响最大的三个特征属性分布是: [ True False True True]\n[0 2 3]\n"
],
[
"#降维:对于数据而言,如果特征属性比较多,在构建过程中,会比较复杂,这个时候考虑将多维(高维)映射到低维的数据\n#常用的方法:\n#PCA:主成分分析(无监督)\n#LDA:线性判别分析(有监督)类内方差最小,人脸识别,通常先做一次pca\n\npca = PCA(n_components=2)#构建一个pca对象,设置最终维度是2维\n# #这里是为了后面画图方便,所以将数据维度设置了2维,一般用默认不设置参数就可以\n\nx_train = pca.fit_transform(x_train)#训练并转换\nx_test = pca.transform(x_test)#转换",
"_____no_output_____"
],
[
"#模型的构建\nmodel = DecisionTreeClassifier(criterion='entropy',random_state=0, min_samples_split=10)#另外也可选gini \n#模型训练\nmodel.fit(x_train, y_train)\n#模型预测\ny_test_hat = model.predict(x_test) ",
"_____no_output_____"
],
[
"#模型结果的评估\ny_test2 = y_test.reshape(-1)\nresult = (y_test2 == y_test_hat)\nprint (\"准确率:%.2f%%\" % (np.mean(result) * 100))\n#实际可通过参数获取\nprint (\"Score:\", model.score(x_test, y_test))#准确率\nprint (\"Classes:\", model.classes_)",
"准确率:96.67%\nScore: 0.966666666667\nClasses: [0 1 2]\n"
],
[
"# 方式一:输出形成dot文件,然后使用graphviz的dot命令将dot文件转换为pdf\nfrom sklearn import tree\nwith open('iris.dot', 'w') as f:\n # 将模型model输出到给定的文件中\n f = tree.export_graphviz(model, out_file=f)\n# 命令行执行dot命令: dot -Tpdf iris.dot -o iris.pdf",
"_____no_output_____"
],
[
"# 方式二:直接使用pydotplus插件生成pdf文件\nfrom sklearn import tree\nimport pydotplus \ndot_data = tree.export_graphviz(model, out_file=None) \ngraph = pydotplus.graph_from_dot_data(dot_data) \n# graph.write_pdf(\"iris2.pdf\") \ngraph.write_png(\"0.png\")",
"_____no_output_____"
],
[
"# 方式三:直接生成图片\nfrom sklearn import tree\nfrom IPython.display import Image\nimport pydotplus\ndot_data = tree.export_graphviz(model, out_file=None, \n feature_names=['sepal length', 'sepal width', 'petal length', 'petal width'], \n class_names=['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'], \n filled=True, rounded=True, \n special_characters=True) \ngraph = pydotplus.graph_from_dot_data(dot_data) \nImage(graph.create_png()) ",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf8b3c9c4365c72e0b53c4e1439388bdac1f903 | 17,150 | ipynb | Jupyter Notebook | Probability/pdfExploration.ipynb | carlos-faria/Stochastic-Processes | 2ee57a1029566b606af781ec5d307eb33434fb79 | [
"MIT"
] | null | null | null | Probability/pdfExploration.ipynb | carlos-faria/Stochastic-Processes | 2ee57a1029566b606af781ec5d307eb33434fb79 | [
"MIT"
] | null | null | null | Probability/pdfExploration.ipynb | carlos-faria/Stochastic-Processes | 2ee57a1029566b606af781ec5d307eb33434fb79 | [
"MIT"
] | null | null | null | 70 | 11,884 | 0.815452 | [
[
[
"# Papoulis",
"_____no_output_____"
],
[
" TODO",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append('../lib')\n\nimport sympy\nimport sympy.functions.elementary.exponential as symExp\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport visuals\nimport auxlib",
"_____no_output_____"
],
[
"c, t, U = sympy.symbols('c, t, U')",
"_____no_output_____"
],
[
"f = c*symExp.exp(-c*t)\nf",
"_____no_output_____"
],
[
"paramsToSub = {c:2}",
"_____no_output_____"
],
[
"auxlib.integrateFunc(f, t, bounds=[0, sympy.oo], paramsToSub=paramsToSub)",
"_____no_output_____"
],
[
"func_int = auxlib.integrateFunc(f, t, paramsToSub=paramsToSub, conds='piecewise') \nfunc_int - auxlib.get_constant()",
"_____no_output_____"
]
],
[
[
"sympy.limit(q, t, sympy.oo)\n\n$\\infty$",
"_____no_output_____"
]
],
[
[
"from sympy.utilities.lambdify import lambdify\nfunc = lambdify(t, func_int - auxlib.get_constant(), 'scipy') # returns a numpy-ready function1\nfunc",
"_____no_output_____"
],
[
"interval = np.linspace(-1, 5, num=1000)\nvalues = func(interval)",
"_____no_output_____"
]
],
[
[
"def pdfInfo(interval, values, rng, xlim=None):\n fig, ax = plt.subplots()\n ax.plot(interval, values, 'k')\n \n if not np.isscalar(rng) and len(rng)==1:\n rng = rng[0]\n \n if not np.isscalar(rng):\n x_1 = np.argmax(interval>rng[0])\n x_2 = np.argmax(interval>rng[1])\n ax.fill_between(np.linspace(rng[0], rng[1], num=len(values[x_1:x_2])), values[x_1:x_2], color='k')\n else:\n x = np.argmax(interval>rng)\n x_0 = interval[0]\n ax.fill_between(np.linspace(x_0, rng, num=len(values[0:x])), values[0:x], color='k')\n \n ax.axhline(0, c='k', alpha=.5)\n ax.axvline(0, c='k', alpha=.5)\n \n ax.yaxis.grid(b=True, which='minor', color='k', linestyle='--', alpha=.2, zorder=0) \n ax.yaxis.grid(b=True, which='major', color='k', linestyle='--', alpha=.5, zorder=0) \n\n ax.set_facecolor('xkcd:white')\n ax.spines[\"top\"].set_visible(False)\n ax.spines[\"right\"].set_visible(False)\n ax.spines[\"left\"].set_visible(False)\n ax.spines[\"bottom\"].set_visible(False)\n \n if xlim != None:\n ax.set_xlim(xlim)",
"_____no_output_____"
]
],
[
[
"visuals.plotPdf(interval, values, [-.5, 3], xlim=[-2, 7])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf8bfcc0652a826f9e851a03853708b557cbec1 | 78,221 | ipynb | Jupyter Notebook | Chapter02/tests/Exercise2.03.ipynb | PacktWorkshops/Applied-Unsupervised-Learning-with-Python | ddafe8ed6917e1e0020489170dacdc820e4800bb | [
"MIT"
] | 17 | 2020-04-06T09:26:34.000Z | 2022-03-08T04:38:27.000Z | Chapter02/tests/Exercise2.03.ipynb | PacktWorkshops/The-Unsupervised-Learning-Workshop | ddafe8ed6917e1e0020489170dacdc820e4800bb | [
"MIT"
] | null | null | null | Chapter02/tests/Exercise2.03.ipynb | PacktWorkshops/The-Unsupervised-Learning-Workshop | ddafe8ed6917e1e0020489170dacdc820e4800bb | [
"MIT"
] | 22 | 2020-02-25T07:20:56.000Z | 2022-03-09T01:03:58.000Z | 341.576419 | 36,348 | 0.930402 | [
[
[
"from sklearn.cluster import AgglomerativeClustering\nfrom sklearn.datasets import make_blobs\nimport matplotlib.pyplot as plt\nfrom scipy.cluster.hierarchy import linkage, dendrogram, fcluster\n\nac = AgglomerativeClustering(n_clusters = 8, affinity=\"euclidean\", linkage=\"average\")\nX, y = make_blobs(n_samples=1000, centers=8, n_features=2, random_state=800)",
"_____no_output_____"
],
[
"distances = linkage(X, method=\"centroid\", metric=\"euclidean\")\nsklearn_clusters = ac.fit_predict(X)\nscipy_clusters = fcluster(distances, 3, criterion=\"distance\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(6,4))\nplt.title(\"Clusters from Sci-Kit Learn Approach\")\nplt.scatter(X[:, 0], X[:, 1], c = sklearn_clusters ,s=50, cmap='tab20b')\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(6,4))\nplt.title(\"Clusters from SciPy Approach\")\nplt.scatter(X[:, 0], X[:, 1], c = scipy_clusters ,s=50, cmap='tab20b')\nplt.show()",
"_____no_output_____"
],
[
"# Unit Test",
"_____no_output_____"
],
[
"from sklearn.cluster import AgglomerativeClustering\nfrom sklearn.datasets import make_blobs\nfrom scipy.cluster.hierarchy import linkage, dendrogram, fcluster\nimport unittest\nclass TestAgglomerativeClustering(unittest.TestCase): \n X2, y2 = make_blobs(n_samples=1000, centers=8, n_features=2, random_state=800)\n \n def test_X_y(self):\n self.assertEqual(len(self.X2),len(X))\n self.assertEqual(len(self.y2),len(y))\n \n def test_ac(self):\n ac2 = AgglomerativeClustering(n_clusters = 8, affinity=\"euclidean\", linkage=\"average\")\n self.assertMultiLineEqual(str(ac2),str(ac))\n \n def test_distances(self):\n distances2 = linkage(self.X2, method=\"centroid\", metric=\"euclidean\")\n self.assertEqual(len(distances2),len(distances))\n \n def test_sklearn_clusters(self):\n ac3 = AgglomerativeClustering(n_clusters = 8, affinity=\"euclidean\", linkage=\"average\")\n sklearn_clusters2 = ac3.fit_predict(self.X2)\n self.assertEqual(len(sklearn_clusters2),len(sklearn_clusters))\n \n \n def test_scipy_clusters(self):\n distances12 = linkage(self.X2, method=\"centroid\", metric=\"euclidean\")\n scipy_clusters2 = fcluster(distances12, 3, criterion=\"distance\")\n self.assertEqual(len(scipy_clusters2),len(scipy_clusters2))\n \n ",
"_____no_output_____"
],
[
"suite = unittest.TestLoader().loadTestsFromTestCase(TestAgglomerativeClustering)\nunittest.TextTestRunner(verbosity=2).run(suite)",
"test_X_y (__main__.TestAgglomerativeClustering) ... ok\ntest_ac (__main__.TestAgglomerativeClustering) ... ok\ntest_distances (__main__.TestAgglomerativeClustering) ... ok\ntest_scipy_clusters (__main__.TestAgglomerativeClustering) ... ok\ntest_sklearn_clusters (__main__.TestAgglomerativeClustering) ... ok\n\n----------------------------------------------------------------------\nRan 5 tests in 0.163s\n\nOK\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf8c15d4c226b512001bcb82ab52ce14fdb1911 | 8,530 | ipynb | Jupyter Notebook | completed/04. Let's write a cleaning function.ipynb | cjwinchester/cfj-2017 | db14686b0269303eb1db5942dd30b3a28775fb0b | [
"MIT"
] | 4 | 2017-10-20T02:56:21.000Z | 2019-04-10T14:59:31.000Z | completed/04. Let's write a cleaning function.ipynb | cjwinchester/cfj-2017 | db14686b0269303eb1db5942dd30b3a28775fb0b | [
"MIT"
] | 5 | 2020-03-24T15:29:43.000Z | 2021-06-01T21:50:07.000Z | completed/04. Let's write a cleaning function.ipynb | cjwinchester/cfj-2017 | db14686b0269303eb1db5942dd30b3a28775fb0b | [
"MIT"
] | 2 | 2020-08-18T19:21:49.000Z | 2020-12-15T04:28:34.000Z | 36.767241 | 401 | 0.571161 | [
[
[
"# Let's write a cleaning function\n\nYou can use Python string functions to do some basic data cleaning. (For data sets with more complex cleaning needs, you might want to use a power tool like [Open Refine](http://openrefine.org/).)\n\nHere, we're going to write a function that takes in one row of data as input, cleans up the pieces of data in the row, then _returns_ a cleaned version of that row. As we loop over the data in the file, we'll call this function on each row, then write out the clean row to a new file.\n\nLet's break down our tasks:\n\n- Write a cleaning function that accepts a row of raw data and returns a row of clean data\n- Open our CSV file of raw data\n- Open a CSV file to write the cleaned data into\n- Loop over the rows of raw data, passing each row to our cleaning function\n- Write out the clean data to the new file",
"_____no_output_____"
],
[
"### The data\n\nWe're going to be working with the FDIC's [list of failed banks](https://catalog.data.gov/dataset/fdic-failed-bank-list).",
"_____no_output_____"
],
[
"### Write the cleaning function\n\nFirst, we need to write our cleaning function -- let's call our function `clean_row()`. We need to decide whether the row it parses will be a dictionary (using `csv.DictReader`) or a list (using `csv.reader`).\n\nLet's use a dictionary.\n\nHere are the fields that we are going to include in our output file. The ones that need cleaning are in bold.\n\n- **Bank Name**: Sometimes has extra whitespace, needs to be uppercase, our house style dictates that ampersands should be replaced by the word \"and\"\n- **City**: Needs to be uppercase\n- ST\n- **Acquiring Institution**: Sometimes has extra whitespace, needs to be uppercase, our house style dictates that ampersands should be replaced by the word \"and\"\n- Closing Date",
"_____no_output_____"
]
],
[
[
"# first line defines the function and the argument\n# (\"row\" is an arbitrary variable name)\ndef clean_row(row):\n \n \"\"\"\n For the bank and institution name:\n - strip whitespace\n - uppercase the name\n - replace '&' with 'AND'\n \n n.b.: you can chain string methods together\n \"\"\"\n clean_bank = row['Bank Name'].strip().upper().replace('&', 'AND')\n clean_inst = row['Acquiring Institution'].strip().upper().replace('&', 'AND')\n\n # strip whitespace and upcase the city\n clean_city = row['City'].strip().upper()\n \n # return a dictionary of clean data\n # the keys ~must~ match the headers of our output file\n return {\n 'bank': clean_bank,\n 'inst': clean_inst,\n 'city': clean_city,\n 'st': row['ST'],\n 'c_date': row['Closing Date']\n }",
"_____no_output_____"
]
],
[
[
"### Use the cleaning function\n\nNow, in a `with` block, we'll do the following:\n\n- Read in `data/failed_banks.csv`\n- Open `banks-clean.csv` to write to\n- Loop over the rows of raw data\n- Call the cleaning function on each row\n- Write the returned (clean) data to `banks-clean.csv`",
"_____no_output_____"
]
],
[
[
"# import the csv library\nimport csv\n\n# open the two files\nwith open('data/failed-banks.csv', 'r') as infile, open('banks-clean.csv', 'w') as outfile:\n \n # create a DictReader object\n reader = csv.DictReader(infile)\n \n # create a DictWriter object\n # the fieldnames must exactly match the keys in the dictionary being returned\n # by our cleaning function\n writer = csv.DictWriter(outfile, fieldnames=['bank', 'inst', 'city', 'st', 'c_date'])\n \n # write out header row\n writer.writeheader()\n \n # loop over the rows of raw data\n # \"row\" is an arbitrary variable name\n for row in reader:\n \n # call the cleaning function on the row\n cleaned = clean_row(row)\n \n # write out the clean row\n writer.writerow(cleaned)",
"_____no_output_____"
]
],
[
[
"### _Extra credit_\n\nThe `Closing Date` field in the bank failure data is in this format: `6-Sep-2011`. In other words, day, then abbreviated month as text, then year.\n\nPython's built-in `datetime` module has [two methods](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior) that can help us reformat them: `strftime()` and `strptime()`.\n\n**Your task**: Add some code to the cleaning function to reformat the closing date in `yyyy-mm-dd` format. This will require doing some research into a module that we haven't discussed yet. (Good practice for when you're coding on your own.)\n\nBreaking it down into smaller tasks:\n\n- You'll need to import `datetime` from the `datetime` module: `from datetime import datetime`\n- Then figure out how to use `strptime()` to turn a `6-Sep-2011`-type string into a Python date object\n- Then figure out how to format that date object as `yyyy-mm-dd` using `strftime()`\n- Then add that functionality to the cleaning function and re-run the bank data\n\nGoogle is your friend here. Try searching for things like \"python strptime example.\" (Freebie: Here's a [handy guide](http://strftime.org/) to the date directives.) Noodle around in a cell. Get something working for one date -- a test string -- before setting your solution loose on the whole file. Try new things, see what happens, fail, find solutions. It's all part of the learning process.",
"_____no_output_____"
]
],
[
[
"import csv\nfrom datetime import datetime\n\ndef clean_row(row):\n\n clean_bank = row['Bank Name'].strip().upper().replace('&', 'AND')\n clean_inst = row['Acquiring Institution'].strip().upper().replace('&', 'AND')\n clean_city = row['City'].strip().upper()\n \n # reformat the date\n clean_date = datetime.strptime(row['Closing Date'], '%d-%b-%y').strftime('%Y-%m-%d')\n\n return {\n 'bank': clean_bank,\n 'inst': clean_inst,\n 'city': clean_city,\n 'st': row['ST'],\n 'c_date': clean_date\n }\n\nwith open('data/failed_banks.csv', 'r') as infile, open('banks-clean.csv', 'w') as outfile:\n \n reader = csv.DictReader(infile)\n writer = csv.DictWriter(outfile, fieldnames=['bank', 'inst', 'city', 'st', 'c_date'])\n \n writer.writeheader()\n \n for row in reader:\n cleaned = clean_row(row)\n writer.writerow(cleaned)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf8c53a7b96a3bdbfcd915c04b4836380c56e5c | 481,295 | ipynb | Jupyter Notebook | CustomerSegmentation.ipynb | BrianChegeGichau/Week13IP | 3c89b1945d5ccca0a225a743b130b82b144ad52c | [
"Unlicense"
] | null | null | null | CustomerSegmentation.ipynb | BrianChegeGichau/Week13IP | 3c89b1945d5ccca0a225a743b130b82b144ad52c | [
"Unlicense"
] | null | null | null | CustomerSegmentation.ipynb | BrianChegeGichau/Week13IP | 3c89b1945d5ccca0a225a743b130b82b144ad52c | [
"Unlicense"
] | null | null | null | 197.009824 | 138,094 | 0.829433 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecf8c68f8ba850718219732c441ed47676213a70 | 72,972 | ipynb | Jupyter Notebook | NLP/Learn_by_deeplearning.ai/Course 2 - Probabilistic Models/Labs/Week 2/C2-W2-assginment-Parts-of-Speech Tagging (POS).ipynb | tsuirak/skills | 22280be0870627c5dd84e069ec271aeeb6797831 | [
"MIT"
] | 362 | 2020-10-08T07:34:25.000Z | 2022-03-30T05:11:30.000Z | NLP/Learn_by_deeplearning.ai/Course 2 - Probabilistic Models/Labs/Week 2/C2-W2-assginment-Parts-of-Speech Tagging (POS).ipynb | abcd1758323829/skills | 195fad43e99de5efe6491817ad2b79e12665cc2a | [
"MIT"
] | 7 | 2020-07-07T16:10:23.000Z | 2021-06-04T08:17:55.000Z | NLP/Learn_by_deeplearning.ai/Course 2 - Probabilistic Models/Labs/Week 2/C2-W2-assginment-Parts-of-Speech Tagging (POS).ipynb | abcd1758323829/skills | 195fad43e99de5efe6491817ad2b79e12665cc2a | [
"MIT"
] | 238 | 2020-10-08T12:01:31.000Z | 2022-03-25T08:10:42.000Z | 41.390811 | 528 | 0.546744 | [
[
[
"# Assignment 2: Parts-of-Speech Tagging (POS)\n\nWelcome to the second assignment of Course 2 in the Natural Language Processing specialization. This assignment will develop skills in part-of-speech (POS) tagging, the process of assigning a part-of-speech tag (Noun, Verb, Adjective...) to each word in an input text. Tagging is difficult because some words can represent more than one part of speech at different times. They are **Ambiguous**. Let's look at the following example: \n\n- The whole team played **well**. [adverb]\n- You are doing **well** for yourself. [adjective]\n- **Well**, this assignment took me forever to complete. [interjection]\n- The **well** is dry. [noun]\n- Tears were beginning to **well** in her eyes. [verb]\n\nDistinguishing the parts-of-speech of a word in a sentence will help you better understand the meaning of a sentence. This would be critically important in search queries. Identifying the proper noun, the organization, the stock symbol, or anything similar would greatly improve everything ranging from speech recognition to search. By completing this assignment, you will: \n\n- Learn how parts-of-speech tagging works\n- Compute the transition matrix A in a Hidden Markov Model\n- Compute the transition matrix B in a Hidden Markov Model\n- Compute the Viterbi algorithm \n- Compute the accuracy of your own model \n",
"_____no_output_____"
],
[
"## Outline\n\n- [0 Data Sources](#0)\n- [1 POS Tagging](#1)\n - [1.1 Training](#1.1)\n - [Exercise 01](#ex-01)\n - [1.2 Testing](#1.2)\n - [Exercise 02](#ex-02)\n- [2 Hidden Markov Models](#2)\n - [2.1 Generating Matrices](#2.1)\n - [Exercise 03](#ex-03)\n - [Exercise 04](#ex-04)\n- [3 Viterbi Algorithm](#3)\n - [3.1 Initialization](#3.1)\n - [Exercise 05](#ex-05)\n - [3.2 Viterbi Forward](#3.2)\n - [Exercise 06](#ex-06)\n - [3.3 Viterbi Backward](#3.3)\n - [Exercise 07](#ex-07)\n- [4 Predicting on a data set](#4)\n - [Exercise 08](#ex-08)",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport math\nfrom collections import defaultdict\n\n%matplotlib inline\n%config InlineBackend.figure_format='png'",
"_____no_output_____"
]
],
[
[
"<a name='0'></a>\n## Part 0: Data Sources\nThis assignment will use two tagged data sets collected from the **Wall Street Journal (WSJ)**. \n\n[Here](http://relearn.be/2015/training-common-sense/sources/software/pattern-2.6-critical-fork/docs/html/mbsp-tags.html) is an example 'tag-set' or Part of Speech designation describing the two or three letter tag and their meaning. \n- One data set (**WSJ-2_21.pos**) will be used for **training**.\n- The other (**WSJ-24.pos**) for **testing**. \n- The tagged training data has been preprocessed to form a vocabulary (**hmm_vocab.txt**). \n- The words in the vocabulary are words from the training set that were used two or more times. \n- The vocabulary is augmented with a set of 'unknown word tokens', described below. \n\nThe training set will be used to create the emission, transmission and tag counts. \n\nThe test set (WSJ-24.pos) is read in to create `y`. \n- This contains both the test text and the true tag. \n- The test set has also been preprocessed to remove the tags to form **test_words.txt**. \n- This is read in and further processed to identify the end of sentences and handle words not in the vocabulary using functions provided in **utils_pos.py**. \n- This forms the list `prep`, the preprocessed text used to test our POS taggers.\n\nA POS tagger will necessarily encounter words that are not in its datasets. \n- To improve accuracy, these words are further analyzed during preprocessing to extract available hints as to their appropriate tag. \n- For example, the suffix 'ize' is a hint that the word is a verb, as in 'final-ize' or 'character-ize'. \n- A set of unknown-tokens, such as '--unk-verb--' or '--unk-noun--' will replace the unknown words in both the training and test corpus and will appear in the emission, transmission and tag data structures.\n\n\n<img src = \"DataSources1.PNG\" />",
"_____no_output_____"
],
[
"Implementation note: \n\n- For python 3.6 and beyond, dictionaries retain the insertion order. \n- Furthermore, their hash-based lookup makes them suitable for rapid membership tests. \n - If _di_ is a dictionary, `key in di` will return `True` if _di_ has a key _key_, else `False`. \n\nThe dictionary `vocab` will utilize these features.",
"_____no_output_____"
]
],
[
[
"# load in the training corpus\nwith open(\"WSJ_02-21.pos\",\"r\") as f:\n training_corpus=f.readlines()\n \nprint(f\"A few items of the training corpus list\")\nprint(training_corpus[0:5])",
"A few items of the training corpus list\n['In\\tIN\\n', 'an\\tDT\\n', 'Oct.\\tNNP\\n', '19\\tCD\\n', 'review\\tNN\\n']\n"
],
[
"# read the vocabulary data, split by each line of text, and save the list\nwith open(\"hmm_vocab.txt\",\"r\") as f:\n voc_l=f.read().split('\\n')\n \nprint(\"A few items of the vocabulary list\")\nprint(voc_l[0:50])\nprint()\nprint(\"A few items at the end of the vocabulary list\")\nprint(voc_l[-50:])",
"A few items of the vocabulary list\n['!', '#', '$', '%', '&', \"'\", \"''\", \"'40s\", \"'60s\", \"'70s\", \"'80s\", \"'86\", \"'90s\", \"'N\", \"'S\", \"'d\", \"'em\", \"'ll\", \"'m\", \"'n'\", \"'re\", \"'s\", \"'til\", \"'ve\", '(', ')', ',', '-', '--', '--n--', '--unk--', '--unk_adj--', '--unk_adv--', '--unk_digit--', '--unk_noun--', '--unk_punct--', '--unk_upper--', '--unk_verb--', '.', '...', '0.01', '0.0108', '0.02', '0.03', '0.05', '0.1', '0.10', '0.12', '0.13', '0.15']\n\nA few items at the end of the vocabulary list\n['yards', 'yardstick', 'year', 'year-ago', 'year-before', 'year-earlier', 'year-end', 'year-on-year', 'year-round', 'year-to-date', 'year-to-year', 'yearlong', 'yearly', 'years', 'yeast', 'yelled', 'yelling', 'yellow', 'yen', 'yes', 'yesterday', 'yet', 'yield', 'yielded', 'yielding', 'yields', 'you', 'young', 'younger', 'youngest', 'youngsters', 'your', 'yourself', 'youth', 'youthful', 'yuppie', 'yuppies', 'zero', 'zero-coupon', 'zeroing', 'zeros', 'zinc', 'zip', 'zombie', 'zone', 'zones', 'zoning', '{', '}', '']\n"
],
[
"# vocab: dictionary that has the index of the corresponding words\nvocab={}\n\n# Get the index of the corresponding words:\nfor i,word in enumerate(sorted(voc_l)):\n vocab[word]=i\n\n \nprint(\"Vocabulary dictionary, key is the word, value is a unique integer\")\ncnt = 0\nfor k,v in vocab.items():\n print(f\"{k}:{v}\")\n cnt += 1\n if cnt > 20:\n break",
"Vocabulary dictionary, key is the word, value is a unique integer\n:0\n!:1\n#:2\n$:3\n%:4\n&:5\n':6\n'':7\n'40s:8\n'60s:9\n'70s:10\n'80s:11\n'86:12\n'90s:13\n'N:14\n'S:15\n'd:16\n'em:17\n'll:18\n'm:19\n'n':20\n"
],
[
"# load in the test corpus\nwith open(\"WSJ_24.pos\", 'r') as f:\n y = f.readlines()\n\nprint(\"A sample of the test corpus\")\nprint(y[0:10])",
"A sample of the test corpus\n['The\\tDT\\n', 'economy\\tNN\\n', \"'s\\tPOS\\n\", 'temperature\\tNN\\n', 'will\\tMD\\n', 'be\\tVB\\n', 'taken\\tVBN\\n', 'from\\tIN\\n', 'several\\tJJ\\n', 'vantage\\tNN\\n']\n"
],
[
"import string\n\n# Punctuation characters\npunct = set(string.punctuation)\n\n# Morphology rules used to assign unknown word tokens\nnoun_suffix = [\"action\", \"age\", \"ance\", \"cy\", \"dom\", \"ee\", \"ence\", \"er\", \"hood\", \"ion\", \"ism\", \"ist\", \"ity\", \"ling\", \"ment\", \"ness\", \"or\", \"ry\", \"scape\", \"ship\", \"ty\"]\nverb_suffix = [\"ate\", \"ify\", \"ise\", \"ize\"]\nadj_suffix = [\"able\", \"ese\", \"ful\", \"i\", \"ian\", \"ible\", \"ic\", \"ish\", \"ive\", \"less\", \"ly\", \"ous\"]\nadv_suffix = [\"ward\", \"wards\", \"wise\"]\n\n\ndef assign_unk(tok):\n \"\"\"\n Assign unknown word tokens\n \"\"\"\n # Digits\n if any(char.isdigit() for char in tok):\n return \"--unk_digit--\"\n\n # Punctuation\n elif any(char in punct for char in tok):\n return \"--unk_punct--\"\n\n # Upper-case\n elif any(char.isupper() for char in tok):\n return \"--unk_upper--\"\n\n # Nouns\n elif any(tok.endswith(suffix) for suffix in noun_suffix):\n return \"--unk_noun--\"\n\n # Verbs\n elif any(tok.endswith(suffix) for suffix in verb_suffix):\n return \"--unk_verb--\"\n\n # Adjectives\n elif any(tok.endswith(suffix) for suffix in adj_suffix):\n return \"--unk_adj--\"\n\n # Adverbs\n elif any(tok.endswith(suffix) for suffix in adv_suffix):\n return \"--unk_adv--\"\n\n return \"--unk--\"\n",
"_____no_output_____"
],
[
"def preprocess(vocab,data_fp):\n \"\"\"\n Preprocess data\n \"\"\"\n orig=[]\n prep=[]\n \n with open(data_fp,\"r\") as data_file:\n \n for cnt,word in enumerate(data_file):\n \n # End of sentence\n if not word.split():\n orig.append(word.strip())\n word='--n--'\n prep.append(word)\n continue\n \n # Handle unk\n elif word.strip() not in vocab:\n orig.append(word.strip())\n word=assign_unk(word)\n prep.append(word)\n continue\n \n else:\n orig.append(word.strip())\n prep.append(word.strip())\n \n assert(len(orig) == len(open(data_fp, \"r\").readlines()))\n assert(len(prep) == len(open(data_fp, \"r\").readlines()))\n\n return orig, prep",
"_____no_output_____"
],
[
"#corpus without tags, preprocessed\n_, prep = preprocess(vocab, \"test.words\") \n\nprint('The length of the preprocessed test corpus: ', len(prep))\nprint('This is a sample of the test_corpus: ')\nprint(prep[0:10])",
"The length of the preprocessed test corpus: 34199\nThis is a sample of the test_corpus: \n['The', 'economy', \"'s\", 'temperature', 'will', 'be', 'taken', 'from', 'several', '--unk--']\n"
]
],
[
[
"<a name='1'></a>\n# Part 1: Parts-of-speech tagging \n\n<a name='1.1'></a>\n## Part 1.1 - Training\nYou will start with the simplest possible parts-of-speech tagger and we will build up to the state of the art. \n\nIn this section, you will find the words that are not ambiguous. \n- For example, the word `is` is a verb and it is not ambiguous. \n- In the `WSJ` corpus, $86$% of the token are unambiguous (meaning they have only one tag) \n- About $14\\%$ are ambiguous (meaning that they have more than one tag)\n\n<img src = \"pos.png\" style=\"width:400px;height:250px;\"/>\n\nBefore you start predicting the tags of each word, you will need to compute a few dictionaries that will help you to generate the tables. ",
"_____no_output_____"
],
[
"#### Transition counts\n- The first dictionary is the `transition_counts` dictionary which computes the number of times each tag happened next to another tag. \n\nThis dictionary will be used to compute: \n$$P(t_i |t_{i-1}) \\tag{1}$$\n\nThis is the probability of a tag at position $i$ given the tag at position $i-1$.\n\nIn order for you to compute equation 1, you will create a `transition_counts` dictionary where \n- The keys are `(prev_tag, tag)`\n- The values are the number of times those two tags appeared in that order. ",
"_____no_output_____"
],
[
"#### Emission counts\n\nThe second dictionary you will compute is the `emission_counts` dictionary. This dictionary will be used to compute:\n\n$$P(w_i|t_i)\\tag{2}$$\n\nIn other words, you will use it to compute the probability of a word given its tag. \n\nIn order for you to compute equation 2, you will create an `emission_counts` dictionary where \n- The keys are `(tag, word)` \n- The values are the number of times that pair showed up in your training set. ",
"_____no_output_____"
],
[
"#### Tag counts\n\nThe last dictionary you will compute is the `tag_counts` dictionary. \n- The key is the tag \n- The value is the number of times each tag appeared.",
"_____no_output_____"
],
[
"<a name='ex-01'></a>\n### Exercise 01\n\n**Instructions:** Write a program that takes in the `training_corpus` and returns the three dictionaries mentioned above `transition_counts`, `emission_counts`, and `tag_counts`. \n- `emission_counts`: maps (tag, word) to the number of times it happened. \n- `transition_counts`: maps (prev_tag, tag) to the number of times it has appeared. \n- `tag_counts`: maps (tag) to the number of times it has occured. \n\nImplementation note: This routine utilises *defaultdict*, which is a subclass of *dict*. \n- A standard Python dictionary throws a *KeyError* if you try to access an item with a key that is not currently in the dictionary. \n- In contrast, the *defaultdict* will create an item of the type of the argument, in this case an integer with the default value of 0. \n- See [defaultdict](https://docs.python.org/3.3/library/collections.html#defaultdict-objects).",
"_____no_output_____"
]
],
[
[
"def get_word_tag(line, vocab): \n if not line.split():\n word = \"--n--\"\n tag = \"--s--\"\n return word, tag\n else:\n word, tag = line.split()\n if word not in vocab: \n # Handle unknown words\n word = assign_unk(word)\n return word, tag\n return None ",
"_____no_output_____"
],
[
"def create_dictionaries(training_corpus, vocab):\n \"\"\"\n Input: \n training_corpus: a corpus where each line has a word followed by its tag.\n vocab: a dictionary where keys are words in vocabulary and value is an index\n Output: \n emission_counts: a dictionary where the keys are (tag, word) and the values are the counts\n transition_counts: a dictionary where the keys are (prev_tag, tag) and the values are the counts\n tag_counts: a dictionary where the keys are the tags and the values are the counts\n \"\"\"\n \n # Initialize the dictionaries using defaultdict\n emission_counts=defaultdict(int)\n transition_counts=defaultdict(int)\n tag_counts=defaultdict(int)\n \n # Initialize 'prev_tag' (previous tag) with the start state\n # denoted by '--s--'\n \n prev_tag='--s--'\n \n i=0\n \n # Each item in the training corpus contains a word and its POS tag\n # Go through each word and its tafg in the training corpus\n for word_tag in training_corpus:\n \n i+=1\n \n if i % 50000 == 0:\n print(f'word count = {i}')\n \n word,tag=get_word_tag(word_tag,vocab)\n \n transition_counts[(prev_tag,tag)]+=1\n \n emission_counts[(tag,word)]+=1\n \n tag_counts[tag]+=1\n \n # Set the previous tag to this tag (for the next iteration of the loop)\n prev_tag = tag\n \n return emission_counts, transition_counts, tag_counts",
"_____no_output_____"
],
[
"emission_counts, transition_counts, tag_counts = create_dictionaries(training_corpus, vocab)",
"word count = 50000\nword count = 100000\nword count = 150000\nword count = 200000\nword count = 250000\nword count = 300000\nword count = 350000\nword count = 400000\nword count = 450000\nword count = 500000\nword count = 550000\nword count = 600000\nword count = 650000\nword count = 700000\nword count = 750000\nword count = 800000\nword count = 850000\nword count = 900000\nword count = 950000\n"
],
[
"# Get all the POS states\n\nstates=sorted(tag_counts.keys())\nprint(f\"Number of POS tags (number of 'states'): {len(states)}\")\nprint(\"View these POS tags (states)\")\nprint(states)",
"Number of POS tags (number of 'states'): 46\nView these POS tags (states)\n['#', '$', \"''\", '(', ')', ',', '--s--', '.', ':', 'CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNP', 'NNPS', 'NNS', 'PDT', 'POS', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'SYM', 'TO', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP', 'WP$', 'WRB', '``']\n"
]
],
[
[
"The 'states' are the Parts-of-speech designations found in the training data. They will also be referred to as 'tags' or POS in this assignment. \n\n- \"NN\" is noun, singular, \n- 'NNS' is noun, plural. \n- In addition, there are helpful tags like '--s--' which indicate a start of a sentence.\n- You can get a more complete description at [Penn Treebank II tag set](https://www.clips.uantwerpen.be/pages/mbsp-tags). ",
"_____no_output_____"
]
],
[
[
"print(\"transition examples: \")\nfor ex in list(transition_counts.items())[:3]:\n print(ex)\nprint()\n\nprint(\"emission examples: \")\nfor ex in list(emission_counts.items())[200:203]:\n print (ex)\nprint()\n\nprint(\"ambiguous word example: \")\nfor tup,cnt in emission_counts.items():\n if tup[1] == 'back': \n print (tup, cnt) ",
"transition examples: \n(('--s--', 'IN'), 5050)\n(('IN', 'DT'), 32364)\n(('DT', 'NNP'), 9044)\n\nemission examples: \n(('DT', 'any'), 721)\n(('NN', 'decrease'), 7)\n(('NN', 'insider-trading'), 5)\n\nambiguous word example: \n('RB', 'back') 304\n('VB', 'back') 20\n('RP', 'back') 84\n('JJ', 'back') 25\n('NN', 'back') 29\n('VBP', 'back') 4\n"
]
],
[
[
"<a name='1.2'></a>\n### Part 1.2 - Testing\n\nNow you will test the accuracy of your parts-of-speech tagger using your `emission_counts` dictionary. \n- Given your preprocessed test corpus `prep`, you will assign a parts-of-speech tag to every word in that corpus. \n- Using the original tagged test corpus `y`, you will then compute what percent of the tags you got correct. ",
"_____no_output_____"
],
[
"<a name='ex-02'></a>\n### Exercise 02\n\n**Instructions:** Implement `predict_pos` that computes the accuracy of your model. \n\n- This is a warm up exercise. \n- To assign a part of speech to a word, assign the most frequent POS for that word in the training set. \n- Then evaluate how well this approach works. Each time you predict based on the most frequent POS for the given word, check whether the actual POS of that word is the same. If so, the prediction was correct!\n- Calculate the accuracy as the number of correct predictions divided by the total number of words for which you predicted the POS tag.",
"_____no_output_____"
]
],
[
[
"def predict_pos(prep,y,emission_counts,vocab,states):\n '''\n Input: \n prep: a preprocessed version of 'y'. A list with the 'word' component of the tuples.\n y: a corpus composed of a list of tuples where each tuple consists of (word, POS)\n emission_counts: a dictionary where the keys are (tag,word) tuples and the value is the count\n vocab: a dictionary where keys are words in vocabulary and value is an index\n states: a sorted list of all possible tags for this assignment\n Output: \n accuracy: Number of times you classified a word correctly\n '''\n \n num_corret=0\n \n # Get the (tag,word) tuples stored as set\n all_words=set(emission_counts.keys())\n \n # Get the number of (word,POS) tuples in th corpus y\n total=len(y)\n \n for word,y_tup in zip(prep,y):\n \n # Split the (word,POS) string into a list of towo items\n y_tup_l=y_tup.split()\n \n # Verify the y_tup contain both word and POS\n if len(y_tup_l)==2:\n \n # Set the true POS label for this word\n true_label=y_tup_l[1]\n \n else:\n # If the y_tup didn't contain word and POS\n continue\n \n count_final=0\n pos_final=''\n \n # If the word is in the vocabulary\n if word in vocab:\n \n for pos in states:\n \n # Define the key as the tuple containing the POS and word\n key=(pos,word)\n \n # Check if the (pos,word) key exists in the emission_conuts dictionary\n if key in emission_counts:\n \n # Get the emission count of the (pos,word) tuple\n count=emission_counts[key]\n \n # Keep track of the POS with the largest count\n if count>count_final:\n \n # update the final count\n count_final=count\n \n # update the final pos\n pos_final=pos\n \n # If the final POS (with the largest count) matches the true POS: \n if pos_final == true_label:\n \n # update the number of correct predictions\n num_corret+=1\n \n accuracy=num_corret/total\n \n return accuracy",
"_____no_output_____"
],
[
"accuracy_predict_pos = predict_pos(prep, y, emission_counts, vocab, states)\nprint(f\"Accuracy of prediction using predict_pos is {accuracy_predict_pos:.4f}\")",
"Accuracy of prediction using predict_pos is 0.8889\n"
]
],
[
[
"<a name='2'></a>\n# Part 2: Hidden Markov Models for POS\n\nNow you will build something more context specific. Concretely, you will be implementing a Hidden Markov Model (HMM) with a Viterbi decoder\n- The HMM is one of the most commonly used algorithms in Natural Language Processing, and is a foundation to many deep learning techniques you will see in this specialization. \n- In addition to parts-of-speech tagging, HMM is used in speech recognition, speech synthesis, etc. \n- By completing this part of the assignment you will get a 95% accuracy on the same dataset you used in Part 1.\n\nThe Markov Model contains a number of states and the probability of transition between those states. \n- In this case, the states are the parts-of-speech. \n- A Markov Model utilizes a transition matrix, `A`. \n- A Hidden Markov Model adds an observation or emission matrix `B` which describes the probability of a visible observation when we are in a particular state. \n- In this case, the emissions are the words in the corpus\n- The state, which is hidden, is the POS tag of that word.",
"_____no_output_____"
],
[
"<a name='2.1'></a>\n## Part 2.1 Generating Matrices\n\n### Creating the 'A' transition probabilities matrix\nNow that you have your `emission_counts`, `transition_counts`, and `tag_counts`, you will start implementing the Hidden Markov Model. \n\nThis will allow you to quickly construct the \n- `A` transition probabilities matrix.\n- and the `B` emission probabilities matrix. \n\nYou will also use some smoothing when computing these matrices. \n\nHere is an example of what the `A` transition matrix would look like (it is simplified to 5 tags for viewing. It is 46x46 in this assignment.):\n\n\n|**A** |...| RBS | RP | SYM | TO | UH|...\n| --- ||---:-------------| ------------ | ------------ | -------- | ---------- |----\n|**RBS** |...|2.217069e-06 |2.217069e-06 |2.217069e-06 |0.008870 |2.217069e-06|...\n|**RP** |...|3.756509e-07 |7.516775e-04 |3.756509e-07 |0.051089 |3.756509e-07|...\n|**SYM** |...|1.722772e-05 |1.722772e-05 |1.722772e-05 |0.000017 |1.722772e-05|...\n|**TO** |...|4.477336e-05 |4.472863e-08 |4.472863e-08 |0.000090 |4.477336e-05|...\n|**UH** |...|1.030439e-05 |1.030439e-05 |1.030439e-05 |0.061837 |3.092348e-02|...\n| ... |...| ... | ... | ... | ... | ... | ...\n\nNote that the matrix above was computed with smoothing. \n\nEach cell gives you the probability to go from one part of speech to another. \n- In other words, there is a 4.47e-8 chance of going from parts-of-speech `TO` to `RP`. \n- The sum of each row has to equal 1, because we assume that the next POS tag must be one of the available columns in the table.\n\nThe smoothing was done as follows: \n\n$$ P(t_i | t_{i-1}) = \\frac{C(t_{i-1}, t_{i}) + \\alpha }{C(t_{i-1}) +\\alpha * N}\\tag{3}$$\n\n- $N$ is the total number of tags\n- $C(t_{i-1}, t_{i})$ is the count of the tuple (previous POS, current POS) in `transition_counts` dictionary.\n- $C(t_{i-1})$ is the count of the previous POS in the `tag_counts` dictionary.\n- $\\alpha$ is a smoothing parameter.",
"_____no_output_____"
],
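[
"To make equation 3 concrete, here is a small illustrative check (not part of the graded assignment; the counts below are made up for the example):\n\n```python\n# Toy smoothing example for P(t_i | t_(i-1)) with invented counts\nalpha = 0.001\nN = 46                 # total number of tags in this assignment\ncount_prev_curr = 8    # C(t_(i-1), t_i): made-up transition count\ncount_prev = 200       # C(t_(i-1)): made-up count of the previous tag\n\np = (count_prev_curr + alpha) / (count_prev + alpha * N)\nprint(p)  # ~0.04 -- every (prev, curr) pair gets a small nonzero probability, even with a count of 0\n```",
"_____no_output_____"
],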
[
"<a name='ex-03'></a>\n### Exercise 03\n\n**Instructions:** Implement the `create_transition_matrix` below for all tags. Your task is to output a matrix that computes equation 3 for each cell in matrix `A`. ",
"_____no_output_____"
]
],
[
[
"def create_transition_matrix(alpha, tag_counts, transition_counts):\n ''' \n Input: \n alpha: number used for smoothing\n tag_counts: a dictionary mapping each tag to its respective count\n transition_counts: transition count for the previous word and tag\n Output:\n A: matrix of dimension (num_tags,num_tags)\n '''\n \n # Get a sorted list of unique POS tags\n all_tags = sorted(tag_counts.keys())\n \n # Get the number of unique POS tags\n num_tags = len(all_tags)\n \n # Initialize the transition matrix 'A'\n A = np.zeros((num_tags,num_tags))\n \n # Get the unique transition tuples (previous POS,current POS)\n trains_keys=set(transition_counts.keys())\n \n for i in range(num_tags):\n \n for j in range(num_tags):\n \n count=0\n\n # Get the tag as position i and tag at position j (from the all_tags list)\n key = (all_tags[i],all_tags[j])\n\n # exists in the transition counts dictionary\n if key in transition_counts:\n\n # for the (prev POS , current POS) tuple\n count = transition_counts[key]\n\n # Get the count of the previous tag (index postion i) from tag_counts\n count_prev_tag = tag_counts[all_tags[i]]\n\n # Apply smoothing using count of the tuple , alpha\n # count of previous tag, alpha, and the number of total tags\n A[i,j] = (count + alpha) / (count_prev_tag +alpha*num_tags)\n \n return A",
"_____no_output_____"
],
[
"alpha = 0.001\nfor i in range(46):\n tag_counts.pop(i,None)\n\nA = create_transition_matrix(alpha, tag_counts, transition_counts)\n# Testing your function\nprint(f\"A at row 0, col 0: {A[0,0]:.9f}\")\nprint(f\"A at row 3, col 1: {A[3,1]:.4f}\")\n\n#print(\"View a subset of transition matrix A\")\nA_sub = pd.DataFrame(A[30:35,30:35], index=states[30:35], columns = states[30:35] )\nprint(A_sub)",
"A at row 0, col 0: 0.000007040\nA at row 3, col 1: 0.1691\n RBS RP SYM TO UH\nRBS 2.217069e-06 2.217069e-06 2.217069e-06 0.008870 2.217069e-06\nRP 3.756509e-07 7.516775e-04 3.756509e-07 0.051089 3.756509e-07\nSYM 1.722772e-05 1.722772e-05 1.722772e-05 0.000017 1.722772e-05\nTO 4.477336e-05 4.472863e-08 4.472863e-08 0.000090 4.477336e-05\nUH 1.030439e-05 1.030439e-05 1.030439e-05 0.061837 3.092348e-02\n"
]
],
[
[
"### Create the 'B' emission probabilities matrix\n\nNow you will create the `B` transition matrix which computes the emission probability. \n\nYou will use smoothing as defined below: \n\n$$P(w_i | t_i) = \\frac{C(t_i, word_i)+ \\alpha}{C(t_{i}) +\\alpha * N}\\tag{4}$$\n\n- $C(t_i, word_i)$ is the number of times $word_i$ was associated with $tag_i$ in the training data (stored in `emission_counts` dictionary).\n- $C(t_i)$ is the number of times $tag_i$ was in the training data (stored in `tag_counts` dictionary).\n- $N$ is the number of words in the vocabulary\n- $\\alpha$ is a smoothing parameter. \n\nThe matrix `B` is of dimension (num_tags, N), where num_tags is the number of possible parts-of-speech tags. \n\nHere is an example of the matrix, only a subset of tags and words are shown: \n<p style='text-align: center;'> <b>B Emissions Probability Matrix (subset)</b> </p>\n\n|**B**| ...| 725 | adroitly | engineers | promoted | synergy| ...|\n|----|----|--------------|--------------|--------------|--------------|-------------|----|\n|**CD** | ...| **8.201296e-05** | 2.732854e-08 | 2.732854e-08 | 2.732854e-08 | 2.732854e-08| ...|\n|**NN** | ...| 7.521128e-09 | 7.521128e-09 | 7.521128e-09 | 7.521128e-09 | **2.257091e-05**| ...|\n|**NNS** | ...| 1.670013e-08 | 1.670013e-08 |**4.676203e-04** | 1.670013e-08 | 1.670013e-08| ...|\n|**VB** | ...| 3.779036e-08 | 3.779036e-08 | 3.779036e-08 | 3.779036e-08 | 3.779036e-08| ...|\n|**RB** | ...| 3.226454e-08 | **6.456135e-05** | 3.226454e-08 | 3.226454e-08 | 3.226454e-08| ...|\n|**RP** | ...| 3.723317e-07 | 3.723317e-07 | 3.723317e-07 | **3.723317e-07** | 3.723317e-07| ...|\n| ... | ...| ... | ... | ... | ... | ... | ...|\n\n",
"_____no_output_____"
],
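[
"Equation 4 works the same way for emissions; a quick sanity check with invented counts (purely illustrative, not assignment data):\n\n```python\n# Toy smoothing example for P(w_i | t_i) with invented counts\nalpha = 0.001\nN = 20000              # placeholder vocabulary size, just for the arithmetic\ncount_tag_word = 12    # C(t_i, word_i): made-up emission count\ncount_tag = 1500       # C(t_i): made-up tag count\n\np = (count_tag_word + alpha) / (count_tag + alpha * N)\nprint(p)  # a word never seen with this tag would still get alpha / (count_tag + alpha * N)\n```",
"_____no_output_____"
],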
[
"<a name='ex-04'></a>\n### Exercise 04\n**Instructions:** Implement the `create_emission_matrix` below that computes the `B` emission probabilities matrix. Your function takes in $\\alpha$, the smoothing parameter, `tag_counts`, which is a dictionary mapping each tag to its respective count, the `emission_counts` dictionary where the keys are (tag, word) and the values are the counts. Your task is to output a matrix that computes equation 4 for each cell in matrix `B`. ",
"_____no_output_____"
]
],
[
[
"def create_emission_matrix(alpha, tag_counts, emission_counts, vocab):\n '''\n Input: \n alpha: tuning parameter used in smoothing \n tag_counts: a dictionary mapping each tag to its respective count\n emission_counts: a dictionary where the keys are (tag, word) and the values are the counts\n vocab: a dictionary where keys are words in vocabulary and value is an index\n Output:\n B: a matrix of dimension (num_tags, len(vocab))\n '''\n \n # Get the number of POS tag\n num_tags=len(tag_counts)\n \n # Get a list of all POS tags\n all_tags=sorted(tag_counts.keys())\n \n # Get the total number of unique words in the vocabulary\n num_words=len(vocab)\n \n # Initialize the emission matrix B with places for \n # tags in the rows and words in the columns\n B = np.zeros((num_tags,num_words))\n \n # Get a set of all (POS,word) tuples\n # from the keys of the emisson_counts dictionary\n emis_keys=set(list(emission_counts.keys()))\n \n for i in range(num_tags):\n \n for j in range(num_words):\n \n count=0\n \n # Define the (POS tag,word) tuple for this row and column\n key = (all_tags[i],vocab[j])\n \n # Check if the (POS tag,word) tuple exists as a key in emission counts\n if key in emission_counts.keys():\n \n # Get the count of (POS tag,word) from the emission_counts\n count = emission_counts[key]\n \n # Get the count of the POS tag\n count_tag = tag_counts[all_tags[i]]\n \n # Apply smoothing and store the smoothed value \n # into the emission matrix B for this row and column\n B[i,j]=(count + alpha)/(count_tag+alpha*num_words)\n \n return B",
"_____no_output_____"
],
[
"# creating your emission probability matrix. this takes a few minutes to run. \nfor i in range(46):\n tag_counts.pop(i,None)\n\nB = create_emission_matrix(alpha, tag_counts, emission_counts, list(vocab))\n\nprint(f\"View Matrix position at row 0, column 0: {B[0,0]:.9f}\")\nprint(f\"View Matrix position at row 3, column 1: {B[3,1]:.9f}\")\n\n# Try viewing emissions for a few words in a sample dataframe\ncidx = ['725','adroitly','engineers', 'promoted', 'synergy']\n\n# Get the integer ID for each word\ncols = [vocab[a] for a in cidx]\n\n# Choose POS tags to show in a sample dataframe\nrvals =['CD','NN','NNS', 'VB','RB','RP']\n\n# For each POS tag, get the row number from the 'states' list\nrows = [states.index(a) for a in rvals]\n\n# Get the emissions for the sample of words, and the sample of POS tags\nB_sub = pd.DataFrame(B[np.ix_(rows,cols)], index=rvals, columns = cidx )\nprint(B_sub)\n",
"View Matrix position at row 0, column 0: 0.000006032\nView Matrix position at row 3, column 1: 0.000000720\n 725 adroitly engineers promoted synergy\nCD 8.201296e-05 2.732854e-08 2.732854e-08 2.732854e-08 2.732854e-08\nNN 7.521128e-09 7.521128e-09 7.521128e-09 7.521128e-09 2.257091e-05\nNNS 1.670013e-08 1.670013e-08 4.676203e-04 1.670013e-08 1.670013e-08\nVB 3.779036e-08 3.779036e-08 3.779036e-08 3.779036e-08 3.779036e-08\nRB 3.226454e-08 6.456135e-05 3.226454e-08 3.226454e-08 3.226454e-08\nRP 3.723317e-07 3.723317e-07 3.723317e-07 3.723317e-07 3.723317e-07\n"
]
],
[
[
"<a name='3'></a>\n# Part 3: Viterbi Algorithm and Dynamic Programming\n\nIn this part of the assignment you will implement the Viterbi algorithm which makes use of dynamic programming. Specifically, you will use your two matrices, `A` and `B` to compute the Viterbi algorithm. We have decomposed this process into three main steps for you. \n\n* **Initialization** - In this part you initialize the `best_paths` and `best_probabilities` matrices that you will be populating in `feed_forward`.\n* **Feed forward** - At each step, you calculate the probability of each path happening and the best paths up to that point. \n* **Feed backward**: This allows you to find the best path with the highest probabilities. \n\n<a name='3.1'></a>\n## Part 3.1: Initialization \n\nYou will start by initializing two matrices of the same dimension. \n\n- best_probs: Each cell contains the probability of going from one POS tag to a word in the corpus.\n\n- best_paths: A matrix that helps you trace through the best possible path in the corpus. ",
"_____no_output_____"
],
[
"<a name='ex-05'></a>\n### Exercise 05\n**Instructions**: \nWrite a program below that initializes the `best_probs` and the `best_paths` matrix. \n\nBoth matrices will be initialized to zero except for column zero of `best_probs`. \n- Column zero of `best_probs` is initialized with the assumption that the first word of the corpus was preceded by a start token (\"--s--\"). \n- This allows you to reference the **A** matrix for the transition probability\n\nHere is how to initialize column 0 of `best_probs`:\n- The probability of the best path going from the start index to a given POS tag indexed by integer $i$ is denoted by $\\textrm{best_probs}[s_{idx}, i]$.\n- This is estimated as the probability that the start tag transitions to the POS denoted by index $i$: $\\mathbf{A}[s_{idx}, i]$ AND that the POS tag denoted by $i$ emits the first word of the given corpus, which is $\\mathbf{B}[i, vocab[corpus[0]]]$.\n- Note that vocab[corpus[0]] refers to the first word of the corpus (the word at position 0 of the corpus). \n- **vocab** is a dictionary that returns the unique integer that refers to that particular word.\n\nConceptually, it looks like this:\n$\\textrm{best_probs}[s_{idx}, i] = \\mathbf{A}[s_{idx}, i] \\times \\mathbf{B}[i, corpus[0] ]$\n\n\nIn order to avoid multiplying and storing small values on the computer, we'll take the log of the product, which becomes the sum of two logs:\n\n$best\\_probs[i,0] = log(A[s_{idx}, i]) + log(B[i, vocab[corpus[0]]$\n\nAlso, to avoid taking the log of 0 (which is defined as negative infinity), the code itself will just set $best\\_probs[i,0] = float('-inf')$ when $A[s_{idx}, i] == 0$\n\n\nSo the implementation to initialize $best\\_probs$ looks like this:\n\n$ if A[s_{idx}, i] <> 0 : best\\_probs[i,0] = log(A[s_{idx}, i]) + log(B[i, vocab[corpus[0]]$\n\n$ if A[s_{idx}, i] == 0 : best\\_probs[i,0] = float('-inf')$\n\nPlease use [math.log](https://docs.python.org/3/library/math.html) to compute the natural logarithm.",
"_____no_output_____"
],
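[
"A quick numeric illustration of the log-space initialization (the two probabilities below are invented just to show the arithmetic):\n\n```python\nimport math\n\nA_start_i = 0.05     # hypothetical A[s_idx, i]\nB_i_word0 = 0.001    # hypothetical B[i, vocab[corpus[0]]]\n\nbest_prob_i_0 = math.log(A_start_i) + math.log(B_i_word0)\nprint(best_prob_i_0)   # roughly -3.0 + -6.9, i.e. about -9.9\n\n# and when A[s_idx, i] == 0, the cell is set to float('-inf') instead of calling math.log(0)\n```",
"_____no_output_____"
],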
[
"The example below shows the initialization assuming the corpus starts with the phrase \"Loss tracks upward\".\n\n<img src = \"Initialize4.PNG\"/>",
"_____no_output_____"
],
[
"Represent infinity and negative infinity like this:\n\n```CPP\nfloat('inf')\nfloat('-inf')\n```",
"_____no_output_____"
]
],
[
[
"def initialize(states, tag_counts, A, B, corpus, vocab):\n '''\n Input: \n states: a list of all possible parts-of-speech\n tag_counts: a dictionary mapping each tag to its respective count\n A: Transition Matrix of dimension (num_tags, num_tags)\n B: Emission Matrix of dimension (num_tags, len(vocab))\n corpus: a sequence of words whose POS is to be identified in a list \n vocab: a dictionary where keys are words in vocabulary and value is an index\n Output:\n best_probs: matrix of dimension (num_tags, len(corpus)) of floats\n best_paths: matrix of dimension (num_tags, len(corpus)) of integers\n '''\n # Get the total number of unique POS tags\n num_tags=len(tag_counts)\n \n # Initialize best_probs matrix\n # POS tag in the rows, number of words in the corpus as the columns\n best_probs=np.zeros((num_tags,len(corpus)))\n \n # Initialize best_path matrix\n # POS tag in the rows, number of words in the corpus as columns\n best_paths=np.zeros((num_tags,len(corpus)),dtype=int)\n \n # Define the start token\n s_idx=states.index(\"--s--\")\n \n for i in range(num_tags):\n \n # Handle the special case when the transition from start token to POS tag i is zero\n if A[s_idx,i] == 0:\n \n # Initialize best_probs as POS tag 'i',column 0,to negative infinity\n best_probs[i,0]=float('-inf')\n \n # For all other cases when transition from start token to POS tag i is non-zero:\n else:\n # Initialize best_probs at POS tag 'i', column 0\n # Check the formula in the instructions above\n best_probs[i,0] = math.log(A[s_idx,i]) + math.log(B[i,vocab[corpus[0]]] )\n \n return best_probs, best_paths",
"_____no_output_____"
],
[
"best_probs, best_paths = initialize(states, tag_counts, A, B, prep, vocab)",
"_____no_output_____"
],
[
"# Test the function\nprint(f\"best_probs[0,0]: {best_probs[0,0]:.4f}\") \nprint(f\"best_paths[2,3]: {best_paths[2,3]:.4f}\")",
"best_probs[0,0]: -22.6098\nbest_paths[2,3]: 0.0000\n"
]
],
[
[
"<a name='3.2'></a>\n## Part 3.2 Viterbi Forward\n\nIn this part of the assignment, you will implement the `viterbi_forward` segment. In other words, you will populate your `best_probs` and `best_paths` matrices.\n- Walk forward through the corpus.\n- For each word, compute a probability for each possible tag. \n- Unlike the previous algorithm `predict_pos` (the 'warm-up' exercise), this will include the path up to that (word,tag) combination. \n\nHere is an example with a three-word corpus \"Loss tracks upward\":\n- Note, in this example, only a subset of states (POS tags) are shown in the diagram below, for easier reading. \n- In the diagram below, the first word \"Loss\" is already initialized. \n- The algorithm will compute a probability for each of the potential tags in the second and future words. \n\nCompute the probability that the tag of the second work ('tracks') is a verb, 3rd person singular present (VBZ). \n- In the `best_probs` matrix, go to the column of the second word ('tracks'), and row 40 (VBZ), this cell is highlighted in light orange in the diagram below.\n- Examine each of the paths from the tags of the first word ('Loss') and choose the most likely path. \n- An example of the calculation for **one** of those paths is the path from ('Loss', NN) to ('tracks', VBZ).\n- The log of the probability of the path up to and including the first word 'Loss' having POS tag NN is $-14.32$. The `best_probs` matrix contains this value -14.32 in the column for 'Loss' and row for 'NN'.\n- Find the probability that NN transitions to VBZ. To find this probability, go to the `A` transition matrix, and go to the row for 'NN' and the column for 'VBZ'. The value is $4.37e-02$, which is circled in the diagram, so add $-14.32 + log(4.37e-02)$. \n- Find the log of the probability that the tag VBS would 'emit' the word 'tracks'. To find this, look at the 'B' emission matrix in row 'VBZ' and the column for the word 'tracks'. The value $4.61e-04$ is circled in the diagram below. So add $-14.32 + log(4.37e-02) + log(4.61e-04)$.\n- The sum of $-14.32 + log(4.37e-02) + log(4.61e-04)$ is $-25.13$. Store $-25.13$ in the `best_probs` matrix at row 'VBZ' and column 'tracks' (as seen in the cell that is highlighted in light orange in the diagram).\n- All other paths in best_probs are calculated. Notice that $-25.13$ is greater than all of the other values in column 'tracks' of matrix `best_probs`, and so the most likely path to 'VBZ' is from 'NN'. 'NN' is in row 20 of the `best_probs` matrix, so $20$ is the most likely path.\n- Store the most likely path $20$ in the `best_paths` table. This is highlighted in light orange in the diagram below.",
"_____no_output_____"
],
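[
"The $-25.13$ in the walkthrough can be reproduced directly from the numbers quoted above:\n\n```python\nimport math\n\n# values quoted in the example: best_probs['NN', 'Loss'], A['NN', 'VBZ'], B['VBZ', 'tracks']\nprev_best = -14.32\ntrans = 4.37e-02\nemit = 4.61e-04\n\nprint(prev_best + math.log(trans) + math.log(emit))  # approximately -25.13\n```",
"_____no_output_____"
],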
[
"The formula to compute the probability and path for the $i^{th}$ word in the $corpus$, the prior word $i-1$ in the corpus, current POS tag $j$, and previous POS tag $k$ is:\n\n$\\mathrm{prob} = \\mathbf{best\\_prob}_{k, i-1} + \\mathrm{log}(\\mathbf{A}_{k, j}) + \\mathrm{log}(\\mathbf{B}_{j, vocab(corpus_{i})})$\n\nwhere $corpus_{i}$ is the word in the corpus at index $i$, and $vocab$ is the dictionary that gets the unique integer that represents a given word.\n\n$\\mathrm{path} = k$\n\nwhere $k$ is the integer representing the previous POS tag.\n",
"_____no_output_____"
],
[
"<a name='ex-06'></a>\n\n### Exercise 06\n\nInstructions: Implement the `viterbi_forward` algorithm and store the best_path and best_prob for every possible tag for each word in the matrices `best_probs` and `best_tags` using the pseudo code below.\n\n`for each word in the corpus\n\n for each POS tag type that this word may be\n \n for POS tag type that the previous word could be\n \n compute the probability that the previous word had a given POS tag, that the current word has a given POS tag, and that the POS tag would emit this current word.\n \n retain the highest probability computed for the current word\n \n set best_probs to this highest probability\n \n set best_paths to the index 'k', representing the POS tag of the previous word which produced the highest probability `\n\nPlease use [math.log](https://docs.python.org/3/library/math.html) to compute the natural logarithm.",
"_____no_output_____"
],
[
"<img src = \"Forward4.PNG\"/>",
"_____no_output_____"
]
],
[
[
"def viterbi_forward(A, B, test_corpus, best_probs, best_paths, vocab):\n '''\n Input: \n A, B: The transiton and emission matrices respectively\n test_corpus: a list containing a preprocessed corpus\n best_probs: an initilized matrix of dimension (num_tags, len(corpus))\n best_paths: an initilized matrix of dimension (num_tags, len(corpus))\n vocab: a dictionary where keys are words in vocabulary and value is an index \n Output: \n best_probs: a completed matrix of dimension (num_tags, len(corpus))\n best_paths: a completed matrix of dimension (num_tags, len(corpus))\n '''\n # Get the number of unique POS tags (which is the num of rows in best_probs)\n num_tags = best_probs.shape[0]\n \n # Go through every word in the corpus starting from word 1\n # Recall that word 0 was initialized in `initialize()`\n for i in range(1, len(test_corpus)): \n \n # Print number of words processed, every 5000 words\n if i % 5000 == 0:\n print(\"Words processed: {:>8}\".format(i))\n \n ### START CODE HERE (Replace instances of 'None' with your code EXCEPT the first 'best_path_i = None') ###\n # For each unique POS tag that the current word can be\n for j in range(num_tags): # complete this line\n \n # Initialize best_prob for word i to negative infinity\n best_prob_i = float(\"-inf\")\n \n # Initialize best_path for current word i to None\n best_path_i = None\n\n # For each POS tag that the previous word can be:\n for k in range(num_tags): # complete this line\n \n # Calculate the probability = \n # best probs of POS tag k, previous word i-1 + \n # log(prob of transition from POS k to POS j) + \n # log(prob that emission of POS j is word i)\n prob = best_probs[k,i-1]+math.log(A[k,j]) +math.log(B[j,vocab[test_corpus[i]]])\n # check if this path's probability is greater than\n # the best probability up to and before this point\n if prob > best_prob_i: # complete this line\n \n # Keep track of the best probability\n best_prob_i = prob\n \n # keep track of the POS tag of the previous word\n # that is part of the best path. \n # Save the index (integer) associated with \n # that previous word's POS tag\n best_path_i = k\n\n # Save the best probability for the \n # given current word's POS tag\n # and the position of the current word inside the corpus\n best_probs[j,i] = best_prob_i\n \n # Save the unique integer ID of the previous POS tag\n # into best_paths matrix, for the POS tag of the current word\n # and the position of the current word inside the corpus.\n best_paths[j,i] = best_path_i\n\n ### END CODE HERE ###\n return best_probs, best_paths",
"_____no_output_____"
]
],
[
[
"Run the `viterbi_forward` function to fill in the `best_probs` and `best_paths` matrices.\n\n**Note** that this will take a few minutes to run. There are about 30,000 words to process.",
"_____no_output_____"
]
],
[
[
"# this will take a few minutes to run => processes ~ 30,000 words\nbest_probs, best_paths = viterbi_forward(A, B, prep, best_probs, best_paths, vocab)",
"Words processed: 5000\nWords processed: 10000\nWords processed: 15000\nWords processed: 20000\nWords processed: 25000\nWords processed: 30000\n"
],
[
"# Test this function \nprint(f\"best_probs[0,1]: {best_probs[0,1]:.4f}\") \nprint(f\"best_probs[0,4]: {best_probs[0,4]:.4f}\") ",
"best_probs[0,1]: -24.7822\nbest_probs[0,4]: -49.5601\n"
]
],
[
[
"<a name='3.3'></a>\n## Part 3.3 Viterbi backward\n\nNow you will implement the Viterbi backward algorithm.\n- The Viterbi backward algorithm gets the predictions of the POS tags for each word in the corpus using the `best_paths` and the `best_probs` matrices.\n\nThe example below shows how to walk backwards through the best_paths matrix to get the POS tags of each word in the corpus. Recall that this example corpus has three words: \"Loss tracks upward\".\n\nPOS tag for 'upward' is `RB`\n- Select the the most likely POS tag for the last word in the corpus, 'upward' in the `best_prob` table.\n- Look for the row in the column for 'upward' that has the largest probability.\n- Notice that in row 28 of `best_probs`, the estimated probability is -34.99, which is larger than the other values in the column. So the most likely POS tag for 'upward' is `RB` an adverb, at row 28 of `best_prob`. \n- The variable `z` is an array that stores the unique integer ID of the predicted POS tags for each word in the corpus. In array z, at position 2, store the value 28 to indicate that the word 'upward' (at index 2 in the corpus), most likely has the POS tag associated with unique ID 28 (which is `RB`).\n- The variable `pred` contains the POS tags in string form. So `pred` at index 2 stores the string `RB`.\n\n\nPOS tag for 'tracks' is `VBZ`\n- The next step is to go backward one word in the corpus ('tracks'). Since the most likely POS tag for 'upward' is `RB`, which is uniquely identified by integer ID 28, go to the `best_paths` matrix in column 2, row 28. The value stored in `best_paths`, column 2, row 28 indicates the unique ID of the POS tag of the previous word. In this case, the value stored here is 40, which is the unique ID for POS tag `VBZ` (verb, 3rd person singular present).\n- So the previous word at index 1 of the corpus ('tracks'), most likely has the POS tag with unique ID 40, which is `VBZ`.\n- In array `z`, store the value 40 at position 1, and for array `pred`, store the string `VBZ` to indicate that the word 'tracks' most likely has POS tag `VBZ`.\n\nPOS tag for 'Loss' is `NN`\n- In `best_paths` at column 1, the unique ID stored at row 40 is 20. 20 is the unique ID for POS tag `NN`.\n- In array `z` at position 0, store 20. In array `pred` at position 0, store `NN`.",
"_____no_output_____"
],
[
"<img src = \"Backwards5.PNG\"/>",
"_____no_output_____"
],
[
"<a name='ex-07'></a>\n### Exercise 07\nImplement the `viterbi_backward` algorithm, which returns a list of predicted POS tags for each word in the corpus.\n\n- Note that the numbering of the index positions starts at 0 and not 1. \n- `m` is the number of words in the corpus. \n - So the indexing into the corpus goes from `0` to `m - 1`.\n - Also, the columns in `best_probs` and `best_paths` are indexed from `0` to `m - 1`\n\n\n**In Step 1:** \nLoop through all the rows (POS tags) in the last entry of `best_probs` and find the row (POS tag) with the maximum value.\nConvert the unique integer ID to a tag (a string representation) using the dictionary `states`. \n\nReferring to the three-word corpus described above:\n- `z[2] = 28`: For the word 'upward' at position 2 in the corpus, the POS tag ID is 28. Store 28 in `z` at position 2.\n- states(28) is 'RB': The POS tag ID 28 refers to the POS tag 'RB'.\n- `pred[2] = 'RB'`: In array `pred`, store the POS tag for the word 'upward'.\n\n**In Step 2:** \n- Starting at the last column of best_paths, use `best_probs` to find the most likely POS tag for the last word in the corpus.\n- Then use `best_paths` to find the most likely POS tag for the previous word. \n- Update the POS tag for each word in `z` and in `preds`.\n\nReferring to the three-word example from above, read best_paths at column 2 and fill in z at position 1. \n`z[1] = best_paths[z[2],2]` \n\nThe small test following the routine prints the last few words of the corpus and their states to aid in debug.",
"_____no_output_____"
]
],
[
[
"def viterbi_backward(best_probs, best_paths, corpus, states):\n '''\n This function returns the best path.\n \n '''\n # Get the number of words in the corpus\n # which is also the number of columns in best_probs, best_paths\n m = best_paths.shape[1] \n \n # Initialize array z, same length as the corpus\n z = [None] * m\n \n # Get the number of unique POS tags\n num_tags = best_probs.shape[0]\n \n # Initialize the best probability for the last word\n best_prob_for_last_word = float('-inf')\n \n # Initialize pred array, same length as corpus\n pred = [None] * m\n \n ### START CODE HERE (Replace instances of 'None' with your code) ###\n ## Step 1 ##\n \n # Go through each POS tag for the last word (last column of best_probs)\n # in order to find the row (POS tag integer ID) \n # with highest probability for the last word\n for k in range(num_tags): # complete this line\n\n # If the probability of POS tag at row k \n # is better than the previosly best probability for the last word:\n if best_probs[k,-1]>best_prob_for_last_word: # complete this line\n \n # Store the new best probability for the last word\n best_prob_for_last_word = best_probs[k,-1]\n \n # Store the unique integer ID of the POS tag\n # which is also the row number in best_probs\n z[m - 1]=k\n \n # Convert the last word's predicted POS tag\n # from its unique integer ID into the string representation\n # using the 'states' dictionary\n # store this in the 'pred' array for the last word\n pred[m - 1] = states[k]\n \n ## Step 2 ##\n # Find the best POS tags by walking backward through the best_paths\n # From the last word in the corpus to the 0th word in the corpus\n for i in range(len(corpus)-1, -1, -1): # complete this line\n \n # Retrieve the unique integer ID of\n # the POS tag for the word at position 'i' in the corpus\n pos_tag_for_word_i = best_paths[np.argmax(best_probs[:,i]),i]\n \n # In best_paths, go to the row representing the POS tag of word i\n # and the column representing the word's position in the corpus\n # to retrieve the predicted POS for the word at position i-1 in the corpus\n z[i - 1] = best_paths[pos_tag_for_word_i,i]\n \n # Get the previous word's POS tag in string form\n # Use the 'states' dictionary, \n # where the key is the unique integer ID of the POS tag,\n # and the value is the string representation of that POS tag\n pred[i - 1] = states[pos_tag_for_word_i]\n \n ### END CODE HERE ###\n return pred",
"_____no_output_____"
],
[
"# Run and test your function\npred = viterbi_backward(best_probs, best_paths, prep, states)\nm=len(pred)\nprint('The prediction for pred[-7:m-1] is: \\n', prep[-7:m-1], \"\\n\", pred[-7:m-1], \"\\n\")\nprint('The prediction for pred[0:8] is: \\n', pred[0:7], \"\\n\", prep[0:7])",
"The prediction for pred[-7:m-1] is: \n ['see', 'them', 'here', 'with', 'us', '.'] \n ['VB', 'PRP', 'RB', 'IN', 'PRP', '.'] \n\nThe prediction for pred[0:8] is: \n ['DT', 'NN', 'POS', 'NN', 'MD', 'VB', 'VBN'] \n ['The', 'economy', \"'s\", 'temperature', 'will', 'be', 'taken']\n"
]
],
[
[
"<a name='4'></a>\n# Part 4: Predicting on a data set\n\nCompute the accuracy of your prediction by comparing it with the true `y` labels. \n- `pred` is a list of predicted POS tags corresponding to the words of the `test_corpus`. ",
"_____no_output_____"
]
],
[
[
"print('The third word is:', prep[3])\nprint('Your prediction is:', pred[3])\nprint('Your corresponding label y is: ', y[3])",
"The third word is: temperature\nYour prediction is: NN\nYour corresponding label y is: temperature\tNN\n\n"
]
],
[
[
"<a name='ex-08'></a>\n### Exercise 08\n\nImplement a function to compute the accuracy of the viterbi algorithm's POS tag predictions.\n- To split y into the word and its tag you can use `y.split()`. ",
"_____no_output_____"
]
],
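[
[
"As a reminder of the label format, splitting one line of `y` separates the word from its tag (the example line mirrors the 'temperature\\tNN' output shown above):\n\n```python\nline = 'temperature\\tNN\\n'\nprint(line.split())   # ['temperature', 'NN'] -> word, tag\n```",
"_____no_output_____"
]
],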
[
[
"def compute_accuracy(pred, y):\n '''\n Input: \n pred: a list of the predicted parts-of-speech \n y: a list of lines where each word is separated by a '\\t' (i.e. word \\t tag)\n Output: \n \n '''\n num_correct = 0\n total = 0\n \n # Zip together the prediction and the labels\n for prediction, y in zip(pred, y):\n ### START CODE HERE (Replace instances of 'None' with your code) ###\n # Split the label into the word and the POS tag\n word_tag_tuple = y.split()\n \n # Check that there is actually a word and a tag\n # no more and no less than 2 items\n if len(word_tag_tuple)!=2: # complete this line\n continue \n # store the word and tag separately\n word, tag = word_tag_tuple\n # Check if the POS tag label matches the prediction\n if prediction == tag: # complete this line\n # count the number of times that the prediction\n # and label match\n num_correct += 1\n \n # keep track of the total number of examples (that have valid labels)\n total += 1\n \n ### END CODE HERE ###\n return (num_correct/total)",
"_____no_output_____"
],
[
"print(f\"Accuracy of the Viterbi algorithm is {compute_accuracy(pred, y):.4f}\")",
"Accuracy of the Viterbi algorithm is 0.9528\n"
]
],
[
[
"### Key Points and overview\n\nIn this assignment you learned about parts-of-speech tagging. \n- In this assignment, you predicted POS tags by walking forward through a corpus and knowing the previous word.\n- There are other implementations that use bidirectional POS tagging.\n- Bidirectional POS tagging requires knowing the previous word and the next word in the corpus when predicting the current word's POS tag.\n- Bidirectional POS tagging would tell you more about the POS instead of just knowing the previous word. \n- Since you have learned to implement the unidirectional approach, you have the foundation to implement other POS taggers used in industry.",
"_____no_output_____"
],
[
"### References\n\n- [\"Speech and Language Processing\", Dan Jurafsky and James H. Martin](https://web.stanford.edu/~jurafsky/slp3/)\n- We would like to thank Melanie Tosik for her help and inspiration",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ecf8d26daa9182d52bbb02a617f51e13f84d46bd | 126,785 | ipynb | Jupyter Notebook | scratchpad/voids_paper/notebooks/scratch/check_training_pairs.ipynb | arshadzahangirchowdhury/TomoEncoders | 9c2b15fd515d864079f198546821faee5d78df17 | [
"BSD-3-Clause"
] | null | null | null | scratchpad/voids_paper/notebooks/scratch/check_training_pairs.ipynb | arshadzahangirchowdhury/TomoEncoders | 9c2b15fd515d864079f198546821faee5d78df17 | [
"BSD-3-Clause"
] | null | null | null | scratchpad/voids_paper/notebooks/scratch/check_training_pairs.ipynb | arshadzahangirchowdhury/TomoEncoders | 9c2b15fd515d864079f198546821faee5d78df17 | [
"BSD-3-Clause"
] | null | null | null | 556.074561 | 48,624 | 0.94706 | [
[
[
"## Segment a sparse 3D image with a single material component \n\nThe goal of this notebook is to develop a 3D segmentation algorithm that improves segmentation where features are detected.\n\n**Data:** AM parts from Xuan Zhang. ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport os\nimport h5py\nimport sys\nimport time\nimport seaborn as sns\nimport pandas as pd\n\nfrom tomo_encoders import Patches\nfrom tomo_encoders.misc import viewer\nfrom tomo_encoders import DataFile\nimport cupy as cp\nfrom tomo_encoders.reconstruction.project import get_projections\nfrom tomo_encoders.reconstruction.recon import recon_binning\nfrom tomo_encoders.misc.voxel_processing import cylindrical_mask, normalize_volume_gpu",
"_____no_output_____"
],
[
"ds1 = DataFile('/data02/MyArchive/aisteer_3Dencoders/tmp_data/train_y', tiff = True)\nplt.imshow(ds1.read_slice(axis = 0, slice_idx = 384), cmap = 'gray')",
"\n##################################################\nFound existing tiff folder: train_y\nDataset shape: (768, 5120, 5120)\n"
],
[
"ds1 = DataFile('/data02/MyArchive/aisteer_3Dencoders/tmp_data/train_x', tiff = True)\ntmp_img = ds1.read_slice(axis = 0, slice_idx = 384)\nplt.imshow(tmp_img, cmap = 'gray')\nprint(f'min {tmp_img.min()}; max {tmp_img.max()}')",
"\n##################################################\nFound existing tiff folder: train_x\nDataset shape: (768, 5120, 5120)\nmin 0.0; max 0.9781490564346313\n"
],
[
"h = plt.hist(tmp_img.reshape(-1), bins = 500)",
"_____no_output_____"
],
[
"ds1 = DataFile('/data02/MyArchive/aisteer_3Dencoders/tmp_data/train_x_rec', tiff = True)\ntmp_img = ds1.read_slice(axis = 0, slice_idx = 384)\nplt.imshow(tmp_img, cmap = 'gray')\nprint(f'min {tmp_img.min()}; max {tmp_img.max()}')",
"\n##################################################\nFound existing tiff folder: train_x_rec\nDataset shape: (768, 5120, 5120)\nmin 0.0; max 0.9432004690170288\n"
],
[
"h = plt.hist(tmp_img.reshape(-1), bins = 500)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf8db442801f307895bd44322bb622fd65a20e1 | 220,550 | ipynb | Jupyter Notebook | notebooks/data/preprocess_data.ipynb | jorisvandenbossche/DS-python-geospatial | 893a12edc5c203a75815f6dcb5f1e18c577c8cd5 | [
"BSD-3-Clause"
] | 58 | 2020-10-09T10:10:59.000Z | 2022-03-07T14:58:07.000Z | notebooks/data/preprocess_data.ipynb | dongyi1996/course-python-geospatial | 9663114437a673cbe280db21e42b308c44ac222f | [
"BSD-3-Clause"
] | 24 | 2020-09-30T19:57:14.000Z | 2021-10-05T07:21:09.000Z | notebooks/data/preprocess_data.ipynb | dongyi1996/course-python-geospatial | 9663114437a673cbe280db21e42b308c44ac222f | [
"BSD-3-Clause"
] | 19 | 2020-10-05T09:32:18.000Z | 2022-03-20T00:09:14.000Z | 216.22549 | 103,040 | 0.893865 | [
[
[
"# Preprocessing of raw data",
"_____no_output_____"
],
[
"Notebook containing the code to obtain and preprocess the raw data into a smaller or simplified dataset usable for the workshop context.",
"_____no_output_____"
],
[
"### OpenStreetMap",
"_____no_output_____"
],
[
"Downloaded the .pbf file for Belgium at https://download.geofabrik.de/europe/belgium.html (around 400 MB)",
"_____no_output_____"
]
],
[
[
"import geopandas\nimport pyrosm",
"_____no_output_____"
],
[
"muni = geopandas.read_file(\"raw/Voorlopig_referentiebestand_gemeentegrenzen_toestand_17_08_2017_GewVLA_Shape/Shapefile/Refgem.shp\")\nmuniWGS84 = muni.to_crs(\"EPSG:4326\")\ngent = muniWGS84[muniWGS84[\"NAAM\"] == \"Gent\"].geometry.item()",
"_____no_output_____"
],
[
"gent.bounds",
"_____no_output_____"
],
[
"# Initialize the OSM object\nosm = pyrosm.OSM(\"raw/belgium-latest.osm.pbf\", bounding_box=list(gent.bounds))",
"_____no_output_____"
]
],
[
[
"Actually load the data while filtering for Gent, and extract the street network (note: this doesn't take *that* long, around a minute, but requires a lot of available RAM, around 12 GB):",
"_____no_output_____"
]
],
[
[
"streets = osm.get_network()\nstreets.to_file(\"raw/streets.gpkg\", driver=\"GPKG\")",
"_____no_output_____"
]
],
[
[
"Now read this again, and trim the file a bit:",
"_____no_output_____"
]
],
[
[
"import geopandas",
"_____no_output_____"
],
[
"streets = geopandas.read_file(\"raw/streets.gpkg\")",
"_____no_output_____"
],
[
"streets.plot()",
"_____no_output_____"
],
[
"#subset = streets[[\"highway\", \"access\", \"bicycle\", \"name\", \"osm_type\", \"geometry\"]]\nsubset = streets[[\"highway\", \"name\", \"osm_type\", \"geometry\"]]\nsubset",
"_____no_output_____"
],
[
"subset.to_file(\"osm_network_gent.gpkg\", driver=\"GPKG\")",
"_____no_output_____"
]
],
[
[
"## CORINE Land Cover",
"_____no_output_____"
],
[
"Downloaded the raster file from https://land.copernicus.eu/pan-european/corine-land-cover/clc2018?tab=download",
"_____no_output_____"
]
],
[
[
"import rasterio\nimport rasterio.windows",
"_____no_output_____"
],
[
"with rasterio.open(\"raw/u2018_clc2018_v2020_20u1_raster100m/DATA/U2018_CLC2018_V2020_20u1.tif\") as src:\n print(src.meta)\n print(src.bounds)",
"{'driver': 'GTiff', 'dtype': 'int8', 'nodata': -128.0, 'width': 65000, 'height': 46000, 'count': 1, 'crs': CRS.from_epsg(3035), 'transform': Affine(100.0, 0.0, 900000.0,\n 0.0, -100.0, 5500000.0)}\nBoundingBox(left=900000.0, bottom=900000.0, right=7400000.0, top=5500000.0)\n"
]
],
[
[
"Determine the bounding box of flanders:",
"_____no_output_____"
]
],
[
[
"import geopandas\nflanders = geopandas.read_file(\"VRBG/RefgemG10.shp\")\n# converting to same CRS as the raster file\nflanders_3035 = flanders.to_crs(\"EPSG:3035\")",
"_____no_output_____"
],
[
"flanders_3035.total_bounds",
"_____no_output_____"
]
],
[
[
"Adding a small buffer around it:",
"_____no_output_____"
]
],
[
[
"from shapely.geometry import box\n\n# 10 km\nbounds = box(*flanders_3035.total_bounds).buffer(10000).bounds\nbounds",
"_____no_output_____"
]
],
[
[
"Rounding it to 100 m (the resolution of the raster file):",
"_____no_output_____"
]
],
[
[
"bounds = np.round(bounds, -2) # negative number to specify the number of positions to the *left* of the decimal point.\nbounds",
"_____no_output_____"
],
[
"with rasterio.open(\"raw/u2018_clc2018_v2020_20u1_raster100m/DATA/U2018_CLC2018_V2020_20u1.tif\") as src:\n arr = src.read(1, window=rasterio.windows.from_bounds(*bounds, src.transform))",
"_____no_output_____"
],
[
"arr.shape",
"_____no_output_____"
]
],
[
[
"Reading a window with rasterio, and writing the cropped raster to a new file:",
"_____no_output_____"
]
],
[
[
"with rasterio.open(\"raw/u2018_clc2018_v2020_20u1_raster100m/DATA/U2018_CLC2018_V2020_20u1.tif\") as src:\n window = rasterio.windows.from_bounds(*bounds, src.transform)\n\n kwargs = src.meta.copy()\n kwargs.update({\n 'height': window.height,\n 'width': window.width,\n 'transform': rasterio.windows.transform(window, src.transform)})\n\n with rasterio.open('CLC2018_V2020_20u1_flanders.tif', 'w', **kwargs) as dst:\n dst.write(src.read(window=window))",
"_____no_output_____"
]
],
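[
[
"As an optional quick check (assuming the file above was written successfully), the cropped raster can be reopened to confirm its bounds, size and CRS:\n\n```python\nimport rasterio\n\nwith rasterio.open('CLC2018_V2020_20u1_flanders.tif') as src:\n    print(src.bounds)\n    print(src.width, src.height)\n    print(src.crs)\n```",
"_____no_output_____"
]
],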
[
[
"Something similar with xarray (only exploring, not saving):",
"_____no_output_____"
]
],
[
[
"import xarray",
"_____no_output_____"
],
[
"data = xarray.open_rasterio(\"raw/u2018_clc2018_v2020_20u1_raster100m/DATA/U2018_CLC2018_V2020_20u1.tif\")",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"subset = data.sel(x=slice(bounds[0], bounds[2]), y=slice(bounds[3], bounds[1]))",
"_____no_output_____"
],
[
"subset.plot(vmin=1, vmax=44)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecf8dbca3e21579e2085b896249ead2629dfdaad | 3,177 | ipynb | Jupyter Notebook | GSE6631/.ipynb_checkpoints/GSE6631-DEG-select-checkpoint.ipynb | AmitHasanShuvo/Biomarkers_HNSC | b2d34b38c79b8c334cc95b70289429f9fd7df1b9 | [
"MIT"
] | 10 | 2020-11-17T17:37:36.000Z | 2021-01-08T09:53:38.000Z | GSE6631/.ipynb_checkpoints/GSE6631-DEG-select-checkpoint.ipynb | AmitHasanShuvo/Biomarkers_HNSC | b2d34b38c79b8c334cc95b70289429f9fd7df1b9 | [
"MIT"
] | null | null | null | GSE6631/.ipynb_checkpoints/GSE6631-DEG-select-checkpoint.ipynb | AmitHasanShuvo/Biomarkers_HNSC | b2d34b38c79b8c334cc95b70289429f9fd7df1b9 | [
"MIT"
] | null | null | null | 24.438462 | 81 | 0.488826 | [
[
[
"import pandas\n\ndf = pandas.read_csv('processed.csv')\n\nfilename = 'GSE6631-Upregulated50.csv'\nupregulated = open(filename, 'w')\n\nfilename = 'GSE6631-Downregulated50.csv'\ndownregulated = open(filename, 'w')\n\nupregulated.write('Gene,Adj.P.Val,logFC\\n')\ndownregulated.write('Gene,Adj.P.Val,logFC\\n')\n\nupDEGs = 0\ndownDEGs = 0\nfor i in range(0,len(df)):\n if float(df['adj.P.Val'][i])< 0.01 and float(df['logFC'][i])>1.0:\n upregulated.write(str(df['Gene'][i]))\n upregulated.write(',')\n upregulated.write(str(df['adj.P.Val'][i]))\n upregulated.write(',')\n #upregulated.write(str(df['P.adjusted.BH'][i]))\n #upregulated.write(',')\n upregulated.write(str(df['logFC'][i]))\n upregulated.write('\\n')\n #upregulated.write(str(df['Gene.symbol'][i]))\n #upregulated.write('\\n')\n upDEGs = upDEGs+1\n elif float(df['adj.P.Val'][i])< 0.01 and float(df['logFC'][i])<-1.0:\n downregulated.write(str(df['Gene'][i]))\n downregulated.write(',')\n downregulated.write(str(df['adj.P.Val'][i]))\n downregulated.write(',')\n ##downregulated.write(',')\n downregulated.write(str(df['logFC'][i]))\n downregulated.write('\\n')\n #downregulated.write(str(df['Gene.symbol'][i]))\n #downregulated.write('\\n')\n downDEGs = downDEGs+1\n\nprint(upDEGs)\nprint(downDEGs)\n\nupregulated.close()\ndownregulated.close()",
"251\n14\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ecf8e4ac34981c55ee490795ea9fc13e0fa39a95 | 36,515 | ipynb | Jupyter Notebook | notebooks/Cool ideas...but not working/Comp_Scatt_NeuralNetwork_ClassificationandRegression-Sim.ipynb | chiarabadiali/comp_scatt_ML | 9a87dbcdff34e63e81439483e529b9404e5ff125 | [
"MIT"
] | null | null | null | notebooks/Cool ideas...but not working/Comp_Scatt_NeuralNetwork_ClassificationandRegression-Sim.ipynb | chiarabadiali/comp_scatt_ML | 9a87dbcdff34e63e81439483e529b9404e5ff125 | [
"MIT"
] | null | null | null | notebooks/Cool ideas...but not working/Comp_Scatt_NeuralNetwork_ClassificationandRegression-Sim.ipynb | chiarabadiali/comp_scatt_ML | 9a87dbcdff34e63e81439483e529b9404e5ff125 | [
"MIT"
] | null | null | null | 71.318359 | 7,520 | 0.810188 | [
[
[
"import pandas as pd \nimport numpy as np\nimport math\nimport tensorflow as tf\nprint(pd.__version__)\nimport matplotlib.pyplot as plt\nimport progressbar\nimport scipy\nimport random",
"1.2.0\n"
]
],
[
[
"## Print Dependencies\n\n\n\nDependences are fundamental to record the computational environment.",
"_____no_output_____"
]
],
[
[
"%load_ext watermark\n\n# python, ipython, packages, and machine characteristics\n%watermark -v -m -p pandas,keras,numpy,math,tensorflow,matplotlib,h5py,progressbar,scipy\n\n# date\nprint (\" \")\n%watermark -u -n -t -z",
"Python implementation: CPython\nPython version : 3.7.7\nIPython version : 7.19.0\n\npandas : 1.2.0\nkeras : 2.4.3\nnumpy : 1.19.5\nmath : unknown\ntensorflow : 2.4.0\nmatplotlib : 3.3.3\nh5py : 2.10.0\nprogressbar: 2.5\nscipy : 1.6.0\n\nCompiler : GCC 5.4.0 20160609\nOS : Linux\nRelease : 5.8.0-41-generic\nMachine : x86_64\nProcessor : x86_64\nCPU cores : 8\nArchitecture: 64bit\n\n \nLast updated: Tue Feb 02 2021 16:36:38CET\n\n"
]
],
[
[
"### Load of the test data",
"_____no_output_____"
]
],
[
[
"from process import loaddata\nregr_data = loaddata(\"../data/regression/100.csv\")\nclass_data = loaddata(\"../data/classifier/100.csv\")",
"_____no_output_____"
],
[
"np.random.shuffle(class_data)\nyc_test = class_data[:,0]\nxc_test = class_data[:,1:]\nxc_test.shape",
"_____no_output_____"
],
[
"np.random.shuffle(regr_data)\nyr_test = regr_data[:,-3:]\nxr_test = regr_data[:,:6]",
"_____no_output_____"
]
],
[
[
"### Model Load",
"_____no_output_____"
]
],
[
[
"from tensorflow import keras \nmodel_regr = keras.models.load_model('../models/regression/large_mse250.h5')\nmodel_class = keras.models.load_model('../models/classifier/with-dropout-100.h5')",
"_____no_output_____"
],
[
"model_regr.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense (Dense) (None, 167219, 12) 84 \n_________________________________________________________________\ndense_1 (Dense) (None, 167219, 32) 416 \n_________________________________________________________________\ndense_2 (Dense) (None, 167219, 64) 2112 \n_________________________________________________________________\ndense_3 (Dense) (None, 167219, 128) 8320 \n_________________________________________________________________\ndense_4 (Dense) (None, 167219, 128) 16512 \n_________________________________________________________________\ndense_5 (Dense) (None, 167219, 64) 8256 \n_________________________________________________________________\ndense_6 (Dense) (None, 167219, 32) 2080 \n_________________________________________________________________\ndense_7 (Dense) (None, 167219, 12) 396 \n_________________________________________________________________\ndense_8 (Dense) (None, 167219, 3) 39 \n=================================================================\nTotal params: 38,215\nTrainable params: 38,215\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"model_class.summary()",
"Model: \"sequential_4\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_12 (Dense) (None, 124064, 16) 176 \n_________________________________________________________________\ndropout_4 (Dropout) (None, 124064, 16) 0 \n_________________________________________________________________\ndense_13 (Dense) (None, 124064, 16) 272 \n_________________________________________________________________\ndropout_5 (Dropout) (None, 124064, 16) 0 \n_________________________________________________________________\ndense_14 (Dense) (None, 124064, 1) 17 \n=================================================================\nTotal params: 465\nTrainable params: 465\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"### Simulation setup",
"_____no_output_____"
]
],
[
[
"def generate_pairs(modulus, gamma):\n \n a = random.uniform(-1, 1)\n b = random.uniform(-1, 1)\n c = random.uniform(-1, 1)\n direction = np.array([a, b, c])\n direction = direction/np.linalg.norm(direction)\n\n x_e = random.uniform(0, 1)\n y_e = random.uniform(0, 1) \n x_p = random.uniform(0, 1)\n y_p = random.uniform(0, 1)\n \n px = modulus*direction[0]\n py = modulus*direction[1]\n pz = modulus*direction[2]\n \n return np.array([gamma, 0, 0, px, py, pz, x_e, y_e, x_p, y_p])\n\n ",
"_____no_output_____"
],
[
"num_par_x = 100\n\nmodulus = 0.025\ngamma = 100\n\npairs = []\nfor i in range(num_par_x):\n pairs.append(generate_pairs(modulus, gamma))",
"_____no_output_____"
],
[
"pairs = np.array(pairs)\npairs.shape",
"_____no_output_____"
],
[
"y = []\npred = []\n\ny = model_class.predict(pairs)\ndata = np.hstack((y, pairs))\ndata = data[np.logical_not(data[:,0] < 0.5)]\nprediction = model_regr.predict(data[:,1:7])",
"_____no_output_____"
],
[
"print(data.shape)\nprint(prediction.shape)",
"(100, 11)\n(100, 3)\n"
],
[
"def energy_spectrum(energy_array, bins):\n energy_array = np.array(energy_array)\n plt.hist(energy_array, bins, alpha = 0.5, color = 'blue',histtype=u'step', density=True)\n plt.yscale(\"log\")\n plt.figure\n plt.show()",
"_____no_output_____"
],
[
"from tensorflow import keras \nphoton_final_nn = []",
"_____no_output_____"
],
[
"from tensorflow import keras \nfinal_p_nn = []\n\nfor pred in prediction:\n final_p_nn.append(np.linalg.norm(pred))\nbar.finish()",
"_____no_output_____"
],
[
"p1p_nn = prediction[:,0] \nenergy_spectrum(p1p_nn, 75)",
"_____no_output_____"
],
[
"p2p_nn = prediction[:,1] \nenergy_spectrum(p2p_nn, 75)",
"_____no_output_____"
],
[
"p3p_nn = prediction[:,2] \nenergy_spectrum(p3p_nn, 75)",
"_____no_output_____"
],
[
"energy_spectrum(final_p_nn, 75)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf902a18be16a5bac72ca77993154e8d561af42 | 411,102 | ipynb | Jupyter Notebook | neural-ode-playground/ode_demo_understanding_whole_code-checkpoint.ipynb | OmkarMehta/SSFwithNODE | 99bf8992839b3c07b6c8642a0d2b6c6f1b329d2e | [
"MIT"
] | 1 | 2021-11-13T22:32:29.000Z | 2021-11-13T22:32:29.000Z | neural-ode-playground/ode_demo_understanding_whole_code-checkpoint.ipynb | OmkarMehta/SSFwithNODE | 99bf8992839b3c07b6c8642a0d2b6c6f1b329d2e | [
"MIT"
] | null | null | null | neural-ode-playground/ode_demo_understanding_whole_code-checkpoint.ipynb | OmkarMehta/SSFwithNODE | 99bf8992839b3c07b6c8642a0d2b6c6f1b329d2e | [
"MIT"
] | null | null | null | 71.933858 | 149,636 | 0.732774 | [
[
[
"import os\nimport argparse\nimport time\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\n\n# from torchdiffeq import odeint_adjoint as odeint\nfrom torchdiffeq import odeint",
"_____no_output_____"
],
[
"method = 'dopri5'\n# size of the data\ndata_size = 1000\n# batch time and batch size\nbatch_time = 10\nbatch_size = 20\nniters = 2000\ntest_freq = 20\nviz = True\ndevice = torch.device('cuda:' + str(args.gpu) if torch.cuda.is_available() else 'cpu')\n\n# initial 2-d data\ntrue_y0 = torch.tensor([[2., 0.]]).to(device)\nprint(true_y0.shape)\n# 1000 time steps for neural ode diff eq state change\nt = torch.linspace(0., 25., data_size).to(device)\nprint(t.shape)\n# multiplied with true_y0 (1*2) to get (1*2) as output\ntrue_A = torch.tensor([[-0.1, 2.0], [-2.0, -0.1]]).to(device)\nprint(true_A.shape)\n",
"torch.Size([1, 2])\ntorch.Size([1000])\ntorch.Size([2, 2])\n"
],
[
"print(true_y0**3)\nprint(torch.mm(true_y0**3, true_A))",
"tensor([[8., 0.]])\ntensor([[-0.8000, 16.0000]])\n"
],
[
"class Lambda(nn.Module):\n\n def forward(self, t, y):\n return torch.mm(y**3, true_A)\n\n\nwith torch.no_grad():\n # for each time step, state is transformed.\n true_y = odeint(Lambda(), true_y0, t, method='dopri5')\nprint(true_y.shape)",
"torch.Size([1000, 1, 2])\n"
],
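[
"So the 'true' trajectory above is the solution of the ODE $\\frac{dy}{dt} = y^3 A$ (elementwise cube, row vector times `true_A`) with $y(0) = [2, 0]$, integrated at the 1000 time points in `t`; `odeint` returns one 1x2 state per time point, which is why `true_y` has shape (1000, 1, 2).",
"_____no_output_____"
],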
[
"true_y.shape",
"_____no_output_____"
],
[
"true_y[0] #true_y0",
"_____no_output_____"
],
[
"# equal-spaced points from 0 to (1000-10)\nnp.arange(data_size - batch_time, dtype=np.int64)",
"_____no_output_____"
],
[
"# get random samples\nnp.random.choice(np.arange(data_size - batch_time, dtype=np.int64), batch_size, replace=False)",
"_____no_output_____"
]
],
[
[
"## Don't run next cell",
"_____no_output_____"
]
],
[
[
"s = torch.from_numpy(np.random.choice(np.arange(data_size - batch_time, dtype=np.int64), batch_size, replace=False))\nprint(s)",
"tensor([346, 263, 759, 150, 850, 685, 548, 760, 33, 204, 302, 947, 293, 736,\n 780, 933, 823, 683, 200, 282])\n"
],
[
"batch_y0 = true_y[s]\nprint(batch_y0[0]) # stored at index 346\nprint(batch_y0.shape)",
"tensor([[ 0.7443, -0.1813]])\ntorch.Size([20, 1, 2])\n"
],
[
"batch_t = t[:batch_time]\nprint(batch_t)\nprint(batch_t.shape)",
"tensor([0.0000, 0.0250, 0.0501, 0.0751, 0.1001, 0.1251, 0.1502, 0.1752, 0.2002,\n 0.2252])\ntorch.Size([10])\n"
],
[
"[true_y[s + i] for i in range(batch_time)]",
"_____no_output_____"
],
[
"batch_y = torch.stack([true_y[s + i] for i in range(batch_time)], dim=0) \nprint(batch_y)\n",
"tensor([[[[ 7.4431e-01, -1.8129e-01]],\n\n [[-7.1760e-01, -6.8596e-01]],\n\n [[ 5.2162e-01, 1.0927e-01]],\n\n [[ 1.0398e+00, -2.8820e-01]],\n\n [[ 2.2853e-01, 4.9086e-01]],\n\n [[ 4.8596e-01, -4.3149e-01]],\n\n [[-5.8822e-01, -3.4899e-01]],\n\n [[ 5.2119e-01, 1.1636e-01]],\n\n [[-5.9115e-01, -1.5794e+00]],\n\n [[-4.2760e-01, 9.1278e-01]],\n\n [[ 2.5163e-01, -7.8521e-01]],\n\n [[-3.0643e-01, 4.4719e-01]],\n\n [[ 2.7639e-02, -7.9846e-01]],\n\n [[ 5.3022e-01, -5.8248e-02]],\n\n [[ 5.0707e-01, 2.5298e-01]],\n\n [[-2.4025e-01, 4.6517e-01]],\n\n [[ 3.7924e-01, 4.5714e-01]],\n\n [[ 4.7814e-01, -4.4313e-01]],\n\n [[-2.7105e-01, 9.2965e-01]],\n\n [[-2.5991e-01, -8.1068e-01]]],\n\n\n [[[ 7.4353e-01, -1.6067e-01]],\n\n [[-6.9993e-01, -7.0295e-01]],\n\n [[ 5.2119e-01, 1.1636e-01]],\n\n [[ 1.0379e+00, -2.3203e-01]],\n\n [[ 2.2257e-01, 4.9114e-01]],\n\n [[ 4.8960e-01, -4.2549e-01]],\n\n [[-5.8549e-01, -3.5900e-01]],\n\n [[ 5.2075e-01, 1.2343e-01]],\n\n [[-3.9409e-01, -1.5758e+00]],\n\n [[-4.6506e-01, 9.0644e-01]],\n\n [[ 2.7572e-01, -7.8308e-01]],\n\n [[-3.1081e-01, 4.4550e-01]],\n\n [[ 5.3056e-02, -7.9719e-01]],\n\n [[ 5.2986e-01, -5.0795e-02]],\n\n [[ 5.0590e-01, 2.5944e-01]],\n\n [[-2.4524e-01, 4.6420e-01]],\n\n [[ 3.7429e-01, 4.5957e-01]],\n\n [[ 4.8213e-01, -4.3737e-01]],\n\n [[-3.1100e-01, 9.2641e-01]],\n\n [[-2.3323e-01, -8.1010e-01]]],\n\n\n [[[ 7.4267e-01, -1.4012e-01]],\n\n [[-6.8113e-01, -7.1854e-01]],\n\n [[ 5.2075e-01, 1.2343e-01]],\n\n [[ 1.0356e+00, -1.7623e-01]],\n\n [[ 2.1661e-01, 4.9137e-01]],\n\n [[ 4.9308e-01, -4.1936e-01]],\n\n [[-5.8258e-01, -3.6885e-01]],\n\n [[ 5.2030e-01, 1.3049e-01]],\n\n [[-1.9962e-01, -1.5676e+00]],\n\n [[-5.0161e-01, 8.9893e-01]],\n\n [[ 2.9958e-01, -7.8069e-01]],\n\n [[-3.1513e-01, 4.4374e-01]],\n\n [[ 7.8350e-02, -7.9591e-01]],\n\n [[ 5.2949e-01, -4.3357e-02]],\n\n [[ 5.0467e-01, 2.6585e-01]],\n\n [[-2.5019e-01, 4.6319e-01]],\n\n [[ 3.6927e-01, 4.6190e-01]],\n\n [[ 4.8596e-01, -4.3149e-01]],\n\n [[-3.5047e-01, 9.2262e-01]],\n\n [[-2.0663e-01, -8.0931e-01]]],\n\n\n [[[ 7.4176e-01, -1.1965e-01]],\n\n [[-6.6124e-01, -7.3273e-01]],\n\n [[ 5.2030e-01, 1.3049e-01]],\n\n [[ 1.0330e+00, -1.2085e-01]],\n\n [[ 2.1065e-01, 4.9156e-01]],\n\n [[ 4.9639e-01, -4.1312e-01]],\n\n [[-5.7947e-01, -3.7854e-01]],\n\n [[ 5.1982e-01, 1.3752e-01]],\n\n [[-8.5643e-03, -1.5581e+00]],\n\n [[-5.3709e-01, 8.9012e-01]],\n\n [[ 3.2320e-01, -7.7800e-01]],\n\n [[-3.1940e-01, 4.4193e-01]],\n\n [[ 1.0352e-01, -7.9461e-01]],\n\n [[ 5.2913e-01, -3.5934e-02]],\n\n [[ 5.0338e-01, 2.7221e-01]],\n\n [[-2.5511e-01, 4.6214e-01]],\n\n [[ 3.6417e-01, 4.6412e-01]],\n\n [[ 4.8960e-01, -4.2549e-01]],\n\n [[-3.8937e-01, 9.1812e-01]],\n\n [[-1.8013e-01, -8.0835e-01]]],\n\n\n [[[ 7.4080e-01, -9.9261e-02]],\n\n [[-6.4033e-01, -7.4552e-01]],\n\n [[ 5.1982e-01, 1.3752e-01]],\n\n [[ 1.0303e+00, -6.5897e-02]],\n\n [[ 2.0468e-01, 4.9171e-01]],\n\n [[ 4.9953e-01, -4.0677e-01]],\n\n [[-5.7617e-01, -3.8805e-01]],\n\n [[ 5.1933e-01, 1.4453e-01]],\n\n [[ 1.7904e-01, -1.5487e+00]],\n\n [[-5.7137e-01, 8.7985e-01]],\n\n [[ 3.4654e-01, -7.7494e-01]],\n\n [[-3.2361e-01, 4.4005e-01]],\n\n [[ 1.2857e-01, -7.9328e-01]],\n\n [[ 5.2876e-01, -2.8528e-02]],\n\n [[ 5.0201e-01, 2.7852e-01]],\n\n [[-2.5999e-01, 4.6103e-01]],\n\n [[ 3.5902e-01, 4.6624e-01]],\n\n [[ 4.9308e-01, -4.1936e-01]],\n\n [[-4.2760e-01, 9.1278e-01]],\n\n [[-1.5373e-01, -8.0726e-01]]],\n\n\n [[[ 7.3983e-01, -7.8952e-02]],\n\n [[-6.1847e-01, -7.5695e-01]],\n\n [[ 5.1933e-01, 1.4453e-01]],\n\n [[ 1.0275e+00, -1.1380e-02]],\n\n [[ 1.9871e-01, 4.9183e-01]],\n\n 
[[ 5.0250e-01, -4.0031e-01]],\n\n [[-5.7266e-01, -3.9739e-01]],\n\n [[ 5.1882e-01, 1.5152e-01]],\n\n [[ 3.6309e-01, -1.5383e+00]],\n\n [[-6.0429e-01, 8.6800e-01]],\n\n [[ 3.6957e-01, -7.7149e-01]],\n\n [[-3.2776e-01, 4.3811e-01]],\n\n [[ 1.5348e-01, -7.9189e-01]],\n\n [[ 5.2839e-01, -2.1136e-02]],\n\n [[ 5.0058e-01, 2.8477e-01]],\n\n [[-2.6483e-01, 4.5989e-01]],\n\n [[ 3.5380e-01, 4.6825e-01]],\n\n [[ 4.9639e-01, -4.1312e-01]],\n\n [[-4.6506e-01, 9.0644e-01]],\n\n [[-1.2745e-01, -8.0609e-01]]],\n\n\n [[[ 7.3883e-01, -5.8724e-02]],\n\n [[-5.9576e-01, -7.6705e-01]],\n\n [[ 5.1882e-01, 1.5152e-01]],\n\n [[ 1.0248e+00, 4.2704e-02]],\n\n [[ 1.9273e-01, 4.9190e-01]],\n\n [[ 5.0531e-01, -3.9375e-01]],\n\n [[-5.6895e-01, -4.0654e-01]],\n\n [[ 5.1828e-01, 1.5849e-01]],\n\n [[ 5.4276e-01, -1.5245e+00]],\n\n [[-6.3568e-01, 8.5446e-01]],\n\n [[ 3.9224e-01, -7.6758e-01]],\n\n [[-3.3185e-01, 4.3610e-01]],\n\n [[ 1.7825e-01, -7.9042e-01]],\n\n [[ 5.2802e-01, -1.3760e-02]],\n\n [[ 4.9907e-01, 2.9096e-01]],\n\n [[-2.6963e-01, 4.5869e-01]],\n\n [[ 3.4852e-01, 4.7016e-01]],\n\n [[ 4.9953e-01, -4.0677e-01]],\n\n [[-5.0161e-01, 8.9893e-01]],\n\n [[-1.0129e-01, -8.0486e-01]]],\n\n\n [[[ 7.3783e-01, -3.8578e-02]],\n\n [[-5.7227e-01, -7.7587e-01]],\n\n [[ 5.1828e-01, 1.5849e-01]],\n\n [[ 1.0221e+00, 9.6363e-02]],\n\n [[ 1.8676e-01, 4.9195e-01]],\n\n [[ 5.0797e-01, -3.8709e-01]],\n\n [[-5.6502e-01, -4.1549e-01]],\n\n [[ 5.1772e-01, 1.6544e-01]],\n\n [[ 7.1603e-01, -1.5030e+00]],\n\n [[-6.6540e-01, 8.3914e-01]],\n\n [[ 4.1452e-01, -7.6317e-01]],\n\n [[-3.3588e-01, 4.3404e-01]],\n\n [[ 2.0288e-01, -7.8884e-01]],\n\n [[ 5.2765e-01, -6.4002e-03]],\n\n [[ 4.9749e-01, 2.9709e-01]],\n\n [[-2.7439e-01, 4.5744e-01]],\n\n [[ 3.4318e-01, 4.7196e-01]],\n\n [[ 5.0250e-01, -4.0031e-01]],\n\n [[-5.3709e-01, 8.9012e-01]],\n\n [[-7.5259e-02, -8.0359e-01]]],\n\n\n [[[ 7.3683e-01, -1.8516e-02]],\n\n [[-5.4810e-01, -7.8349e-01]],\n\n [[ 5.1772e-01, 1.6544e-01]],\n\n [[ 1.0194e+00, 1.4959e-01]],\n\n [[ 1.8078e-01, 4.9196e-01]],\n\n [[ 5.1046e-01, -3.8034e-01]],\n\n [[-5.6086e-01, -4.2423e-01]],\n\n [[ 5.1714e-01, 1.7236e-01]],\n\n [[ 8.7943e-01, -1.4690e+00]],\n\n [[-6.9331e-01, 8.2200e-01]],\n\n [[ 4.3636e-01, -7.5821e-01]],\n\n [[-3.3985e-01, 4.3190e-01]],\n\n [[ 2.2734e-01, -7.8712e-01]],\n\n [[ 5.2728e-01, 9.4467e-04]],\n\n [[ 4.9583e-01, 3.0315e-01]],\n\n [[-2.7911e-01, 4.5614e-01]],\n\n [[ 3.3780e-01, 4.7368e-01]],\n\n [[ 5.0531e-01, -3.9375e-01]],\n\n [[-5.7137e-01, 8.7985e-01]],\n\n [[-4.9348e-02, -8.0230e-01]]],\n\n\n [[[ 7.3583e-01, 1.4651e-03]],\n\n [[-5.2333e-01, -7.8997e-01]],\n\n [[ 5.1714e-01, 1.7236e-01]],\n\n [[ 1.0165e+00, 2.0237e-01]],\n\n [[ 1.7481e-01, 4.9194e-01]],\n\n [[ 5.1281e-01, -3.7350e-01]],\n\n [[-5.5649e-01, -4.3276e-01]],\n\n [[ 5.1652e-01, 1.7926e-01]],\n\n [[ 1.0283e+00, -1.4176e+00]],\n\n [[-7.1929e-01, 8.0300e-01]],\n\n [[ 4.5772e-01, -7.5265e-01]],\n\n [[-3.4375e-01, 4.2970e-01]],\n\n [[ 2.5163e-01, -7.8521e-01]],\n\n [[ 5.2692e-01, 8.2742e-03]],\n\n [[ 4.9409e-01, 3.0915e-01]],\n\n [[-2.8378e-01, 4.5479e-01]],\n\n [[ 3.3235e-01, 4.7529e-01]],\n\n [[ 5.0797e-01, -3.8709e-01]],\n\n [[-6.0429e-01, 8.6800e-01]],\n\n [[-2.3562e-02, -8.0102e-01]]]])\n"
],
[
"print(batch_y.shape)",
"torch.Size([10, 20, 1, 2])\n"
],
[
"print(true_y[346])\nprint(true_y[347])\nprint(true_y[348])\nprint(true_y[349])\nprint(true_y[350])\nprint(true_y[351])\nprint(true_y[352])\nprint(true_y[353])\nprint(true_y[354])\nprint(true_y[355])",
"tensor([[ 0.7443, -0.1813]])\ntensor([[ 0.7435, -0.1607]])\ntensor([[ 0.7427, -0.1401]])\ntensor([[ 0.7418, -0.1197]])\ntensor([[ 0.7408, -0.0993]])\ntensor([[ 0.7398, -0.0790]])\ntensor([[ 0.7388, -0.0587]])\ntensor([[ 0.7378, -0.0386]])\ntensor([[ 0.7368, -0.0185]])\ntensor([[0.7358, 0.0015]])\n"
],
[
"print(batch_y[:, 0, :, :])",
"tensor([[[ 0.7443, -0.1813]],\n\n [[ 0.7435, -0.1607]],\n\n [[ 0.7427, -0.1401]],\n\n [[ 0.7418, -0.1197]],\n\n [[ 0.7408, -0.0993]],\n\n [[ 0.7398, -0.0790]],\n\n [[ 0.7388, -0.0587]],\n\n [[ 0.7378, -0.0386]],\n\n [[ 0.7368, -0.0185]],\n\n [[ 0.7358, 0.0015]]])\n"
],
[
"def get_batch():\n # get random indices of true_y from the size (data_size-batch_time)\n s = torch.from_numpy(np.random.choice(np.arange(data_size - batch_time, dtype=np.int64), batch_size, replace=False))\n batch_y0 = true_y[s] # (M, D)\n batch_t = t[:batch_time] # (T)\n # https://deeplizard.com/learn/video/kF2AlpykJGY\n batch_y = torch.stack([true_y[s + i] for i in range(batch_time)], dim=0) # (T, M, D)\n return batch_y0.to(device), batch_t.to(device), batch_y.to(device)",
"_____no_output_____"
],
[
"\ndef makedirs(dirname):\n if not os.path.exists(dirname):\n os.makedirs(dirname)",
"_____no_output_____"
],
[
"if viz:\n makedirs('png')\n import matplotlib.pyplot as plt\n fig = plt.figure(figsize=(12, 4), facecolor='white')\n ax_traj = fig.add_subplot(131, frameon=False)\n ax_phase = fig.add_subplot(132, frameon=False)\n ax_vecfield = fig.add_subplot(133, frameon=False)\n plt.show(block=False)",
"_____no_output_____"
],
[
"true_y.cpu().numpy().shape",
"_____no_output_____"
],
[
"print(true_y.cpu().numpy())\nprint(true_y.cpu().numpy().shape)",
"[[[ 2. 0. ]]\n\n [[ 1.9795096 0.39435685]]\n\n [[ 1.9493685 0.7741643 ]]\n\n ...\n\n [[-0.4417671 0.28824317]]\n\n [[-0.44272214 0.28385508]]\n\n [[-0.4436225 0.27944273]]]\n(1000, 1, 2)\n"
],
[
"print(true_y.cpu().numpy()[:, 0])\nprint(true_y.cpu().numpy()[:, 0].shape)",
"[[ 2. 0. ]\n [ 1.9795096 0.39435685]\n [ 1.9493685 0.7741643 ]\n ...\n [-0.4417671 0.28824317]\n [-0.44272214 0.28385508]\n [-0.4436225 0.27944273]]\n(1000, 2)\n"
],
[
"print(true_y.cpu().numpy()[:, 0, 0])\nprint(true_y.cpu().numpy()[:, 0, 0].shape)",
"[ 2.00000000e+00 1.97950959e+00 1.94936848e+00 1.88667810e+00\n 1.76388586e+00 1.56502116e+00 1.29796958e+00 9.88777399e-01\n 6.64197147e-01 3.40621561e-01 2.41582021e-02 -2.84344405e-01\n -5.84121108e-01 -8.70496809e-01 -1.13273549e+00 -1.35544181e+00\n -1.52457583e+00 -1.63506424e+00 -1.69368446e+00 -1.71499944e+00\n -1.71467769e+00 -1.70490086e+00 -1.69280040e+00 -1.68060374e+00\n -1.66623545e+00 -1.64403510e+00 -1.60577214e+00 -1.54255199e+00\n -1.44759762e+00 -1.31899405e+00 -1.16066325e+00 -9.80719090e-01\n -7.88343549e-01 -5.91152966e-01 -3.94089460e-01 -1.99623823e-01\n -8.56431108e-03 1.79038838e-01 3.63085806e-01 5.42759776e-01\n 7.16030657e-01 8.79428267e-01 1.02827561e+00 1.15751982e+00\n 1.26302695e+00 1.34285271e+00 1.39785290e+00 1.43134284e+00\n 1.44808900e+00 1.45314991e+00 1.45103168e+00 1.44521737e+00\n 1.43802261e+00 1.43064511e+00 1.42324805e+00 1.41506982e+00\n 1.40452683e+00 1.38936734e+00 1.36687171e+00 1.33420789e+00\n 1.28887916e+00 1.22922814e+00 1.15482461e+00 1.06659997e+00\n 9.66635704e-01 8.57703924e-01 7.42722154e-01 6.24309719e-01\n 5.04526019e-01 3.84803087e-01 2.66009331e-01 1.48575455e-01\n 3.26462872e-02 -8.17763358e-02 -1.94702983e-01 -3.06043506e-01\n -4.15510893e-01 -5.22541404e-01 -6.26249909e-01 -7.25427270e-01\n -8.18601191e-01 -9.04170156e-01 -9.80602145e-01 -1.04666805e+00\n -1.10163379e+00 -1.14540148e+00 -1.17849874e+00 -1.20199382e+00\n -1.21730316e+00 -1.22601831e+00 -1.22971499e+00 -1.22984040e+00\n -1.22761643e+00 -1.22401750e+00 -1.21975064e+00 -1.21527588e+00\n -1.21080887e+00 -1.20634365e+00 -1.20166945e+00 -1.19638443e+00\n -1.18991864e+00 -1.18155587e+00 -1.17047203e+00 -1.15577710e+00\n -1.13659179e+00 -1.11211014e+00 -1.08169150e+00 -1.04494393e+00\n -1.00174999e+00 -9.52313125e-01 -8.97123218e-01 -8.36898029e-01\n -7.72502065e-01 -7.04856336e-01 -6.34857714e-01 -5.63312948e-01\n -4.90905285e-01 -4.18174416e-01 -3.45523477e-01 -2.73227185e-01\n -2.01457098e-01 -1.30306944e-01 -5.98128475e-02 1.00231599e-02\n 7.92082995e-02 1.47744507e-01 2.15606034e-01 2.82728761e-01\n 3.48992735e-01 4.14213032e-01 4.78130668e-01 5.40414691e-01\n 6.00662112e-01 6.58413351e-01 7.13171899e-01 7.64437020e-01\n 8.11734319e-01 8.54655564e-01 8.92896354e-01 9.26279426e-01\n 9.54772472e-01 9.78485525e-01 9.97662485e-01 1.01266181e+00\n 1.02391446e+00 1.03190529e+00 1.03713655e+00 1.04010057e+00\n 1.04126608e+00 1.04105353e+00 1.03983128e+00 1.03791618e+00\n 1.03556073e+00 1.03296125e+00 1.03025830e+00 1.02753568e+00\n 1.02483189e+00 1.02213073e+00 1.01937068e+00 1.01645172e+00\n 1.01323056e+00 1.00952685e+00 1.00512564e+00 9.99786019e-01\n 9.93247330e-01 9.85229194e-01 9.75454450e-01 9.63651657e-01\n 9.49570417e-01 9.32997763e-01 9.13766801e-01 8.91770184e-01\n 8.66965890e-01 8.39381695e-01 8.09115767e-01 7.76325047e-01\n 7.41227150e-01 7.04069555e-01 6.65130317e-01 6.24698818e-01\n 5.83060563e-01 5.40485859e-01 4.97224510e-01 4.53498513e-01\n 4.09496665e-01 3.65377516e-01 3.21265846e-01 2.77261317e-01\n 2.33435556e-01 1.89838201e-01 1.46500796e-01 1.03442743e-01\n 6.06707223e-02 1.81859136e-02 -2.40154192e-02 -6.59342781e-02\n -1.07571356e-01 -1.48920804e-01 -1.89969674e-01 -2.30692372e-01\n -2.71052390e-01 -3.10998708e-01 -3.50465149e-01 -3.89366657e-01\n -4.27603751e-01 -4.65060294e-01 -5.01605093e-01 -5.37093580e-01\n -5.71373641e-01 -6.04288161e-01 -6.35680318e-01 -6.65399551e-01\n -6.93310320e-01 -7.19291925e-01 -7.43250608e-01 -7.65119076e-01\n -7.84861684e-01 -8.02476585e-01 -8.17993164e-01 -8.31472337e-01\n -8.43002141e-01 -8.52697670e-01 
-8.60686958e-01 -8.67113709e-01\n -8.72132480e-01 -8.75900209e-01 -8.78572166e-01 -8.80300403e-01\n -8.81228745e-01 -8.81494164e-01 -8.81220520e-01 -8.80515873e-01\n -8.79484057e-01 -8.78209233e-01 -8.76764953e-01 -8.75210106e-01\n -8.73592615e-01 -8.71945202e-01 -8.70293140e-01 -8.68648827e-01\n -8.67011368e-01 -8.65371406e-01 -8.63710284e-01 -8.61996949e-01\n -8.60194504e-01 -8.58254254e-01 -8.56121123e-01 -8.53730083e-01\n -8.51013243e-01 -8.47893417e-01 -8.44289362e-01 -8.40116978e-01\n -8.35288823e-01 -8.29717577e-01 -8.23317409e-01 -8.16005528e-01\n -8.07707131e-01 -7.98350692e-01 -7.87877738e-01 -7.76241124e-01\n -7.63406217e-01 -7.49352813e-01 -7.34081686e-01 -7.17599750e-01\n -6.99934065e-01 -6.81128144e-01 -6.61236763e-01 -6.40325904e-01\n -6.18471026e-01 -5.95756650e-01 -5.72267890e-01 -5.48095703e-01\n -5.23330092e-01 -4.98059243e-01 -4.72366810e-01 -4.46334273e-01\n -4.20034975e-01 -3.93535554e-01 -3.66896719e-01 -3.40172261e-01\n -3.13406646e-01 -2.86640674e-01 -2.59905636e-01 -2.33229369e-01\n -2.06631288e-01 -1.80128857e-01 -1.53732985e-01 -1.27453133e-01\n -1.01294070e-01 -7.52586871e-02 -4.93477061e-02 -2.35617980e-02\n 2.10007327e-03 2.76389737e-02 5.30561209e-02 7.83504546e-02\n 1.03521183e-01 1.28565460e-01 1.53478846e-01 1.78252980e-01\n 2.02878490e-01 2.27342293e-01 2.51628429e-01 2.75716752e-01\n 2.99584091e-01 3.23202550e-01 3.46542031e-01 3.69566917e-01\n 3.92239779e-01 4.14519042e-01 4.36360925e-01 4.57718581e-01\n 4.78543937e-01 4.98788357e-01 5.18401623e-01 5.37336528e-01\n 5.55546105e-01 5.72986722e-01 5.89618981e-01 6.05406523e-01\n 6.20319843e-01 6.34335935e-01 6.47437096e-01 6.59613907e-01\n 6.70864224e-01 6.81192338e-01 6.90611243e-01 6.99139059e-01\n 7.06801653e-01 7.13629425e-01 7.19659805e-01 7.24931300e-01\n 7.29488075e-01 7.33375430e-01 7.36642003e-01 7.39335835e-01\n 7.41506934e-01 7.43202746e-01 7.44471371e-01 7.45358944e-01\n 7.45908260e-01 7.46163189e-01 7.46162653e-01 7.45943248e-01\n 7.45542109e-01 7.44986415e-01 7.44306207e-01 7.43526816e-01\n 7.42671251e-01 7.41758347e-01 7.40804970e-01 7.39825070e-01\n 7.38830984e-01 7.37829804e-01 7.36828208e-01 7.35829651e-01\n 7.34833777e-01 7.33841002e-01 7.32846677e-01 7.31846273e-01\n 7.30830073e-01 7.29788721e-01 7.28710353e-01 7.27577329e-01\n 7.26377726e-01 7.25092113e-01 7.23700821e-01 7.22183228e-01\n 7.20515907e-01 7.18675315e-01 7.16636539e-01 7.14373708e-01\n 7.11859286e-01 7.09067523e-01 7.05970287e-01 7.02541113e-01\n 6.98752344e-01 6.94578707e-01 6.89995229e-01 6.84978008e-01\n 6.79507494e-01 6.73561931e-01 6.67124569e-01 6.60182178e-01\n 6.52721286e-01 6.44735873e-01 6.36221051e-01 6.27173603e-01\n 6.17597818e-01 6.07499003e-01 5.96885681e-01 5.85770130e-01\n 5.74168384e-01 5.62098444e-01 5.49581110e-01 5.36638916e-01\n 5.23297071e-01 5.09581745e-01 4.95519340e-01 4.81138915e-01\n 4.66467470e-01 4.51531857e-01 4.36362714e-01 4.20985132e-01\n 4.05425847e-01 3.89710426e-01 3.73861521e-01 3.57902735e-01\n 3.41853738e-01 3.25734824e-01 3.09563845e-01 2.93355852e-01\n 2.77125955e-01 2.60887861e-01 2.44651765e-01 2.28428870e-01\n 2.12226674e-01 1.96053296e-01 1.79915071e-01 1.63816497e-01\n 1.47762150e-01 1.31755903e-01 1.15799025e-01 9.98947471e-02\n 8.40423033e-02 6.82442784e-02 5.25011271e-02 3.68117020e-02\n 2.11764406e-02 5.59566543e-03 -9.93212685e-03 -2.54060160e-02\n -4.08273749e-02 -5.61956391e-02 -7.15099648e-02 -8.67709145e-02\n -1.01977006e-01 -1.17126264e-01 -1.32217944e-01 -1.47248462e-01\n -1.62215605e-01 -1.77115425e-01 -1.91942781e-01 -2.06693590e-01\n -2.21361279e-01 
-2.35938504e-01 -2.50418872e-01 -2.64792711e-01\n -2.79051870e-01 -2.93185681e-01 -3.07182670e-01 -3.21032226e-01\n -3.34720939e-01 -3.48235697e-01 -3.61563623e-01 -3.74689341e-01\n -3.87598991e-01 -4.00277257e-01 -4.12708253e-01 -4.24877524e-01\n -4.36769396e-01 -4.48368609e-01 -4.59660977e-01 -4.70631570e-01\n -4.81267661e-01 -4.91557211e-01 -5.01486957e-01 -5.11047661e-01\n -5.20229757e-01 -5.29024124e-01 -5.37426293e-01 -5.45430183e-01\n -5.53033233e-01 -5.60233653e-01 -5.67030847e-01 -5.73427200e-01\n -5.79425573e-01 -5.85030496e-01 -5.90249598e-01 -5.95089197e-01\n -5.99558949e-01 -6.03668451e-01 -6.07430339e-01 -6.10855579e-01\n -6.13957703e-01 -6.16750479e-01 -6.19248271e-01 -6.21465564e-01\n -6.23417735e-01 -6.25119686e-01 -6.26586616e-01 -6.27834082e-01\n -6.28876865e-01 -6.29729807e-01 -6.30407512e-01 -6.30923867e-01\n -6.31294191e-01 -6.31529629e-01 -6.31643713e-01 -6.31648302e-01\n -6.31555200e-01 -6.31376386e-01 -6.31120443e-01 -6.30797684e-01\n -6.30417109e-01 -6.29987061e-01 -6.29515171e-01 -6.29008710e-01\n -6.28473818e-01 -6.27916157e-01 -6.27340913e-01 -6.26752496e-01\n -6.26154721e-01 -6.25551879e-01 -6.24945164e-01 -6.24337196e-01\n -6.23729527e-01 -6.23122990e-01 -6.22518837e-01 -6.21915817e-01\n -6.21313810e-01 -6.20711923e-01 -6.20108545e-01 -6.19499028e-01\n -6.18884146e-01 -6.18259788e-01 -6.17622018e-01 -6.16967261e-01\n -6.16289079e-01 -6.15585208e-01 -6.14849448e-01 -6.14076138e-01\n -6.13259256e-01 -6.12392366e-01 -6.11467779e-01 -6.10479772e-01\n -6.09420538e-01 -6.08282208e-01 -6.07057214e-01 -6.05737150e-01\n -6.04313910e-01 -6.02781475e-01 -6.01127565e-01 -5.99345446e-01\n -5.97426236e-01 -5.95361769e-01 -5.93143761e-01 -5.90764880e-01\n -5.88215530e-01 -5.85488975e-01 -5.82577467e-01 -5.79474390e-01\n -5.76171637e-01 -5.72664917e-01 -5.68948090e-01 -5.65015793e-01\n -5.60863733e-01 -5.56489408e-01 -5.51888049e-01 -5.47058225e-01\n -5.41997969e-01 -5.36706984e-01 -5.31185269e-01 -5.25432646e-01\n -5.19451916e-01 -5.13245046e-01 -5.06814897e-01 -5.00166655e-01\n -4.93302882e-01 -4.86229599e-01 -4.78952914e-01 -4.71478522e-01\n -4.63813633e-01 -4.55965787e-01 -4.47942138e-01 -4.39751267e-01\n -4.31400716e-01 -4.22899902e-01 -4.14257079e-01 -4.05480564e-01\n -3.96579534e-01 -3.87563020e-01 -3.78438801e-01 -3.69216472e-01\n -3.59903634e-01 -3.50508839e-01 -3.41040432e-01 -3.31504762e-01\n -3.21910501e-01 -3.12264711e-01 -3.02573502e-01 -2.92843878e-01\n -2.83081144e-01 -2.73292542e-01 -2.63482213e-01 -2.53654987e-01\n -2.43815973e-01 -2.33969763e-01 -2.24119604e-01 -2.14269757e-01\n -2.04422936e-01 -1.94582343e-01 -1.84751123e-01 -1.74930856e-01\n -1.65124074e-01 -1.55333072e-01 -1.45558640e-01 -1.35802910e-01\n -1.26066029e-01 -1.16350107e-01 -1.06655940e-01 -9.69835073e-02\n -8.73336866e-02 -7.77071118e-02 -6.81035444e-02 -5.85237928e-02\n -4.89672422e-02 -3.94342989e-02 -2.99253073e-02 -2.04394981e-02\n -1.09772878e-02 -1.53868611e-03 7.87702296e-03 1.72692072e-02\n 2.66386420e-02 3.59849967e-02 4.53079268e-02 5.46080433e-02\n 6.38848543e-02 7.31377825e-02 8.23672190e-02 9.15721357e-02\n 1.00752577e-01 1.09907627e-01 1.19036146e-01 1.28137961e-01\n 1.37211621e-01 1.46255612e-01 1.55269623e-01 1.64250880e-01\n 1.73198685e-01 1.82110414e-01 1.90985128e-01 1.99819162e-01\n 2.08610862e-01 2.17357874e-01 2.26056755e-01 2.34704852e-01\n 2.43299708e-01 2.51836658e-01 2.60313481e-01 2.68725216e-01\n 2.77068645e-01 2.85340399e-01 2.93534994e-01 3.01648825e-01\n 3.09677422e-01 3.17616284e-01 3.25461477e-01 3.33207160e-01\n 3.40849012e-01 3.48382920e-01 3.55802953e-01 
3.63104939e-01\n 3.70284736e-01 3.77336591e-01 3.84256303e-01 3.91039342e-01\n 3.97681475e-01 4.04179066e-01 4.10527021e-01 4.16722119e-01\n 4.22761142e-01 4.28640276e-01 4.34357047e-01 4.39907968e-01\n 4.45290983e-01 4.50504422e-01 4.55545753e-01 4.60414022e-01\n 4.65108514e-01 4.69627857e-01 4.73972559e-01 4.78140682e-01\n 4.82134849e-01 4.85955358e-01 4.89602745e-01 4.93078768e-01\n 4.96385306e-01 4.99525130e-01 5.02499938e-01 5.05312324e-01\n 5.07965684e-01 5.10463417e-01 5.12808621e-01 5.15005052e-01\n 5.17057240e-01 5.18968582e-01 5.20743847e-01 5.22386909e-01\n 5.23902476e-01 5.25296152e-01 5.26570976e-01 5.27732313e-01\n 5.28784871e-01 5.29733300e-01 5.30582428e-01 5.31338871e-01\n 5.32004178e-01 5.32584488e-01 5.33084214e-01 5.33507705e-01\n 5.33859611e-01 5.34144163e-01 5.34366310e-01 5.34529150e-01\n 5.34636974e-01 5.34693778e-01 5.34703314e-01 5.34669101e-01\n 5.34594595e-01 5.34482956e-01 5.34337759e-01 5.34163952e-01\n 5.33960879e-01 5.33732831e-01 5.33482432e-01 5.33212185e-01\n 5.32924354e-01 5.32621086e-01 5.32304347e-01 5.31978011e-01\n 5.31640649e-01 5.31295240e-01 5.30943155e-01 5.30585825e-01\n 5.30224442e-01 5.29860139e-01 5.29493570e-01 5.29125154e-01\n 5.28756499e-01 5.28387785e-01 5.28019249e-01 5.27651370e-01\n 5.27284384e-01 5.26918232e-01 5.26551723e-01 5.26186407e-01\n 5.25821090e-01 5.25455594e-01 5.25089264e-01 5.24721444e-01\n 5.24352312e-01 5.23979485e-01 5.23602605e-01 5.23220658e-01\n 5.22832334e-01 5.22436500e-01 5.22031665e-01 5.21617889e-01\n 5.21191180e-01 5.20751059e-01 5.20295382e-01 5.19822717e-01\n 5.19331038e-01 5.18818200e-01 5.18282294e-01 5.17723083e-01\n 5.17135084e-01 5.16517639e-01 5.15868068e-01 5.15184343e-01\n 5.14463842e-01 5.13704181e-01 5.12902796e-01 5.12057185e-01\n 5.11164725e-01 5.10220647e-01 5.09225905e-01 5.08176446e-01\n 5.07069409e-01 5.05902112e-01 5.04672229e-01 5.03376842e-01\n 5.02013624e-01 5.00580668e-01 4.99073923e-01 4.97491956e-01\n 4.95832115e-01 4.94091898e-01 4.92269516e-01 4.90362555e-01\n 4.88368809e-01 4.86289084e-01 4.84117359e-01 4.81853962e-01\n 4.79497075e-01 4.77045268e-01 4.74497527e-01 4.71852601e-01\n 4.69109416e-01 4.66267377e-01 4.63325500e-01 4.60283875e-01\n 4.57141936e-01 4.53899294e-01 4.50556606e-01 4.47113782e-01\n 4.43570852e-01 4.39929187e-01 4.36188906e-01 4.32351589e-01\n 4.28418010e-01 4.24389124e-01 4.20265675e-01 4.16050941e-01\n 4.11745727e-01 4.07351941e-01 4.02871162e-01 3.98306072e-01\n 3.93658578e-01 3.88930559e-01 3.84125113e-01 3.79244357e-01\n 3.74290347e-01 3.69266510e-01 3.64174694e-01 3.59018236e-01\n 3.53798985e-01 3.48520130e-01 3.43184799e-01 3.37795407e-01\n 3.32354188e-01 3.26864660e-01 3.21328700e-01 3.15750122e-01\n 3.10130686e-01 3.04472744e-01 2.98779726e-01 2.93053716e-01\n 2.87297159e-01 2.81512469e-01 2.75699914e-01 2.69865096e-01\n 2.64008760e-01 2.58132488e-01 2.52239227e-01 2.46330470e-01\n 2.40407631e-01 2.34473363e-01 2.28528112e-01 2.22574979e-01\n 2.16614813e-01 2.10648596e-01 2.04678640e-01 1.98705792e-01\n 1.92731246e-01 1.86756194e-01 1.80781245e-01 1.74808115e-01\n 1.68837577e-01 1.62869945e-01 1.56906918e-01 1.50948748e-01\n 1.44995674e-01 1.39049277e-01 1.33109152e-01 1.27176762e-01\n 1.21252112e-01 1.15334779e-01 1.09426662e-01 1.03527345e-01\n 9.76371169e-02 9.17562172e-02 8.58844295e-02 8.00228119e-02\n 7.41710812e-02 6.83289021e-02 6.24972582e-02 5.66757470e-02\n 5.08640260e-02 4.50629815e-02 3.92717533e-02 3.34912278e-02\n 2.77209580e-02 2.19605006e-02 1.62107050e-02 1.04710972e-02\n 4.74167336e-03 -9.77647491e-04 -6.68730866e-03 -1.23864710e-02\n 
-1.80755872e-02 -2.37551015e-02 -2.94241551e-02 -3.50831747e-02\n -4.07325551e-02 -4.63714413e-02 -5.20006232e-02 -5.76191917e-02\n -6.32274747e-02 -6.88258037e-02 -7.44131804e-02 -7.99898952e-02\n -8.55562314e-02 -9.11110565e-02 -9.66550335e-02 -1.02187037e-01\n -1.07707202e-01 -1.13215625e-01 -1.18711099e-01 -1.24193661e-01\n -1.29662856e-01 -1.35118231e-01 -1.40559763e-01 -1.45985991e-01\n -1.51396737e-01 -1.56791836e-01 -1.62169784e-01 -1.67530313e-01\n -1.72873020e-01 -1.78196415e-01 -1.83500364e-01 -1.88783243e-01\n -1.94044426e-01 -1.99283540e-01 -2.04498738e-01 -2.09689319e-01\n -2.14854285e-01 -2.19992444e-01 -2.25103065e-01 -2.30184183e-01\n -2.35234946e-01 -2.40254506e-01 -2.45240748e-01 -2.50192761e-01\n -2.55109549e-01 -2.59989053e-01 -2.64830559e-01 -2.69631952e-01\n -2.74392158e-01 -2.79110044e-01 -2.83783495e-01 -2.88411349e-01\n -2.92990953e-01 -2.97522813e-01 -3.02004904e-01 -3.06434989e-01\n -3.10812056e-01 -3.15134853e-01 -3.19401383e-01 -3.23610455e-01\n -3.27761054e-01 -3.31851542e-01 -3.35880518e-01 -3.39846402e-01\n -3.43748122e-01 -3.47584814e-01 -3.51354748e-01 -3.55056971e-01\n -3.58690828e-01 -3.62255663e-01 -3.65749508e-01 -3.69171619e-01\n -3.72521400e-01 -3.75798464e-01 -3.79001468e-01 -3.82130295e-01\n -3.85184258e-01 -3.88163716e-01 -3.91067326e-01 -3.93894911e-01\n -3.96646500e-01 -3.99322182e-01 -4.01921481e-01 -4.04444784e-01\n -4.06892240e-01 -4.09263104e-01 -4.11559254e-01 -4.13780123e-01\n -4.15926456e-01 -4.17998821e-01 -4.19997573e-01 -4.21923518e-01\n -4.23777461e-01 -4.25559759e-01 -4.27272141e-01 -4.28914905e-01\n -4.30489272e-01 -4.31996375e-01 -4.33437049e-01 -4.34812665e-01\n -4.36124444e-01 -4.37372893e-01 -4.38560247e-01 -4.39687639e-01\n -4.40756083e-01 -4.41767097e-01 -4.42722142e-01 -4.43622500e-01]\n(1000,)\n"
],
[
"print(true_y.cpu().numpy()[:, 0, 1])\nprint(true_y.cpu().numpy()[:, 0, 1].shape)",
"[ 0.00000000e+00 3.94356847e-01 7.74164319e-01 1.12732625e+00\n 1.42967021e+00 1.65460420e+00 1.79119289e+00 1.85255861e+00\n 1.86579359e+00 1.85661983e+00 1.84132659e+00 1.82563341e+00\n 1.80602193e+00 1.77153242e+00 1.70673847e+00 1.59728444e+00\n 1.43696153e+00 1.23155546e+00 9.95638132e-01 7.45095432e-01\n 4.91547823e-01 2.41059825e-01 -4.36256872e-03 -2.44540945e-01\n -4.79109496e-01 -7.05913901e-01 -9.19977844e-01 -1.11352074e+00\n -1.27756929e+00 -1.40502357e+00 -1.49363041e+00 -1.54689884e+00\n -1.57249558e+00 -1.57944846e+00 -1.57582855e+00 -1.56755471e+00\n -1.55810499e+00 -1.54865646e+00 -1.53833616e+00 -1.52449381e+00\n -1.50304353e+00 -1.46903825e+00 -1.41759014e+00 -1.34507012e+00\n -1.25021267e+00 -1.13458383e+00 -1.00210834e+00 -8.57897222e-01\n -7.06952691e-01 -5.53292513e-01 -3.99632037e-01 -2.47488126e-01\n -9.75035653e-02 5.01879081e-02 1.95609927e-01 3.38648796e-01\n 4.78800267e-01 6.14972830e-01 7.45391369e-01 8.67638052e-01\n 9.78899002e-01 1.07640958e+00 1.15801775e+00 1.22266912e+00\n 1.27065277e+00 1.30349994e+00 1.32363236e+00 1.33386922e+00\n 1.33700490e+00 1.33550894e+00 1.33135414e+00 1.32596302e+00\n 1.32021976e+00 1.31449497e+00 1.30869412e+00 1.30229807e+00\n 1.29441106e+00 1.28380752e+00 1.26901937e+00 1.24845862e+00\n 1.22057807e+00 1.18407345e+00 1.13808370e+00 1.08232653e+00\n 1.01718950e+00 9.43639815e-01 8.63082170e-01 7.77134001e-01\n 6.87417805e-01 5.95388770e-01 5.02240121e-01 4.08858150e-01\n 3.15855235e-01 2.23596841e-01 1.32269546e-01 4.19370867e-02\n -4.73991372e-02 -1.35752752e-01 -2.23099798e-01 -3.09343785e-01\n -3.94274354e-01 -4.77546155e-01 -5.58654785e-01 -6.36946440e-01\n -7.11625040e-01 -7.81816959e-01 -8.46623957e-01 -9.05216992e-01\n -9.56937432e-01 -1.00136483e+00 -1.03837383e+00 -1.06814718e+00\n -1.09114420e+00 -1.10804713e+00 -1.11968029e+00 -1.12693548e+00\n -1.13070011e+00 -1.13180709e+00 -1.13099110e+00 -1.12887836e+00\n -1.12597978e+00 -1.12266147e+00 -1.11918640e+00 -1.11569595e+00\n -1.11222911e+00 -1.10872304e+00 -1.10501790e+00 -1.10086942e+00\n -1.09595788e+00 -1.08988392e+00 -1.08220768e+00 -1.07243645e+00\n -1.06006885e+00 -1.04462326e+00 -1.02565694e+00 -1.00281000e+00\n -9.75831509e-01 -9.44604516e-01 -9.09158111e-01 -8.69666278e-01\n -8.26433063e-01 -7.79867947e-01 -7.30450153e-01 -6.78690255e-01\n -6.25101209e-01 -5.70166588e-01 -5.14317691e-01 -4.57925498e-01\n -4.01294917e-01 -3.44659984e-01 -2.88197815e-01 -2.32028618e-01\n -1.76230773e-01 -1.20847128e-01 -6.58965632e-02 -1.13804741e-02\n 4.27044779e-02 9.63632539e-02 1.49590209e-01 2.02367276e-01\n 2.54653603e-01 3.06383729e-01 3.57460499e-01 4.07752872e-01\n 4.57095683e-01 5.05287886e-01 5.52097619e-01 5.97264171e-01\n 6.40513420e-01 6.81562364e-01 7.20138073e-01 7.55988896e-01\n 7.88901269e-01 8.18714321e-01 8.45331728e-01 8.68716240e-01\n 8.88909101e-01 9.06011581e-01 9.20186758e-01 9.31643486e-01\n 9.40628290e-01 9.47411776e-01 9.52271700e-01 9.55487192e-01\n 9.57329571e-01 9.58050728e-01 9.57883894e-01 9.57035661e-01\n 9.55685735e-01 9.53986168e-01 9.52060997e-01 9.50009942e-01\n 9.47901249e-01 9.45780516e-01 9.43670034e-01 9.41568911e-01\n 9.39452291e-01 9.37276185e-01 9.34975982e-01 9.32468414e-01\n 9.29652750e-01 9.26412284e-01 9.22616839e-01 9.18123662e-01\n 9.12782490e-01 9.06438887e-01 8.98934841e-01 8.90120268e-01\n 8.79851162e-01 8.67999911e-01 8.54457080e-01 8.39140713e-01\n 8.21996212e-01 8.02999914e-01 7.82161295e-01 7.59524405e-01\n 7.35162199e-01 7.09177256e-01 6.81693077e-01 6.52853489e-01\n 6.22812808e-01 5.91730237e-01 5.59767723e-01 
5.27081251e-01\n 4.93819863e-01 4.60120201e-01 4.26104337e-01 3.91882211e-01\n 3.57546896e-01 3.23175997e-01 2.88832843e-01 2.54570872e-01\n 2.20426798e-01 1.86430022e-01 1.52599573e-01 1.18949249e-01\n 8.54854882e-02 5.22109084e-02 1.91248003e-02 -1.37730278e-02\n -4.64855060e-02 -7.90119916e-02 -1.11352146e-01 -1.43500000e-01\n -1.75447136e-01 -2.07179412e-01 -2.38677308e-01 -2.69913077e-01\n -3.00852895e-01 -3.31455082e-01 -3.61671060e-01 -3.91441524e-01\n -4.20701951e-01 -4.49380398e-01 -4.77398485e-01 -5.04671633e-01\n -5.31115174e-01 -5.56639493e-01 -5.81158102e-01 -6.04586661e-01\n -6.26846075e-01 -6.47865474e-01 -6.67587996e-01 -6.85960829e-01\n -7.02953100e-01 -7.18543947e-01 -7.32729316e-01 -7.45521724e-01\n -7.56947815e-01 -7.67047107e-01 -7.75874078e-01 -7.83492446e-01\n -7.89974511e-01 -7.95399964e-01 -7.99851656e-01 -8.03417981e-01\n -8.06186199e-01 -8.08242261e-01 -8.09672117e-01 -8.10557067e-01\n -8.10971677e-01 -8.10990930e-01 -8.10680151e-01 -8.10099900e-01\n -8.09306860e-01 -8.08346391e-01 -8.07261407e-01 -8.06087911e-01\n -8.04855824e-01 -8.03588748e-01 -8.02304387e-01 -8.01020026e-01\n -7.99737275e-01 -7.98460901e-01 -7.97185302e-01 -7.95905709e-01\n -7.94608831e-01 -7.93277442e-01 -7.91889727e-01 -7.90420771e-01\n -7.88840830e-01 -7.87116468e-01 -7.85210729e-01 -7.83084750e-01\n -7.80694842e-01 -7.77996242e-01 -7.74942100e-01 -7.71485150e-01\n -7.67575562e-01 -7.63165236e-01 -7.58206248e-01 -7.52651274e-01\n -7.46458769e-01 -7.39587963e-01 -7.32000589e-01 -7.23668516e-01\n -7.14565039e-01 -7.04674780e-01 -6.93981707e-01 -6.82482958e-01\n -6.70181632e-01 -6.57087326e-01 -6.43218517e-01 -6.28598690e-01\n -6.13258362e-01 -5.97234190e-01 -5.80565572e-01 -5.63299239e-01\n -5.45479953e-01 -5.27158022e-01 -5.08384466e-01 -4.89208102e-01\n -4.69679326e-01 -4.49846774e-01 -4.29755688e-01 -4.09451425e-01\n -3.88971299e-01 -3.68356168e-01 -3.47640336e-01 -3.26853096e-01\n -3.06023151e-01 -2.85175234e-01 -2.64328837e-01 -2.43503630e-01\n -2.22712964e-01 -2.01970309e-01 -1.81286454e-01 -1.60667866e-01\n -1.40121073e-01 -1.19651586e-01 -9.92606506e-02 -7.89517239e-02\n -5.87235503e-02 -3.85784395e-02 -1.85162202e-02 1.46510440e-03\n 2.13654116e-02 4.11843099e-02 6.09234236e-02 8.05808008e-02\n 1.00156903e-01 1.19649068e-01 1.39053613e-01 1.58368289e-01\n 1.77586645e-01 1.96701467e-01 2.15705916e-01 2.34588757e-01\n 2.53340065e-01 2.71945894e-01 2.90390640e-01 3.08658928e-01\n 3.26730102e-01 3.44585419e-01 3.62203330e-01 3.79559308e-01\n 3.96630794e-01 4.13390428e-01 4.29811269e-01 4.45867687e-01\n 4.61529642e-01 4.76771861e-01 4.91567671e-01 5.05889952e-01\n 5.19713163e-01 5.33016026e-01 5.45775831e-01 5.57974875e-01\n 5.69596112e-01 5.80625713e-01 5.91054380e-01 6.00877762e-01\n 6.10088825e-01 6.18689001e-01 6.26682937e-01 6.34076774e-01\n 6.40881121e-01 6.47108495e-01 6.52776122e-01 6.57902122e-01\n 6.62507772e-01 6.66614592e-01 6.70248091e-01 6.73433602e-01\n 6.76197588e-01 6.78565681e-01 6.80567086e-01 6.82228267e-01\n 6.83576524e-01 6.84637070e-01 6.85437739e-01 6.86003089e-01\n 6.86357617e-01 6.86522603e-01 6.86522007e-01 6.86375618e-01\n 6.86102986e-01 6.85721934e-01 6.85248792e-01 6.84699118e-01\n 6.84086621e-01 6.83423638e-01 6.82721376e-01 6.81989491e-01\n 6.81236207e-01 6.80468559e-01 6.79692626e-01 6.78912640e-01\n 6.78132415e-01 6.77353501e-01 6.76577210e-01 6.75803423e-01\n 6.75030112e-01 6.74255908e-01 6.73477232e-01 6.72689557e-01\n 6.71885908e-01 6.71062052e-01 6.70209587e-01 6.69320047e-01\n 6.68386400e-01 6.67395890e-01 6.66338742e-01 6.65203512e-01\n 
6.63977981e-01 6.62649572e-01 6.61201775e-01 6.59624636e-01\n 6.57901943e-01 6.56019211e-01 6.53961301e-01 6.51713014e-01\n 6.49258733e-01 6.46584094e-01 6.43673897e-01 6.40513778e-01\n 6.37088239e-01 6.33385599e-01 6.29392624e-01 6.25096917e-01\n 6.20488048e-01 6.15558267e-01 6.10296428e-01 6.04696810e-01\n 5.98753452e-01 5.92462957e-01 5.85822999e-01 5.78832328e-01\n 5.71492314e-01 5.63805938e-01 5.55776656e-01 5.47410905e-01\n 5.38715363e-01 5.29698968e-01 5.20374537e-01 5.10749459e-01\n 5.00837624e-01 4.90652829e-01 4.80207920e-01 4.69518960e-01\n 4.58599776e-01 4.47467029e-01 4.36135858e-01 4.24621850e-01\n 4.12940681e-01 4.01108086e-01 3.89139563e-01 3.77048790e-01\n 3.64849776e-01 3.52556378e-01 3.40181977e-01 3.27738434e-01\n 3.15236956e-01 3.02689075e-01 2.90103823e-01 2.77491391e-01\n 2.64859736e-01 2.52216458e-01 2.39569351e-01 2.26923734e-01\n 2.14285791e-01 2.01660410e-01 1.89051822e-01 1.76464409e-01\n 1.63900450e-01 1.51363283e-01 1.38855651e-01 1.26378670e-01\n 1.13934040e-01 1.01523511e-01 8.91470015e-02 7.68061206e-02\n 6.45003468e-02 5.22301793e-02 3.99963893e-02 2.77978983e-02\n 1.56350676e-02 3.50819598e-03 -8.58391542e-03 -2.06403732e-02\n -3.26622799e-02 -4.46491912e-02 -5.66005819e-02 -6.85170665e-02\n -8.03979561e-02 -9.22422037e-02 -1.04049936e-01 -1.15819253e-01\n -1.27549693e-01 -1.39239386e-01 -1.50886029e-01 -1.62488267e-01\n -1.74043208e-01 -1.85547546e-01 -1.96999237e-01 -2.08393335e-01\n -2.19726920e-01 -2.30995193e-01 -2.42192924e-01 -2.53315777e-01\n -2.64357865e-01 -2.75312692e-01 -2.86174834e-01 -2.96936750e-01\n -3.07592779e-01 -3.18134129e-01 -3.28553438e-01 -3.38842481e-01\n -3.48994225e-01 -3.58999282e-01 -3.68850023e-01 -3.78537208e-01\n -3.88053328e-01 -3.97388995e-01 -4.06535506e-01 -4.15485412e-01\n -4.24230158e-01 -4.32761133e-01 -4.41072285e-01 -4.49155599e-01\n -4.57005024e-01 -4.64614123e-01 -4.71977055e-01 -4.79088783e-01\n -4.85945761e-01 -4.92543697e-01 -4.98880118e-01 -5.04952788e-01\n -5.10760486e-01 -5.16302466e-01 -5.21578729e-01 -5.26590645e-01\n -5.31339765e-01 -5.35828412e-01 -5.40060163e-01 -5.44038653e-01\n -5.47768891e-01 -5.51254451e-01 -5.54502666e-01 -5.57519257e-01\n -5.60310841e-01 -5.62884271e-01 -5.65248132e-01 -5.67408919e-01\n -5.69375098e-01 -5.71155071e-01 -5.72757125e-01 -5.74187160e-01\n -5.75457633e-01 -5.76575220e-01 -5.77548802e-01 -5.78386545e-01\n -5.79096854e-01 -5.79688370e-01 -5.80168128e-01 -5.80544591e-01\n -5.80825150e-01 -5.81017196e-01 -5.81127763e-01 -5.81163943e-01\n -5.81132233e-01 -5.81038833e-01 -5.80889940e-01 -5.80690920e-01\n -5.80447316e-01 -5.80164313e-01 -5.79846323e-01 -5.79498887e-01\n -5.79124749e-01 -5.78728318e-01 -5.78313053e-01 -5.77882171e-01\n -5.77438533e-01 -5.76986730e-01 -5.76526105e-01 -5.76059878e-01\n -5.75589895e-01 -5.75117528e-01 -5.74644148e-01 -5.74170828e-01\n -5.73698044e-01 -5.73226035e-01 -5.72755277e-01 -5.72285593e-01\n -5.71816742e-01 -5.71348131e-01 -5.70878088e-01 -5.70406675e-01\n -5.69932282e-01 -5.69453359e-01 -5.68967879e-01 -5.68475068e-01\n -5.67971408e-01 -5.67454875e-01 -5.66923082e-01 -5.66373229e-01\n -5.65802336e-01 -5.65207362e-01 -5.64584136e-01 -5.63930213e-01\n -5.63241661e-01 -5.62514663e-01 -5.61744928e-01 -5.60928583e-01\n -5.60061455e-01 -5.59139192e-01 -5.58157206e-01 -5.57110965e-01\n -5.55995882e-01 -5.54807365e-01 -5.53540647e-01 -5.52191019e-01\n -5.50753832e-01 -5.49224496e-01 -5.47597826e-01 -5.45869350e-01\n -5.44034481e-01 -5.42088628e-01 -5.40027320e-01 -5.37848830e-01\n -5.35545528e-01 -5.33114970e-01 -5.30553579e-01 
-5.27857900e-01\n -5.25024712e-01 -5.22051394e-01 -5.18932998e-01 -5.15670598e-01\n -5.12260854e-01 -5.08701742e-01 -5.04992545e-01 -5.01131892e-01\n -4.97118324e-01 -4.92953181e-01 -4.88635540e-01 -4.84166622e-01\n -4.79546964e-01 -4.74777550e-01 -4.69858825e-01 -4.64795142e-01\n -4.59587276e-01 -4.54238594e-01 -4.48751152e-01 -4.43128794e-01\n -4.37374830e-01 -4.31492507e-01 -4.25486505e-01 -4.19360578e-01\n -4.13118958e-01 -4.06766385e-01 -4.00306672e-01 -3.93745571e-01\n -3.87087345e-01 -3.80336314e-01 -3.73498321e-01 -3.66578907e-01\n -3.59580696e-01 -3.52510303e-01 -3.45371425e-01 -3.38169962e-01\n -3.30909938e-01 -3.23595941e-01 -3.16232651e-01 -3.08824182e-01\n -3.01374197e-01 -2.93887854e-01 -2.86367923e-01 -2.78819561e-01\n -2.71245301e-01 -2.63648242e-01 -2.56032705e-01 -2.48401299e-01\n -2.40756884e-01 -2.33102188e-01 -2.25439191e-01 -2.17771471e-01\n -2.10100681e-01 -2.02428326e-01 -1.94757491e-01 -1.87089399e-01\n -1.79425091e-01 -1.71767205e-01 -1.64115921e-01 -1.56473592e-01\n -1.48840815e-01 -1.41217962e-01 -1.33607060e-01 -1.26008347e-01\n -1.18422426e-01 -1.10849924e-01 -1.03290737e-01 -9.57463160e-02\n -8.82166550e-02 -8.07013363e-02 -7.32017532e-02 -6.57174960e-02\n -5.82481213e-02 -5.07948548e-02 -4.33566086e-02 -3.59342806e-02\n -2.85276584e-02 -2.11359933e-02 -1.37603646e-02 -6.40016049e-03\n 9.44667379e-04 8.27419013e-03 1.55889448e-02 2.28879508e-02\n 3.01717483e-02 3.74408923e-02 4.46942337e-02 5.19322753e-02\n 5.91555126e-02 6.63626567e-02 7.35546723e-02 8.07302371e-02\n 8.78896341e-02 9.50330943e-02 1.02159128e-01 1.09267794e-01\n 1.16358675e-01 1.23431168e-01 1.30485162e-01 1.37518853e-01\n 1.44532010e-01 1.51524246e-01 1.58493593e-01 1.65439680e-01\n 1.72361657e-01 1.79257348e-01 1.86126560e-01 1.92966834e-01\n 1.99777216e-01 2.06556663e-01 2.13302478e-01 2.20013499e-01\n 2.26688355e-01 2.33324155e-01 2.39919901e-01 2.46472552e-01\n 2.52980411e-01 2.59441793e-01 2.65853375e-01 2.72213370e-01\n 2.78519273e-01 2.84768224e-01 2.90958852e-01 2.97087342e-01\n 3.03151548e-01 3.09149295e-01 3.15077007e-01 3.20932537e-01\n 3.26713532e-01 3.32417101e-01 3.38040531e-01 3.43580723e-01\n 3.49035501e-01 3.54402810e-01 3.59679282e-01 3.64862978e-01\n 3.69950980e-01 3.74941945e-01 3.79833639e-01 3.84623319e-01\n 3.89309376e-01 3.93890351e-01 3.98363858e-01 4.02728707e-01\n 4.06983852e-01 4.11127239e-01 4.15158540e-01 4.19076085e-01\n 4.22879398e-01 4.26568300e-01 4.30142403e-01 4.33600545e-01\n 4.36943203e-01 4.40170437e-01 4.43282664e-01 4.46279973e-01\n 4.49163049e-01 4.51933384e-01 4.54590827e-01 4.57136929e-01\n 4.59573179e-01 4.61900473e-01 4.64120686e-01 4.66235191e-01\n 4.68246251e-01 4.70155329e-01 4.71964419e-01 4.73675758e-01\n 4.75291640e-01 4.76814121e-01 4.78245795e-01 4.79588538e-01\n 4.80845451e-01 4.82018858e-01 4.83111233e-01 4.84125227e-01\n 4.85063434e-01 4.85928506e-01 4.86723125e-01 4.87449944e-01\n 4.88111585e-01 4.88710761e-01 4.89249974e-01 4.89731938e-01\n 4.90159154e-01 4.90534186e-01 4.90859687e-01 4.91137803e-01\n 4.91371214e-01 4.91562247e-01 4.91713166e-01 4.91826266e-01\n 4.91903782e-01 4.91947830e-01 4.91960555e-01 4.91944045e-01\n 4.91900116e-01 4.91830796e-01 4.91737872e-01 4.91623074e-01\n 4.91488189e-01 4.91334856e-01 4.91164565e-01 4.90978807e-01\n 4.90779072e-01 4.90565091e-01 4.90340978e-01 4.90106612e-01\n 4.89863187e-01 4.89611804e-01 4.89353418e-01 4.89089012e-01\n 4.88819450e-01 4.88545567e-01 4.88268077e-01 4.87987548e-01\n 4.87704784e-01 4.87420231e-01 4.87134457e-01 4.86847788e-01\n 4.86560762e-01 4.86273468e-01 4.85986322e-01 
4.85699415e-01\n 4.85413164e-01 4.85127181e-01 4.84841675e-01 4.84556615e-01\n 4.84271944e-01 4.83987421e-01 4.83702928e-01 4.83418137e-01\n 4.83133346e-01 4.82847273e-01 4.82559800e-01 4.82270569e-01\n 4.81979072e-01 4.81684685e-01 4.81386840e-01 4.81084883e-01\n 4.80778098e-01 4.80465770e-01 4.80147004e-01 4.79821116e-01\n 4.79487121e-01 4.79144126e-01 4.78791237e-01 4.78427410e-01\n 4.78051543e-01 4.77662623e-01 4.77257878e-01 4.76838976e-01\n 4.76403385e-01 4.75950003e-01 4.75477457e-01 4.74984556e-01\n 4.74469930e-01 4.73932266e-01 4.73370194e-01 4.72782344e-01\n 4.72167522e-01 4.71523792e-01 4.70850110e-01 4.70144868e-01\n 4.69406635e-01 4.68633890e-01 4.67825174e-01 4.66979116e-01\n 4.66094166e-01 4.65168834e-01 4.64201748e-01 4.63191390e-01\n 4.62136239e-01 4.61034954e-01 4.59886074e-01 4.58688349e-01\n 4.57440376e-01 4.56140637e-01 4.54788178e-01 4.53381509e-01\n 4.51920331e-01 4.50402170e-01 4.48826283e-01 4.47191983e-01\n 4.45498019e-01 4.43743438e-01 4.41927612e-01 4.40049589e-01\n 4.38108534e-01 4.36103970e-01 4.34035003e-01 4.31901425e-01\n 4.29702669e-01 4.27438140e-01 4.25107926e-01 4.22711611e-01\n 4.20248985e-01 4.17720944e-01 4.15126026e-01 4.12465245e-01\n 4.09738541e-01 4.06946123e-01 4.04088646e-01 4.01166439e-01\n 3.98179978e-01 3.95130605e-01 3.92017573e-01 3.88842553e-01\n 3.85606259e-01 3.82309288e-01 3.78953010e-01 3.75538290e-01\n 3.72065872e-01 3.68537366e-01 3.64953637e-01 3.61316413e-01\n 3.57626617e-01 3.53885323e-01 3.50094467e-01 3.46255124e-01\n 3.42368692e-01 3.38436842e-01 3.34460467e-01 3.30441833e-01\n 3.26382101e-01 3.22282493e-01 3.18145096e-01 3.13971132e-01\n 3.09761912e-01 3.05518955e-01 3.01244378e-01 2.96938926e-01\n 2.92604864e-01 2.88243175e-01 2.83855081e-01 2.79442728e-01]\n(1000,)\n"
],
[
"print(np.mgrid[-2:2:21j, -2:2:21j].shape)\nnp.mgrid[-2:2:21j, -2:2:21j][0]",
"(2, 21, 21)\n"
],
[
"class ODEFunc(nn.Module):\n\n def __init__(self):\n super(ODEFunc, self).__init__()\n # https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html\n self.net = nn.Sequential(\n nn.Linear(2, 50),\n nn.Tanh(),\n nn.Linear(50, 2),\n )\n\n for m in self.net.modules():\n if isinstance(m, nn.Linear):\n nn.init.normal_(m.weight, mean=0, std=0.1) # https://pytorch.org/cppdocs/api/function_namespacetorch_1_1nn_1_1init_1a105c2a8ef81c6faa82a01cf35ce9f3b1.html\n nn.init.constant_(m.bias, val=0) # https://pytorch.org/cppdocs/api/function_namespacetorch_1_1nn_1_1init_1a9c886724aac3a487553dc0a406565c83.html\n\n def forward(self, t, y):\n return self.net(y**3)\nfunc = ODEFunc().to(device)",
"_____no_output_____"
],
[
"y, x = np.mgrid[-2:2:21j, -2:2:21j] \nnp.stack([x, y], -1).reshape(21 * 21, 2)",
"_____no_output_____"
],
[
"print(func(0, torch.Tensor(np.stack([x, y], -1).reshape(21 * 21, 2)).to(device)).cpu().detach().numpy().shape)\nfunc(0, torch.Tensor(np.stack([x, y], -1).reshape(21 * 21, 2)).to(device)).cpu().detach().numpy()\n",
"(441, 2)\n"
],
[
"fig = plt.figure(figsize=(12, 4), facecolor='white')\nax_traj = fig.add_subplot(131, frameon=False)\nax_traj = fig.add_subplot(131, frameon=False)\nax_phase = fig.add_subplot(132, frameon=False)\nax_vecfield = fig.add_subplot(133, frameon=False)\nax_traj.cla()\nax_traj.set_title('Trajectories')\nax_traj.set_xlabel('t')\nax_traj.set_ylabel('x,y')\nax_traj.plot(t.cpu().numpy(), true_y.cpu().numpy()[:, 0, 0], t.cpu().numpy(), true_y.cpu().numpy()[:, 0, 1], 'g-')\n# ax_traj.plot(t.cpu().numpy(), pred_y.cpu().numpy()[:, 0, 0], '--', t.cpu().numpy(), pred_y.cpu().numpy()[:, 0, 1], 'b--')\nax_traj.set_xlim(t.cpu().min(), t.cpu().max())\nax_traj.set_ylim(-2, 2)\nax_traj.legend()\n\nax_phase.cla()\nax_phase.set_title('Phase Portrait')\nax_phase.set_xlabel('x')\nax_phase.set_ylabel('y')\nax_phase.plot(true_y.cpu().numpy()[:, 0, 0], true_y.cpu().numpy()[:, 0, 1], 'g-')\n# ax_phase.plot(pred_y.cpu().numpy()[:, 0, 0], pred_y.cpu().numpy()[:, 0, 1], 'b--')\nax_phase.set_xlim(-2, 2)\nax_phase.set_ylim(-2, 2)\n\nax_vecfield.cla()\nax_vecfield.set_title('Learned Vector Field')\nax_vecfield.set_xlabel('x')\nax_vecfield.set_ylabel('y')\n\n# https://stackoverflow.com/questions/32208359/is-there-a-multi-dimensional-version-of-arange-linspace-in-numpy\n# we need 21 points between -2 and 2\ny, x = np.mgrid[-2:2:21j, -2:2:21j] # https://numpy.org/doc/stable/reference/generated/numpy.mgrid.html\ndydt = func(0, torch.Tensor(np.stack([x, y], -1).reshape(21 * 21, 2)).to(device)).cpu().detach().numpy()\nmag = np.sqrt(dydt[:, 0]**2 + dydt[:, 1]**2).reshape(-1, 1)\ndydt = (dydt / mag)\ndydt = dydt.reshape(21, 21, 2)\n\nax_vecfield.streamplot(x, y, dydt[:, :, 0], dydt[:, :, 1], color=\"black\")\nax_vecfield.set_xlim(-2, 2)\nax_vecfield.set_ylim(-2, 2)\n\nfig.tight_layout()\nplt.show()\n\n",
"No handles with labels found to put in legend.\n/tmp/ipykernel_3278/466705120.py:35: RuntimeWarning: invalid value encountered in true_divide\n dydt = (dydt / mag)\n"
],
[
"def visualize(true_y, pred_y, odefunc, itr):\n\n if viz:\n\n ax_traj.cla()\n ax_traj.set_title('Trajectories')\n ax_traj.set_xlabel('t')\n ax_traj.set_ylabel('x,y')\n ax_traj.plot(t.cpu().numpy(), true_y.cpu().numpy()[:, 0, 0], t.cpu().numpy(), true_y.cpu().numpy()[:, 0, 1], 'g-')\n ax_traj.plot(t.cpu().numpy(), pred_y.cpu().numpy()[:, 0, 0], '--', t.cpu().numpy(), pred_y.cpu().numpy()[:, 0, 1], 'b--')\n ax_traj.set_xlim(t.cpu().min(), t.cpu().max())\n ax_traj.set_ylim(-2, 2)\n ax_traj.legend()\n\n ax_phase.cla()\n ax_phase.set_title('Phase Portrait')\n ax_phase.set_xlabel('x')\n ax_phase.set_ylabel('y')\n ax_phase.plot(true_y.cpu().numpy()[:, 0, 0], true_y.cpu().numpy()[:, 0, 1], 'g-')\n ax_phase.plot(pred_y.cpu().numpy()[:, 0, 0], pred_y.cpu().numpy()[:, 0, 1], 'b--')\n ax_phase.set_xlim(-2, 2)\n ax_phase.set_ylim(-2, 2)\n\n ax_vecfield.cla()\n ax_vecfield.set_title('Learned Vector Field')\n ax_vecfield.set_xlabel('x')\n ax_vecfield.set_ylabel('y')\n\n y, x = np.mgrid[-2:2:21j, -2:2:21j]\n dydt = odefunc(0, torch.Tensor(np.stack([x, y], -1).reshape(21 * 21, 2)).to(device)).cpu().detach().numpy()\n mag = np.sqrt(dydt[:, 0]**2 + dydt[:, 1]**2).reshape(-1, 1)\n dydt = (dydt / mag)\n dydt = dydt.reshape(21, 21, 2)\n\n ax_vecfield.streamplot(x, y, dydt[:, :, 0], dydt[:, :, 1], color=\"black\")\n ax_vecfield.set_xlim(-2, 2)\n ax_vecfield.set_ylim(-2, 2)\n\n fig.tight_layout()\n plt.savefig('png/{:03d}'.format(itr))\n plt.draw()\n plt.pause(0.001)",
"_____no_output_____"
],
[
"class RunningAverageMeter(object):\n \"\"\"Computes and stores the average and current value\"\"\"\n\n def __init__(self, momentum=0.99):\n self.momentum = momentum\n self.reset()\n\n def reset(self):\n self.val = None\n self.avg = 0\n\n def update(self, val):\n if self.val is None:\n self.avg = val\n else:\n self.avg = self.avg * self.momentum + val * (1 - self.momentum)\n self.val = val\n",
"_____no_output_____"
],
[
"if __name__ == '__main__':\n\n ii = 0\n\n func = ODEFunc().to(device)\n \n optimizer = optim.RMSprop(func.parameters(), lr=1e-3)\n end = time.time()\n\n time_meter = RunningAverageMeter(0.97)\n \n loss_meter = RunningAverageMeter(0.97)\n\n for itr in range(1, niters + 1):\n optimizer.zero_grad()\n batch_y0, batch_t, batch_y = get_batch()\n pred_y = odeint(func, batch_y0, batch_t).to(device)\n loss = torch.mean(torch.abs(pred_y - batch_y))\n loss.backward()\n optimizer.step()\n\n time_meter.update(time.time() - end)\n loss_meter.update(loss.item())\n\n if itr % test_freq == 0:\n with torch.no_grad():\n pred_y = odeint(func, true_y0, t)\n loss = torch.mean(torch.abs(pred_y - true_y))\n print('Iter {:04d} | Total Loss {:.6f}'.format(itr, loss.item()))\n visualize(true_y, pred_y, func, ii)\n ii += 1\n\n end = time.time()\n",
"No handles with labels found to put in legend.\n"
],
[
"batch_y0, batch_t, batch_y = get_batch()\npred_y = odeint(func, batch_y0, batch_t).to(device)",
"_____no_output_____"
],
[
"print(batch_y.shape)\nprint(pred_y.shape)",
"torch.Size([10, 20, 1, 2])\ntorch.Size([10, 20, 1, 2])\n"
],
[
"batch_y.cpu().numpy().shape",
"_____no_output_____"
],
[
"print(batch_y.cpu().numpy()[9, :, 0, 0].shape)\nbatch_y.cpu().numpy()[9, :, 0, 0]",
"(20,)\n"
],
[
"t = torch.linspace(0, batch_size, batch_size)\nprint(t.shape)",
"torch.Size([20])\n"
],
[
"t.cpu().numpy().shape",
"_____no_output_____"
],
[
"pred_y.cpu().numpy()[9, :, 0, 0].shape",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(12, 4), facecolor='white')\nax_traj = fig.add_subplot(131, frameon=False)\nax_traj = fig.add_subplot(131, frameon=False)\nax_phase = fig.add_subplot(132, frameon=False)\nax_vecfield = fig.add_subplot(133, frameon=False)\nax_traj.cla()\nax_traj.set_title('Trajectories')\nax_traj.set_xlabel('t')\nax_traj.set_ylabel('x,y')\nax_traj.plot(t.cpu().numpy(), batch_y.cpu().numpy()[9, :, 0, 0], t.cpu().numpy(), batch_y.cpu().numpy()[9, :, 0, 1], 'g-')\nax_traj.plot(t.cpu().numpy(), pred_y.cpu().detach().numpy()[9, :, 0, 0], '--', t.cpu().numpy(), pred_y.cpu().detach().numpy()[9, :, 0, 1], 'b--')\nax_traj.set_xlim(t.cpu().min(), t.cpu().max())\nax_traj.set_ylim(-2, 2)\nax_traj.legend()\n\nax_phase.cla()\nax_phase.set_title('Phase Portrait')\nax_phase.set_xlabel('x')\nax_phase.set_ylabel('y')\nax_phase.plot(batch_y.cpu().numpy()[9, :, 0, 0], batch_y.cpu().numpy()[9, :, 0, 1], 'g-')\nax_phase.plot(pred_y.cpu().detach().numpy()[9, :, 0, 0], pred_y.cpu().detach().numpy()[9, :, 0, 1], 'b--')\nax_phase.set_xlim(-2, 2)\nax_phase.set_ylim(-2, 2)\n\nax_vecfield.cla()\nax_vecfield.set_title('Learned Vector Field')\nax_vecfield.set_xlabel('x')\nax_vecfield.set_ylabel('y')\n\n# # https://stackoverflow.com/questions/32208359/is-there-a-multi-dimensional-version-of-arange-linspace-in-numpy\n# # we need 21 points between -2 and 2\n# y, x = np.mgrid[-2:2:21j, -2:2:21j] # https://numpy.org/doc/stable/reference/generated/numpy.mgrid.html\n# dydt = func(0, torch.Tensor(np.stack([x, y], -1).reshape(21 * 21, 2)).to(device)).cpu().detach().numpy()\n# mag = np.sqrt(dydt[:, 0]**2 + dydt[:, 1]**2).reshape(-1, 1)\n# dydt = (dydt / mag)\n# dydt = dydt.reshape(21, 21, 2)\n\n# ax_vecfield.streamplot(x, y, dydt[:, :, 0], dydt[:, :, 1], color=\"black\")\n# ax_vecfield.set_xlim(-2, 2)\n# ax_vecfield.set_ylim(-2, 2)\n\nfig.tight_layout()\nplt.show()\n\n",
"No handles with labels found to put in legend.\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf903c0a05b1c8f10651f7f7337344b911c3bd1 | 8,541 | ipynb | Jupyter Notebook | docs/source/user_guide/clean/clean_cl_rut.ipynb | jwa345/dataprep | cb386934b0e151ef538c3873ae8fa37bb8bd1513 | [
"MIT"
] | 1 | 2022-02-04T00:58:04.000Z | 2022-02-04T00:58:04.000Z | docs/source/user_guide/clean/clean_cl_rut.ipynb | jwa345/dataprep | cb386934b0e151ef538c3873ae8fa37bb8bd1513 | [
"MIT"
] | null | null | null | docs/source/user_guide/clean/clean_cl_rut.ipynb | jwa345/dataprep | cb386934b0e151ef538c3873ae8fa37bb8bd1513 | [
"MIT"
] | null | null | null | 24.264205 | 361 | 0.539398 | [
[
[
"# Chile RUT/RUN Numbers",
"_____no_output_____"
],
[
"## Introduction",
"_____no_output_____"
],
[
"The function `clean_cl_rut()` cleans a column containing Chile RUT/RUN number (RUT) strings, and standardizes them in a given format. The function `validate_cl_rut()` validates either a single RUT strings, a column of RUT strings or a DataFrame of RUT strings, returning `True` if the value is valid, and `False` otherwise.",
"_____no_output_____"
],
[
"RUT strings can be converted to the following formats via the `output_format` parameter:\n\n* `compact`: only number strings without any seperators or whitespace, like \"125319092\"\n* `standard`: RUT strings with proper whitespace in the proper places, like \"12.531.909-2\"\n\nInvalid parsing is handled with the `errors` parameter:\n\n* `coerce` (default): invalid parsing will be set to NaN\n* `ignore`: invalid parsing will return the input\n* `raise`: invalid parsing will raise an exception\n\nThe following sections demonstrate the functionality of `clean_cl_rut()` and `validate_cl_rut()`. ",
"_____no_output_____"
],
[
"### An example dataset containing RUT strings",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\ndf = pd.DataFrame(\n {\n \"rut\": [\n \"125319092\",\n \"76086A28-5\",\n \"51824753556\",\n \"51 824 753 556\",\n \"hello\",\n np.nan,\n \"NULL\"\n ], \n \"address\": [\n \"123 Pine Ave.\",\n \"main st\",\n \"1234 west main heights 57033\",\n \"apt 1 789 s maple rd manhattan\",\n \"robie house, 789 north main street\",\n \"(staples center) 1111 S Figueroa St, Los Angeles\",\n \"hello\",\n ]\n }\n)\ndf",
"_____no_output_____"
]
],
[
[
"## 1. Default `clean_cl_rut`\n\nBy default, `clean_cl_rut` will clean rut strings and output them in the standard format with proper separators.",
"_____no_output_____"
]
],
[
[
"from dataprep.clean import clean_cl_rut\nclean_cl_rut(df, column = \"rut\")",
"_____no_output_____"
]
],
[
[
"## 2. Output formats",
"_____no_output_____"
],
[
"This section demonstrates the output parameter.",
"_____no_output_____"
],
[
"### `standard` (default)",
"_____no_output_____"
]
],
[
[
"clean_cl_rut(df, column = \"rut\", output_format=\"standard\")",
"_____no_output_____"
]
],
[
[
"### `compact`",
"_____no_output_____"
]
],
[
[
"clean_cl_rut(df, column = \"rut\", output_format=\"compact\")",
"_____no_output_____"
]
],
[
[
"## 3. `inplace` parameter\n\nThis deletes the given column from the returned DataFrame. \nA new column containing cleaned RUT strings is added with a title in the format `\"{original title}_clean\"`.",
"_____no_output_____"
]
],
[
[
"clean_cl_rut(df, column=\"rut\", inplace=True)",
"_____no_output_____"
]
],
[
[
"## 4. `errors` parameter",
"_____no_output_____"
],
[
"### `coerce` (default)",
"_____no_output_____"
]
],
[
[
"clean_cl_rut(df, \"rut\", errors=\"coerce\")",
"_____no_output_____"
]
],
[
[
"### `ignore`",
"_____no_output_____"
]
],
[
[
"clean_cl_rut(df, \"rut\", errors=\"ignore\")",
"_____no_output_____"
]
],
[
[
"## 4. `validate_cl_rut()`",
"_____no_output_____"
],
[
"`validate_cl_rut()` returns `True` when the input is a valid RUT. Otherwise it returns `False`.\n\nThe input of `validate_cl_rut()` can be a string, a Pandas DataSeries, a Dask DataSeries, a Pandas DataFrame and a dask DataFrame.\n\nWhen the input is a string, a Pandas DataSeries or a Dask DataSeries, user doesn't need to specify a column name to be validated. \n\nWhen the input is a Pandas DataFrame or a dask DataFrame, user can both specify or not specify a column name to be validated. If user specify the column name, `validate_cl_rut()` only returns the validation result for the specified column. If user doesn't specify the column name, `validate_cl_rut()` returns the validation result for the whole DataFrame.",
"_____no_output_____"
]
],
[
[
"from dataprep.clean import validate_cl_rut\nprint(validate_cl_rut(\"125319092\"))\nprint(validate_cl_rut(\"76086A28-5\"))\nprint(validate_cl_rut(\"51824753556\"))\nprint(validate_cl_rut(\"51 824 753 556\"))\nprint(validate_cl_rut(\"hello\"))\nprint(validate_cl_rut(np.nan))\nprint(validate_cl_rut(\"NULL\"))",
"_____no_output_____"
]
],
[
[
"### Series",
"_____no_output_____"
]
],
[
[
"validate_cl_rut(df[\"rut\"])",
"_____no_output_____"
]
],
[
[
"### DataFrame + Specify Column",
"_____no_output_____"
]
],
[
[
"validate_cl_rut(df, column=\"rut\")",
"_____no_output_____"
]
],
[
[
"### Only DataFrame",
"_____no_output_____"
]
],
[
[
"validate_cl_rut(df)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf90a99f49690aecc7e8f3d2f56f5c327e12d70 | 10,554 | ipynb | Jupyter Notebook | YouTubeAPI-Examples/8. What time do they upload videos.ipynb | hazratali-bit/30-lines-code.py | 5d678f28b066391aa1c70dcca62324e7678ddd0e | [
"MIT"
] | null | null | null | YouTubeAPI-Examples/8. What time do they upload videos.ipynb | hazratali-bit/30-lines-code.py | 5d678f28b066391aa1c70dcca62324e7678ddd0e | [
"MIT"
] | null | null | null | YouTubeAPI-Examples/8. What time do they upload videos.ipynb | hazratali-bit/30-lines-code.py | 5d678f28b066391aa1c70dcca62324e7678ddd0e | [
"MIT"
] | null | null | null | 65.9625 | 6,640 | 0.796854 | [
[
[
"# At what time does your favorite YouTubers upload videos? \n \nhttps://www.youtube.com/watch?v=-QMg39gK624&list=PLyb_C2HpOQSBJRh38CTPvsouV4SBpyt_H",
"_____no_output_____"
]
],
[
[
"from datetime import datetime, timedelta\nfrom apiclient.discovery import build",
"_____no_output_____"
],
[
"YOUTUBE_DEVELOPER_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'\nyoutube = build('youtube', 'v3', developerKey=YOUTUBE_DEVELOPER_KEY)",
"_____no_output_____"
],
[
"def get_channel(channel_name):\n return youtube.search().list(q=channel_name, type='channel', part='id,snippet').execute()['items'][0]\n\n\ndef get_videos(channel_id, part='id,snippet', limit=10):\n res = youtube.channels().list(id=channel_id, \n part='contentDetails').execute()\n playlist_id = res['items'][0]['contentDetails']['relatedPlaylists']['uploads']\n \n videos = []\n next_page_token = None\n \n while 1:\n res = youtube.playlistItems().list(playlistId=playlist_id, \n part=part, \n maxResults=min(limit, 50),\n pageToken=next_page_token).execute()\n videos += res['items']\n next_page_token = res.get('nextPageToken')\n \n if next_page_token is None or len(videos) >= limit:\n break\n\n return videos\n\ndef parse_publish_timestamp(video):\n return (datetime.strptime(video['snippet']['publishedAt'], \"%Y-%m-%dT%H:%M:%S.000Z\")\n + timedelta(hours=5, minutes=30))",
"_____no_output_____"
],
[
"channel_id = get_channel('t-series')['id']['channelId']",
"_____no_output_____"
],
[
"videos = get_videos(channel_id, limit=500)",
"_____no_output_____"
],
[
"publish_timestamps = [parse_publish_timestamp(video) for video in videos]",
"_____no_output_____"
],
[
"publish_times = [t.hour + t.minute/60 for t in publish_timestamps]",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"plt.hist(publish_times, bins=24)\nplt.xticks(range(24))\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf90bea7e52360f8fefd7239b0d938efcafeaa7 | 46,762 | ipynb | Jupyter Notebook | Modelos_sem_reducao/CNN_IDS/ModelosCodigo1D/CNN1DosIDS(16-02-2018).ipynb | AfonsoSeguro/IDS_Comportamental | 83145f815b67b2d501eb3744367aaea9b5d11cba | [
"MIT"
] | null | null | null | Modelos_sem_reducao/CNN_IDS/ModelosCodigo1D/CNN1DosIDS(16-02-2018).ipynb | AfonsoSeguro/IDS_Comportamental | 83145f815b67b2d501eb3744367aaea9b5d11cba | [
"MIT"
] | null | null | null | Modelos_sem_reducao/CNN_IDS/ModelosCodigo1D/CNN1DosIDS(16-02-2018).ipynb | AfonsoSeguro/IDS_Comportamental | 83145f815b67b2d501eb3744367aaea9b5d11cba | [
"MIT"
] | null | null | null | 135.936047 | 15,040 | 0.888178 | [
[
[
"import os\nimport tensorflow as tf\nimport numpy as np\nimport itertools\nimport matplotlib.pyplot as plt\nimport gc\nfrom datetime import datetime\nfrom sklearn.utils import shuffle\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom sklearn.metrics import confusion_matrix",
"_____no_output_____"
],
[
"input_label = []\noutput_label = []",
"_____no_output_____"
],
[
"a,b = 0,0\n\nficheiro = open(\"..\\\\Dataset\\\\16-02-2018.csv\", \"r\")\n\nficheiro.readline()\nficheiro.readline()\nficheiro.readline()\n\nlinha = ficheiro.readline()\nwhile(linha != \"\"):\n linha = linha.split(\",\")\n out = linha.pop(19)\n if(out == \"Benign\"): \n out = 0\n b += 1\n else: \n out = 1\n a += 1\n output_label.append(out)\n input_label.append(linha)\n linha = ficheiro.readline()\n \nficheiro.close()\nprint(str(a) + \" \" + str(b))",
"601802 446772\n"
],
[
"scaler = MinMaxScaler(feature_range=(0,1))\nscaler.fit(input_label)\ninput_label = scaler.transform(input_label)",
"_____no_output_____"
],
[
"input_label = np.array(input_label).reshape(len(input_label), 78, 1)\noutput_label = np.array(output_label)",
"_____no_output_____"
],
[
"input_label, output_label = shuffle(input_label, output_label)",
"_____no_output_____"
],
[
"inp_train, inp_test, out_train, out_test = train_test_split(input_label, output_label, test_size = 0.2)",
"_____no_output_____"
],
[
"model = keras.Sequential([\n layers.Conv1D(filters = 128, kernel_size = 3, input_shape = (78,1), padding = \"same\", activation = \"relu\", use_bias = True), \n layers.MaxPool1D(),\n layers.Conv1D(filters = 64, kernel_size = 3, padding = \"same\", activation = \"relu\", use_bias = True),\n layers.MaxPool1D(),\n layers.Conv1D(filters = 32, kernel_size = 3, padding = \"same\", activation = \"relu\", use_bias = True),\n layers.MaxPool1D(),\n layers.Flatten(),\n layers.Dense(units = 2, activation = \"softmax\")\n])",
"_____no_output_____"
],
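A small, self-contained sanity check of the architecture above: each `MaxPool1D` (default `pool_size=2`, 'valid' padding) roughly halves the temporal dimension, so the 78-step input shrinks to 9 steps before `Flatten`. The loop below only reproduces that arithmetic; it is an illustration, not part of the training pipeline.

```python
# Output length of a pool_size=2, stride-2, 'valid' pooling layer.
length = 78
for stage in range(3):
    length = (length - 2) // 2 + 1
    print(f"after pooling stage {stage + 1}: {length} time steps")
# 78 -> 39 -> 19 -> 9, so Flatten feeds 9 * 32 = 288 features into the 2-unit softmax.
```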
[
"model.compile(optimizer= keras.optimizers.SGD(learning_rate= 0.08), loss=\"sparse_categorical_crossentropy\", metrics=['accuracy'])",
"_____no_output_____"
],
[
"treino1 = model.fit(x = inp_train, y = out_train, validation_split= 0.1, epochs = 10, shuffle = True,verbose = 1)",
"Epoch 1/10\n 4364/23593 [====>.........................] - ETA: 2:30 - loss: 0.6834 - accuracy: 0.5721"
],
[
"plt.plot(treino1.history[\"loss\"])\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(treino1.history[\"accuracy\"])\nplt.show()",
"_____no_output_____"
],
[
"model.save(\"CNN1DosNet(16-02-2018).h5\")",
"_____no_output_____"
],
[
"res = [np.argmax(resu) for resu in model.predict(inp_test)]",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_true = out_test.reshape(len(out_test)), y_pred = np.array(res))",
"_____no_output_____"
],
[
"def plot_confusion_matrix(cm, classes, normaliza = False, title = \"Confusion matrix\", cmap = plt.cm.Blues):\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n if normaliza:\n cm = cm.astype('float') / cm.sum(axis = 1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print(\"Confusion matrix, without normalization\")\n \n print(cm)\n \n thresh = cm.max() / 2\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, cm[i, j],\n horizontalalignment=\"center\",\n color=\"white\" if cm[i,j] > thresh else \"black\")\n \n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')",
"_____no_output_____"
],
[
"labels = [\"Benign\", \"Dos\"]\nplot_confusion_matrix(cm = cm, classes = labels, title = \"Dos IDS\")",
"Confusion matrix, without normalization\n[[197850 13]\n [ 16 10231]]\n"
]
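As a hedged follow-up to the confusion matrix printed above, per-class precision and recall can be derived directly from those counts (the numbers below are copied from that output, with DoS as the positive class):

```python
import numpy as np

cm = np.array([[197850, 13],
               [16, 10231]])          # rows: true Benign / DoS, cols: predicted
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"DoS precision: {precision:.4f}, recall: {recall:.4f}")
```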
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf9120b7c18c2b06eaba57e55397f20959854e7 | 8,601 | ipynb | Jupyter Notebook | Training/Engagement Model.ipynb | The-revolutionary-army/Engagement-and-comprehension-level-detection | d013f989417c8f0ecce7627de5ea480dc8cc185b | [
"MIT"
] | 4 | 2021-07-25T15:59:06.000Z | 2022-03-15T07:33:41.000Z | Training/Engagement Model.ipynb | The-revolutionary-army/Engagement-and-comprehension-level-detection | d013f989417c8f0ecce7627de5ea480dc8cc185b | [
"MIT"
] | null | null | null | Training/Engagement Model.ipynb | The-revolutionary-army/Engagement-and-comprehension-level-detection | d013f989417c8f0ecce7627de5ea480dc8cc185b | [
"MIT"
] | null | null | null | 28.292763 | 161 | 0.542379 | [
[
[
"import cv2\nfrom pathlib import Path\nfrom random import *\nimport tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom skimage.feature import hog\nfrom imutils import face_utils\nimport dlib\nimport os\nimport pickle\nnp.random.seed(1000)\n",
"_____no_output_____"
],
[
"physical_devices = tf.config.experimental.list_physical_devices('GPU')\nif len(physical_devices) > 0:\n tf.config.experimental.set_memory_growth(physical_devices[0], True)",
"_____no_output_____"
],
[
"frames = []\nlabels = []\nfor file in os.listdir('output/'):\n if file[-10:] == 'frames.pkl':\n with open('output/'+file, 'rb') as f:\n frames.append(pickle.load(f))\n elif file[-10:] == 'labels.pkl':\n with open('output/'+file, 'rb') as f:\n labels.append(pickle.load(f))",
"_____no_output_____"
],
[
"print(len(frames), len(labels))",
"9067 9067\n"
],
[
"from sklearn.model_selection import train_test_split\ntrain_clips, test_clips, train_clips_labels, test_clips_labels = \\\n train_test_split(frames, labels, test_size=0.2, random_state=42)",
"_____no_output_____"
],
[
"train_images, test_images, train_labels, test_labels = [], [], [], []\n\nfor clip, label in zip(train_clips, train_clips_labels):\n try:\n train_images, train_labels = train_images + clip, train_labels + [label[0]] * len(clip)\n except:\n continue\n\nfor clip, label in zip(test_clips, test_clips_labels):\n try:\n test_images, test_labels = test_images + clip, test_labels + [label[0]] * len(clip)\n except:\n continue\n \nprint(len(train_images), len(train_labels), len(test_images), len(test_labels))",
"1944475 1944475 482212 482212\n"
],
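The flattening loop above rebuilds the accumulator lists on every iteration (`train_images + clip` copies the whole list each time), which is quadratic in the number of frames. A minimal sketch of an equivalent approach with `list.extend` follows; it assumes each clip is a list of frames and each label is a non-empty sequence, and `flatten_clips` is a hypothetical helper name.

```python
def flatten_clips(clips, clip_labels):
    # Accumulate frames and per-frame labels without copying the lists on each step.
    images, labels = [], []
    for clip, label in zip(clips, clip_labels):
        if len(clip) == 0 or len(label) == 0:   # skip empty clips / missing labels
            continue
        images.extend(clip)
        labels.extend([label[0]] * len(clip))
    return images, labels

# Hypothetical usage mirroring the cell above:
# train_images, train_labels = flatten_clips(train_clips, train_clips_labels)
```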
[
"clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))\nfor i in range(len(train_images)):\n train_images[i] = clahe.apply(train_images[i])\nfor i in range(len(test_images)):\n test_images[i] = clahe.apply(test_images[i])",
"_____no_output_____"
],
[
"train_images, test_images, train_labels, test_labels = np.asarray(train_images), np.asarray(test_images), np.asarray(train_labels), np.asarray(test_labels)",
"_____no_output_____"
],
[
"test_images = np.expand_dims(test_images, axis=3)\ntrain_images = np.expand_dims(train_images, axis=3)",
"_____no_output_____"
],
[
"train_labels //= 2\ntest_labels //= 2",
"_____no_output_____"
],
[
"model = keras.Sequential([\n keras.layers.Conv2D(filters=288, input_shape=(48,48,1), kernel_size=3, padding='valid', activation='elu'),\n keras.layers.BatchNormalization(axis=-1),\n keras.layers.Conv2D(filters=288, kernel_size=1, padding='valid', activation='elu'),\n keras.layers.Conv2D(filters=288, kernel_size=3, strides=2, padding='valid', activation='elu'),\n keras.layers.BatchNormalization(axis=-1),\n keras.layers.Dropout(0.5),\n keras.layers.Conv2D(filters=144, kernel_size=3, padding='valid', activation='elu'),\n keras.layers.BatchNormalization(axis=-1),\n keras.layers.Conv2D(filters=144, kernel_size=1, padding='valid', activation='elu'),\n keras.layers.Conv2D(filters=144, kernel_size=1, padding='valid', activation='elu'),\n keras.layers.MaxPooling2D(pool_size=3, strides=2),\n keras.layers.Dropout(0.5),\n keras.layers.Conv2D(filters=48, kernel_size=3, padding='valid', activation='elu'), \n keras.layers.BatchNormalization(axis=-1),\n keras.layers.Conv2D(filters=48, kernel_size=3, padding='valid', activation='elu'), \n keras.layers.BatchNormalization(axis=-1),\n keras.layers.Conv2D(filters=48, kernel_size=3, padding='valid', activation='elu'), \n keras.layers.BatchNormalization(axis=-1),\n keras.layers.Conv2D(filters=4, kernel_size=1, padding='valid', activation='elu'),\n keras.layers.GlobalAveragePooling2D(),\n keras.layers.Dense(2, activation='softmax')\n])",
"_____no_output_____"
],
[
"model.compile(optimizer='adamax',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
],
[
"weights = {0:8.46498598, 1:0.53138737}\nprint(weights)",
"{0: 8.46498598, 1: 0.53138737}\n"
],
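The printed weights look like inverse-frequency class weights. A sketch of how such values could be computed instead of hard-coded is below; the label array is hypothetical and only illustrates the call.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.array([0, 1])
labels = np.array([0] * 60 + [1] * 940)        # hypothetical imbalanced labels
w = compute_class_weight(class_weight='balanced', classes=classes, y=labels)
weights = {int(c): float(wi) for c, wi in zip(classes, w)}
print(weights)                                 # e.g. {0: 8.33, 1: 0.53}
```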
[
"model.fit(train_images, train_labels, epochs=5, class_weight=weights, batch_size=50)",
"WARNING:tensorflow:sample_weight modes were coerced from\n ...\n to \n ['...']\nTrain on 1944475 samples\nEpoch 1/5\n1944475/1944475 [==============================] - 2088s 1ms/sample - loss: 0.3793 - accuracy: 0.8154\nEpoch 2/5\n1944475/1944475 [==============================] - 1982s 1ms/sample - loss: 0.2377 - accuracy: 0.8922\nEpoch 3/5\n1944475/1944475 [==============================] - 2085s 1ms/sample - loss: 0.1938 - accuracy: 0.9139\nEpoch 4/5\n1944475/1944475 [==============================] - 2046s 1ms/sample - loss: 0.1686 - accuracy: 0.9257\nEpoch 5/5\n1944475/1944475 [==============================] - 1981s 1ms/sample - loss: 0.1530 - accuracy: 0.9331\n"
],
[
"test_loss, test_acc = model.evaluate(test_images, test_labels)",
"482212/482212 [==============================] - 158s 328us/sample - loss: 0.4674 - accuracy: 0.8659\n"
],
[
"model.save('engagement85.h5')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf914db00e8b4ce07156c7ab79c1f2d4cbb51ce | 316,393 | ipynb | Jupyter Notebook | day04/IN CLASS - PCA.ipynb | flight505/Applied_AI_IT_Uni | b1d766eccdd964d5f7d9315a215ba810930ba003 | [
"MIT"
] | null | null | null | day04/IN CLASS - PCA.ipynb | flight505/Applied_AI_IT_Uni | b1d766eccdd964d5f7d9315a215ba810930ba003 | [
"MIT"
] | null | null | null | day04/IN CLASS - PCA.ipynb | flight505/Applied_AI_IT_Uni | b1d766eccdd964d5f7d9315a215ba810930ba003 | [
"MIT"
] | null | null | null | 316,393 | 316,393 | 0.946042 | [
[
[
"# PCA for dimensionality reduction in `sklearn`",
"_____no_output_____"
],
[
"In the lecture we covered the theory behind one of the main dimensionality reduction techniques: Principal Component Analysis. In this notebook we go through how to implement it in Python using `sklearn`.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nfrom sklearn.datasets import load_iris\n\nsns.set_theme()",
"_____no_output_____"
]
],
[
[
"## Loading up the data and exploring it",
"_____no_output_____"
],
[
"One classical example is the iris dataset. We will perform dimensionality reduction on it. Let's first load it and put it in a DataFrame.",
"_____no_output_____"
]
],
[
[
"# Loading the iris dataset.\niris_dataset = load_iris()\n\nprint(type(iris_dataset))\nprint(iris_dataset.keys())\nprint(iris_dataset[\"data\"].shape)\nprint(iris_dataset[\"feature_names\"])\nprint(iris_dataset[\"target\"])\nprint(iris_dataset[\"target_names\"])",
"<class 'sklearn.utils.Bunch'>\ndict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names', 'filename'])\n(150, 4)\n['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']\n[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2\n 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2\n 2 2]\n['setosa' 'versicolor' 'virginica']\n"
]
],
[
[
"This dataset contains 150 rows with descriptions of flowers, categorized among three different types. It is usually used as a first example for classification tasks, but we will use it today for showcasing several clustering algorithms, and for clustering later on.\n\nWe can create a pandas DataFrame before using `sns.pairplot`.",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(\n data=iris_dataset[\"data\"],\n columns=iris_dataset[\"feature_names\"]\n)\n\ndf[\"class\"] = iris_dataset[\"target\"]\n\nsns.pairplot(data=df, hue=\"class\", corner=True)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"## Understanding the principal components",
"_____no_output_____"
],
[
"### Fitting the PCA",
"_____no_output_____"
],
[
"We can perform PCA automatically using the `PCA` object from `sklearn.decomposition`. Let's first preprocess the data using a standard scaler.",
"_____no_output_____"
]
],
[
[
"from sklearn.decomposition import PCA\nfrom sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\nX = scaler.fit_transform(iris_dataset[\"data\"])\n\npca = PCA()\npca.fit(X)",
"_____no_output_____"
]
],
[
[
"After fitting, we can access many the things we discussed in class: the principal components, the variance explained by them...",
"_____no_output_____"
]
],
[
[
"print(pca.components_)\nprint(pca.explained_variance_)\nprint(pca.explained_variance_ratio_)",
"[[ 0.52106591 -0.26934744 0.5804131 0.56485654]\n [ 0.37741762 0.92329566 0.02449161 0.06694199]\n [-0.71956635 0.24438178 0.14212637 0.63427274]\n [-0.26128628 0.12350962 0.80144925 -0.52359713]]\n[2.93808505 0.9201649 0.14774182 0.02085386]\n[0.72962445 0.22850762 0.03668922 0.00517871]\n"
]
],
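A quick numerical check of the two properties mentioned above, assuming the `pca` object fitted in the previous cell is still in scope: the principal components form an orthonormal basis, and with all four components kept the explained-variance ratios sum to one.

```python
import numpy as np

print(np.allclose(pca.components_ @ pca.components_.T, np.eye(4)))   # orthonormal rows
print(np.isclose(pca.explained_variance_ratio_.sum(), 1.0))          # full decomposition
```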
[
[
"As we discussed, these principal components are the orthogonal directions in $\\mathbb{R}^4$ that maximize variance in the data, ordered by the amount of variance they explain.",
"_____no_output_____"
],
[
"### Projecting to two dimensions",
"_____no_output_____"
],
[
"These are two line plots we usually get for the principal components:",
"_____no_output_____"
]
],
[
[
"sns.barplot(x=[1, 2, 3, 4], y=pca.explained_variance_ratio_)",
"_____no_output_____"
],
[
"sns.lineplot(x=[1,2,3,4], y=np.cumsum(pca.explained_variance_ratio_))",
"_____no_output_____"
]
],
[
[
"**There is no golden rule for selecting the number of principal components to project to.** Let's go with two, that already explain a little bit over 95% of the variance.\n\nOne way of finding the low dimensional representation of the data `z` is to fit another `PCA` object, and passing `n_components=2` as its flag.",
"_____no_output_____"
]
],
[
[
"pca_2 = PCA(n_components=2)\nz = pca_2.fit_transform(X)",
"_____no_output_____"
],
[
"sns.scatterplot(x=z[:, 0], y=z[:, 1])",
"_____no_output_____"
]
],
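To connect this back to the theory, the transform is just centering followed by a projection onto the two principal directions. The check below assumes `X`, `z` and `pca_2` from the previous cells are still in scope.

```python
import numpy as np

z_manual = (X - X.mean(axis=0)) @ pca_2.components_.T   # center, then project
print(np.allclose(z_manual, z))                         # should print True
```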
[
[
"Are these principal components representative of the classes in the data? Are the classes being separated? We can check by illuminating with the actual class:",
"_____no_output_____"
]
],
[
[
"sns.scatterplot(x=z[:, 0], y=z[:, 1], hue=iris_dataset.target)",
"_____no_output_____"
]
],
[
[
"## Using other dimensionality reduction algorithms out-of-the-box",
"_____no_output_____"
],
[
"One great thing about how `sklearn` is implemented is that we can use *almost* the same code to get low dimensional embeddings for multiple algorithms. Let me show you how to do it for, for example, t-SNE and Isomap:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE, Isomap\n\ntsne = TSNE(n_components=2)\nisomap = Isomap(n_components=2)\n\nz_tsne = tsne.fit_transform(X)\nz_isomap = isomap.fit_transform(X)\n\n_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(3*7, 1*7))\n\nax1.set_title(\"DimRed with PCA\")\nax2.set_title(\"DimRed with t-SNE\")\nax3.set_title(\"DimRed with Isomap\")\n\nsns.scatterplot(x=z[:, 0], y=z[:, 1], hue=iris_dataset[\"target\"], ax=ax1)\nsns.scatterplot(x=z_tsne[:, 0], y=z_tsne[:, 1], hue=iris_dataset[\"target\"], ax=ax2)\nsns.scatterplot(x=z_isomap[:, 0], y=z_isomap[:, 1], hue=iris_dataset[\"target\"], ax=ax3)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ecf91b6c7f4e4457f19c2c5a352672460bfff6c3 | 344,472 | ipynb | Jupyter Notebook | notebooks/TimeNormalization.ipynb | modenaxe/BMC | b6f6e473878ab7b0c19430d1b66b6dba09059c63 | [
"MIT"
] | 1 | 2018-06-23T20:09:07.000Z | 2018-06-23T20:09:07.000Z | notebooks/TimeNormalization.ipynb | modenaxe/BMC | b6f6e473878ab7b0c19430d1b66b6dba09059c63 | [
"MIT"
] | null | null | null | notebooks/TimeNormalization.ipynb | modenaxe/BMC | b6f6e473878ab7b0c19430d1b66b6dba09059c63 | [
"MIT"
] | 1 | 2019-01-02T23:17:40.000Z | 2019-01-02T23:17:40.000Z | 464.247978 | 49,762 | 0.916748 | [
[
[
"# Time normalization of data\n\n> Marcos Duarte \n> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) \n> Federal University of ABC, Brazil",
"_____no_output_____"
],
[
"Time normalization is usually employed for the temporal alignment of cyclic data obtained from different trials with different duration (number of points). The most simple and common procedure for time normalization used in Biomechanics and Motor Control is known as the normalization to percent cycle (although it might not be the most adequate procedure in certain cases ([Helwig et al., 2011](http://www.sciencedirect.com/science/article/pii/S0021929010005038)).\n\nIn the percent cycle, a fixed number (typically a temporal base from 0 to 100%) of new equally spaced data is created based on the old data with a mathematical procedure known as interpolation. \n**Interpolation** is the estimation of new data points within the range of known data points. This is different from **extrapolation**, the estimation of data points outside the range of known data points. \nTime normalization of data using interpolation is a simple procedure and it doesn't matter if the original data have more or less data points than desired.\n\nThe Python function `tnorm.py` (code at the end of this text) implements the normalization to percent cycle procedure for time normalization. The function signature is: \n```python\nyn, tn, indie = tnorm(y, axis=0, step=1, k=3, smooth=0, mask=None,\n nan_at_ext='delete', show=False, ax=None)\n``` \nLet's see now how to perform interpolation and time normalization; first let's import the necessary Python libraries and configure the environment:",
"_____no_output_____"
]
],
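Before going through the notebook's example, here is a minimal, self-contained illustration of why this matters (not part of `tnorm.py`): two trials of the same cyclic movement recorded with different numbers of samples can only be averaged point by point after both are resampled to a common 0-100% base.

```python
import numpy as np

trial_a = np.sin(np.linspace(0, np.pi, 87))    # one trial, 87 samples
trial_b = np.sin(np.linspace(0, np.pi, 132))   # another trial, 132 samples

tn = np.linspace(0, 100, 101)                  # common percent-cycle base
a_n = np.interp(tn, np.linspace(0, 100, trial_a.size), trial_a)
b_n = np.interp(tn, np.linspace(0, 100, trial_b.size), trial_b)

mean_curve = (a_n + b_n) / 2                   # point-by-point averaging is now valid
print(mean_curve.shape)                        # (101,)
```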
[
[
"# Import the necessary libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport sys\nsys.path.insert(1, r'./../functions') # add to pythonpath",
"_____no_output_____"
]
],
[
[
"For instance, consider the data shown next. The time normalization of these data to represent a cycle from 0 to 100%, with a step of 1% (101 data points) is:",
"_____no_output_____"
]
],
[
[
"y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3]\nprint(\"y data:\")\ny",
"y data:\n"
],
[
"t = np.linspace(0, 100, len(y)) # time vector for the original data\ntn = np.linspace(0, 100, 101) # new time vector for the new time-normalized data\nyn = np.interp(tn, t, y) # new time-normalized data\nprint(\"y data interpolated to 101 points:\")\nyn",
"y data interpolated to 101 points:\n"
]
],
[
[
"The key is the Numpy `interp` function, from its help: \n\n>interp(x, xp, fp, left=None, right=None) \n>One-dimensional linear interpolation. \n>Returns the one-dimensional piecewise linear interpolant to a function with given values at discrete data-points.\n\nA plot of the data will show what we have done:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,5))\nplt.plot(t, y, 'bo-', lw=2, label='original data')\nplt.plot(tn, yn, '.-', color=[1, 0, 0, .5], lw=2, label='time normalized')\nplt.legend(loc='best', framealpha=.5)\nplt.xlabel('Cycle [%]')\nplt.show()",
"_____no_output_____"
]
],
[
[
"The function `tnorm.py` implements this kind of normalization with option for a different interpolation than the linear one used, deal with missing points in the data (if these missing points are not at the extremities of the data because the interpolation function can not extrapolate data), other things. \nLet's see the `tnorm.py` examples:",
"_____no_output_____"
]
],
[
[
"from tnorm import tnorm",
"_____no_output_____"
],
[
" >>> # Default options: cubic spline interpolation passing through\n >>> # each datum, 101 points, and no plot\n >>> y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3]\n >>> tnorm(y)",
"_____no_output_____"
],
[
" >>> # Linear interpolation passing through each datum\n >>> yn, tn, indie = tnorm(y, k=1, smooth=0, mask=None, show=True)",
"_____no_output_____"
],
[
" >>> # Cubic spline interpolation with smoothing\n >>> yn, tn, indie = tnorm(y, k=3, smooth=1, mask=None, show=True)",
"_____no_output_____"
],
[
" >>> # Cubic spline interpolation with smoothing and 50 points\n >>> x = np.linspace(-3, 3, 60)\n >>> y = np.exp(-x**2) + np.random.randn(60)/10\n >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True)",
"_____no_output_____"
],
[
" >>> # Deal with missing data (use NaN as mask)\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> y[:10] = np.NaN # first ten points are missing\n >>> y[30: 41] = np.NaN # make other 10 missing points\n >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True)",
"_____no_output_____"
],
[
" >>> # Deal with missing data at the extremities replacing by first/last not-NaN\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> y[0:10] = np.NaN # first ten points are missing\n >>> y[-10:] = np.NaN # last ten points are missing\n >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, nan_at_ext='replace', show=True)",
"_____no_output_____"
],
[
" >>> # Deal with missing data at the extremities replacing by first/last not-NaN\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> y[0:10] = np.NaN # first ten points are missing\n >>> y[-10:] = np.NaN # last ten points are missing\n >>> yn, tn, indie = tnorm(y, step=-50, k=1, smooth=0, nan_at_ext='replace', show=True)",
"_____no_output_____"
],
[
" >>> # Deal with 2-D array\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> y = np.vstack((y-1, y[::-1])).T\n >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True)",
"_____no_output_____"
]
],
[
[
"## Function tnorm.py",
"_____no_output_____"
]
],
[
[
"# %load './../functions/tnorm.py'\n\"\"\"Time normalization (from 0 to 100% with step interval).\"\"\"\n\nimport numpy as np\n\n__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'\n__version__ = \"1.0.6\"\n__license__ = \"MIT\"\n\n\ndef tnorm(y, axis=0, step=1, k=3, smooth=0, mask=None, nan_at_ext='delete',\n show=False, ax=None):\n \"\"\"Time normalization (from 0 to 100% with step interval).\n\n Time normalization is usually employed for the temporal alignment of data\n obtained from different trials with different duration (number of points).\n This code implements a procedure knwown as the normalization to percent\n cycle.\n\n This code can perform simple linear interpolation passing through each\n datum or spline interpolation (up to quintic splines) passing through each\n datum (knots) or not (in case a smoothing parameter > 0 is inputted).\n\n NaNs and any value inputted as a mask parameter and that appears at the\n extremities might be removed or replaced by the first/last not-NaN value\n before the interpolation because this code does not perform extrapolation.\n For a 2D array, the entire row with NaN or a mask value at the extermity\n might be removed because of alignment issues with the data from different\n columns. As result, if there is a column of only NaNs in the data, the\n time normalization can't be performed (an empty NaNs and any value\n inputted as a mask parameter and that appears in the middle of the data\n (which may represent missing data) are ignored and the interpolation is\n performed through these points.\n\n See this IPython notebook [2]_.\n\n Parameters\n ----------\n y : 1-D or 2-D array_like\n Array of independent input data. Must be increasing.\n If 2-D array, the data in each axis will be interpolated.\n axis : int, 0 or 1, optional (default = 0)\n Axis along which the interpolation is performed.\n 0: data in each column are interpolated; 1: for row interpolation\n step : float or int, optional (default = 1)\n Interval from 0 to 100% to resample y or the number of points y\n should be interpolated. In the later case, the desired number of\n points should be expressed with step as a negative integer.\n For instance, step = 1 or step = -101 will result in the same\n number of points at the interpolation (101 points).\n If step == 0, the number of points will be the number of data in y.\n k : int, optional (default = 3)\n Degree of the smoothing spline. 
Must be 1 <= k <= 5.\n If 3, a cubic spline is used.\n The number of data points must be larger than k.\n smooth : float or None, optional (default = 0)\n Positive smoothing factor used to choose the number of knots.\n If 0, spline will interpolate through all data points.\n If None, smooth=len(y).\n mask : None or float, optional (default = None)\n Mask to identify missing values which will be ignored.\n It can be a list of values.\n NaN values will be ignored and don't need to be in the mask.\n nan_at_ext : string, optional (default = 'delete')\n Method to deal with NaNs at the extremities.\n 'delete' will delete any NaN at the extremities (the corresponding\n entire row in `y` for a 2-D array).\n 'replace' will replace any NaN at the extremities by first/last\n not-NaN value in `y`.\n show : bool, optional (default = False)\n True (1) plot data in a matplotlib figure.\n False (0) to not plot.\n ax : a matplotlib.axes.Axes instance, optional (default = None).\n\n Returns\n -------\n yn : 1-D or 2-D array\n Interpolated data (if axis == 0, column oriented for 2-D array).\n tn : 1-D array\n New x values (from 0 to 100) for the interpolated data.\n inds : list\n Indexes of first and last rows without NaNs at the extremities of `y`.\n If there is no NaN in the data, this list is [0, y.shape[0]-1].\n\n Notes\n -----\n This code performs interpolation to create data with the desired number of\n points using a one-dimensional smoothing spline fit to a given set of data\n points (scipy.interpolate.UnivariateSpline function).\n\n References\n ----------\n .. [1] http://www.sciencedirect.com/science/article/pii/S0021929010005038\n .. [2] http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/TimeNormalization.ipynb\n\n See Also\n --------\n scipy.interpolate.UnivariateSpline:\n One-dimensional smoothing spline fit to a given set of data points.\n\n Examples\n --------\n >>> # Default options: cubic spline interpolation passing through\n >>> # each datum, 101 points, and no plot\n >>> y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3]\n >>> tnorm(y)\n\n >>> # Linear interpolation passing through each datum\n >>> y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3]\n >>> yn, tn, indie = tnorm(y, k=1, smooth=0, mask=None, show=True)\n\n >>> # Cubic spline interpolation with smoothing\n >>> y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3]\n >>> yn, tn, indie = tnorm(y, k=3, smooth=1, mask=None, show=True)\n\n >>> # Cubic spline interpolation with smoothing and 50 points\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True)\n\n >>> # Deal with missing data (use NaN as mask)\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> y[:10] = np.NaN # first ten points are missing\n >>> y[30: 41] = np.NaN # make other 10 missing points\n >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True)\n\n >>> # Deal with missing data at the extremities replacing by first/last not-NaN\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> y[0:10] = np.NaN # first ten points are missing\n >>> y[-10:] = np.NaN # last ten points are missing\n >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, nan_at_ext='replace', show=True)\n\n >>> # Deal with missing data at the extremities replacing by first/last not-NaN\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> y[0:10] = np.NaN # first ten points are missing\n >>> y[-10:] = np.NaN # last 
ten points are missing\n >>> yn, tn, indie = tnorm(y, step=-50, k=1, smooth=0, nan_at_ext='replace', show=True)\n\n >>> # Deal with 2-D array\n >>> x = np.linspace(-3, 3, 100)\n >>> y = np.exp(-x**2) + np.random.randn(100)/10\n >>> y = np.vstack((y-1, y[::-1])).T\n >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True)\n\n Version history\n ---------------\n '1.0.6':\n Deleted 'from __future__ import ...'\n Added parameter `nan_at_ext`\n Adjusted outputs to have always the same type\n\n \"\"\"\n\n from scipy.interpolate import UnivariateSpline\n\n y = np.asarray(y)\n if axis:\n y = y.T\n if y.ndim == 1:\n y = np.reshape(y, (-1, 1))\n # turn mask into NaN\n if mask is not None:\n y[y == mask] = np.NaN\n\n iini = 0\n iend = y.shape[0]-1\n if nan_at_ext.lower() == 'delete':\n # delete rows with missing values at the extremities\n while y.size and np.isnan(np.sum(y[0])):\n y = np.delete(y, 0, axis=0)\n iini += 1\n while y.size and np.isnan(np.sum(y[-1])):\n y = np.delete(y, -1, axis=0)\n iend -= 1\n else:\n # replace NaN at the extremities by first/last not-NaN\n if np.any(np.isnan(y[0])):\n for col in range(y.shape[1]):\n ind_not_nan = np.nonzero(~np.isnan(y[:, col]))[0]\n if ind_not_nan.size:\n y[0, col] = y[ind_not_nan[0], col]\n else:\n y = np.empty((0, 0))\n break\n if np.any(np.isnan(y[-1])):\n for col in range(y.shape[1]):\n ind_not_nan = np.nonzero(~np.isnan(y[:, col]))[0]\n if ind_not_nan.size:\n y[-1, col] = y[ind_not_nan[-1], col]\n else:\n y = np.empty((0, 0))\n break\n\n # check if there are still data\n if not y.size:\n return np.empty((0, 0)), np.empty(0), []\n if y.size == 1:\n return y.flatten(), np.array(0), [0, 0]\n\n indie = [iini, iend]\n\n t = np.linspace(0, 100, y.shape[0])\n if step == 0:\n tn = t\n elif step > 0:\n tn = np.linspace(0, 100, np.round(100 / step + 1))\n else:\n tn = np.linspace(0, 100, -step)\n yn = np.empty([tn.size, y.shape[1]]) * np.NaN\n for col in np.arange(y.shape[1]):\n # ignore NaNs inside data for the interpolation\n ind = np.isfinite(y[:, col])\n if np.sum(ind) > 1: # at least two points for the interpolation\n spl = UnivariateSpline(t[ind], y[ind, col], k=k, s=smooth)\n yn[:, col] = spl(tn)\n\n if show:\n _plot(t, y, ax, tn, yn)\n\n if axis:\n y = y.T\n if yn.shape[1] == 1:\n yn = yn.flatten()\n\n return yn, tn, indie\n\n\ndef _plot(t, y, ax, tn, yn):\n \"\"\"Plot results of the tnorm function, see its help.\"\"\"\n try:\n import matplotlib.pyplot as plt\n except ImportError:\n print('matplotlib is not available.')\n else:\n if ax is None:\n _, ax = plt.subplots(1, 1, figsize=(8, 5))\n\n ax.set_prop_cycle('color', ['b', 'r', 'b', 'g', 'b', 'y', 'b', 'c', 'b', 'm'])\n #ax.set_color_cycle(['b', 'r', 'b', 'g', 'b', 'y', 'b', 'c', 'b', 'm'])\n for col in np.arange(y.shape[1]):\n if y.shape[1] == 1:\n ax.plot(t, y[:, col], 'o-', lw=1, label='Original data')\n ax.plot(tn, yn[:, col], '.-', lw=2,\n label='Interpolated')\n else:\n ax.plot(t, y[:, col], 'o-', lw=1)\n ax.plot(tn, yn[:, col], '.-', lw=2, label='Col= %d' % col)\n ax.locator_params(axis='y', nbins=7)\n ax.legend(fontsize=12, loc='best', framealpha=.5, numpoints=1)\n plt.xlabel('[%]')\n plt.tight_layout()\n plt.show()\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf928b145dc2bc88bbc40cb02f50dc53b12aafc | 4,908 | ipynb | Jupyter Notebook | jupyter/object_detection_with_model_zoo.ipynb | keerthanvasist/djl | 9cd66c953d3e079c507e75c8f199d34656199957 | [
"Apache-2.0"
] | null | null | null | jupyter/object_detection_with_model_zoo.ipynb | keerthanvasist/djl | 9cd66c953d3e079c507e75c8f199d34656199957 | [
"Apache-2.0"
] | 1 | 2019-12-06T20:34:53.000Z | 2019-12-06T20:34:53.000Z | jupyter/object_detection_with_model_zoo.ipynb | keerthanvasist/djl | 9cd66c953d3e079c507e75c8f199d34656199957 | [
"Apache-2.0"
] | null | null | null | 27.573034 | 230 | 0.590057 | [
[
[
"# Object detection with model zoo model\n\nIn this tutorial, you learn how to use a built-in model zoo model (SSD) to achieve an [object detection](https://en.wikipedia.org/wiki/Object_detection) task.\n\n## Preparation\n\nThis tutorial requires the installation of Java Kernel. To install Java Kernel, see the [README](https://github.com/awslabs/djl/blob/master/jupyter/README.md).",
"_____no_output_____"
]
],
[
[
"%mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/\n\n%maven ai.djl:api:0.3.0-SNAPSHOT\n%maven ai.djl:repository:0.3.0-SNAPSHOT\n%maven ai.djl.mxnet:mxnet-engine:0.3.0-SNAPSHOT\n%maven ai.djl.mxnet:mxnet-model-zoo:0.3.0-SNAPSHOT\n%maven org.slf4j:slf4j-api:1.7.26\n%maven org.slf4j:slf4j-simple:1.7.26\n%maven net.java.dev.jna:jna:5.3.0",
"_____no_output_____"
]
],
[
[
"### Include MXNet engine dependency\n\nThis tutorial uses MXNet engine as its backend. MXNet has different [build flavor](https://mxnet.apache.org/get_started?version=v1.5.1&platform=linux&language=python&environ=pip&processor=cpu) and it is platform specific.\nPlease read [here](https://github.com/awslabs/djl/blob/master/examples/README.md#engine-selection) for how to select MXNet engine flavor.",
"_____no_output_____"
]
],
[
[
"String osName = System.getProperty(\"os.name\");\nString classifier = osName.startsWith(\"Mac\") ? \"osx-x86_64\" : osName.startsWith(\"Win\") ? \"win-x86_64\" : \"linux-x86_64\";\n\n\n%maven ai.djl.mxnet:mxnet-native-mkl:jar:${classifier}:1.6.0-c-SNAPSHOT",
"_____no_output_____"
],
[
"import java.awt.image.*;\nimport java.nio.file.*;\nimport ai.djl.modality.cv.*;\nimport ai.djl.modality.cv.util.*;\nimport ai.djl.mxnet.zoo.*;\nimport ai.djl.repository.zoo.*;\nimport ai.djl.training.util.*;",
"_____no_output_____"
]
],
[
[
"## Step 1: Load image",
"_____no_output_____"
]
],
[
[
"var img = BufferedImageUtils.fromUrl(\"https://djl-ai.s3.amazonaws.com/resources/images/dog_bike_car.jpg\");\nimg",
"_____no_output_____"
]
],
[
[
"## Step 2: Load model zoo model\n\nIn this example, you load a SSD (Single Shot MultiBox Detector) model from the MXNet model zoo.\nFor more information about model zoo, see the [Model Zoo Documentation](https://github.com/awslabs/djl/blob/master/docs/model-zoo.md) ",
"_____no_output_____"
]
],
[
[
"var model = MxModelZoo.SSD.loadModel(new ProgressBar());",
"_____no_output_____"
]
],
[
[
"## Step 3: Create Predictor and detect an object in the image",
"_____no_output_____"
]
],
[
[
"var detections = model.newPredictor().predict(img);\n\ndetections",
"_____no_output_____"
]
],
[
[
"## Check detected result",
"_____no_output_____"
]
],
[
[
"ImageVisualization.drawBoundingBoxes(img, detections);\nimg",
"_____no_output_____"
]
],
[
[
"## Summary\n\nUsing the model zoo model provided, you can run inference with just the following three lines of code:\n\n```\nvar img = BufferedImageUtils.fromUrl(\"https://djl-ai.s3.amazonaws.com/resources/images/dog_bike_car.jpg\");\nvar model = MxModelZoo.SSD.loadModel();\nvar detections = model.newPredictor().predict(img);\n```\n\nYou can find full SsdExample source code [here](https://github.com/awslabs/djl/blob/master/examples/docs/object_detection.md).\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecf92d1ead1178f1329ed90b5f4aeda219c52141 | 405,139 | ipynb | Jupyter Notebook | Epoch_Checking/2_LSTM_Univar - 05SHY_check_epoch.ipynb | fiona-liyc/Deep-Portfolio | 25ee700e4c0a94fcad9b71a533e9329d4ce1569d | [
"MIT"
] | null | null | null | Epoch_Checking/2_LSTM_Univar - 05SHY_check_epoch.ipynb | fiona-liyc/Deep-Portfolio | 25ee700e4c0a94fcad9b71a533e9329d4ce1569d | [
"MIT"
] | null | null | null | Epoch_Checking/2_LSTM_Univar - 05SHY_check_epoch.ipynb | fiona-liyc/Deep-Portfolio | 25ee700e4c0a94fcad9b71a533e9329d4ce1569d | [
"MIT"
] | null | null | null | 52.93167 | 281 | 0.376335 | [
[
[
"## Libs",
"_____no_output_____"
]
],
[
[
"import matplotlib as matpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport math\n\n# Using the Tensorflow backend (default).\nfrom keras.models import Sequential\nfrom keras.layers.recurrent import LSTM\nfrom keras.layers.core import Dense, Activation, Dropout\nfrom tensorflow import set_random_seed\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.utils import shuffle\nfrom keras.callbacks import EarlyStopping\n\n# advanced plotting\nimport seaborn as sns\nplt.style.use('seaborn-darkgrid')\n%matplotlib inline",
"Using TensorFlow backend.\n"
]
],
[
[
"## Data",
"_____no_output_____"
]
],
[
[
"%store -r data_SHY",
"_____no_output_____"
],
[
"mm = MinMaxScaler(feature_range = (0,1))\ndataset = mm.fit_transform(data_SHY)",
"_____no_output_____"
],
[
"split = 0.6\n\ntrain_size = int(len(dataset) * split)\n#validation\ntest_size = len(dataset) - train_size\n\ntrain, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]\n\nprint(\"training, test set: \" + str((len(train), len(test))))",
"training, test set: (313, 209)\n"
],
[
"def input_dataset(dataset, window):\n data_X, data_y = [], []\n for i in range(len(dataset) - window - 1):\n a = dataset[i:(i + window), 0]\n data_X.append(a)\n data_y.append(dataset[i + window, 0])\n return(np.array(data_X), np.array(data_y))\n",
"_____no_output_____"
],
[
"# New testing and training sets for rolling forecast.\nwindow = 1\ntrain_X, train_Y = input_dataset(train, window)\ntest_X, test_Y = input_dataset(test, window)\nprint(\"Original train shape:\")\nprint(train_X.shape)\n\n# Reshape input data to match Keras format.\ntrain_X = np.reshape(train_X, (train_X.shape[0], 1, train_X.shape[1]))\ntest_X = np.reshape(test_X, (test_X.shape[0], 1, test_X.shape[1]))\nprint(\"New train shape:\")\nprint(train_X.shape)",
"Original train shape:\n(311, 1)\nNew train shape:\n(311, 1, 1)\n"
]
],
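A tiny illustration of what `input_dataset` builds with `window=1`, using a made-up six-point series (it assumes the function defined above is in scope): each `X` sample is one value and `y` is the value that follows it. Note that the `- 1` in the range means the last available pair (`X=[4]`, `y=5`) is not used.

```python
import numpy as np

toy = np.arange(6).reshape(-1, 1)       # [[0], [1], [2], [3], [4], [5]]
toy_X, toy_y = input_dataset(toy, 1)
print(toy_X)                            # [[0] [1] [2] [3]] as a (4, 1) array
print(toy_y)                            # [1 2 3 4]
```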
[
[
"## 1 Epoch",
"_____no_output_____"
]
],
[
[
"def fit_LSTM(train_X, train_Y, window = 1, neurons=128):\n set_random_seed(3)\n model = Sequential()\n \n model.add(LSTM(neurons, \n input_shape = (1, window)\n ))\n model.add(Dense(1))\n model.compile(loss = 'mean_squared_error', \n optimizer = 'adam')\n earlyStop=EarlyStopping(monitor='val_loss',verbose=2,patience=15)\n model.fit(train_X, \n train_Y, \n epochs = 1, \n batch_size = 1,\n shuffle = False\n # verbose = 2\n )\n \n return(model)\n\n# Fit the first model.\nmodel1 = fit_LSTM(train_X, train_Y, window)",
"WARNING: Logging before flag parsing goes to stderr.\nW0919 02:28:12.462113 4604585408 deprecation_wrapper.py:119] From //anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nW0919 02:28:12.463948 4604585408 deprecation_wrapper.py:119] From //anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nW0919 02:28:12.466881 4604585408 deprecation_wrapper.py:119] From //anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nW0919 02:28:12.678099 4604585408 deprecation_wrapper.py:119] From //anaconda3/lib/python3.7/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\nW0919 02:28:12.930221 4604585408 deprecation.py:323] From //anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nW0919 02:28:13.364934 4604585408 deprecation_wrapper.py:119] From //anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n\nW0919 02:28:13.442570 4604585408 deprecation_wrapper.py:119] From //anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n\n"
],
[
"def prediction_score(model, X, Y):\n # Make predictions using input data\n pred = mm.inverse_transform(model.predict(X))\n # Show Y on original scale\n original_data = mm.inverse_transform([Y])\n # RMSE.\n score = math.sqrt(mean_squared_error(original_data[0], pred[:, 0]))\n return(score, pred)",
"_____no_output_____"
],
[
"rmse_test, test_pred = prediction_score(model1, test_X, test_Y)\nprint(\"Testing score: %.2f RMSE\" % rmse_test)",
"Testing score: 0.57 RMSE\n"
]
],
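As a reference point for these RMSE values, a naive persistence forecast (predict the previous observation) can be scored on the same split; the sketch below assumes the single-feature scaler `mm`, `test_X`/`test_Y`, `math` and `mean_squared_error` from the earlier cells.

```python
naive_pred = mm.inverse_transform(test_X.reshape(-1, 1))   # "tomorrow equals today"
actual = mm.inverse_transform(test_Y.reshape(-1, 1))
naive_rmse = math.sqrt(mean_squared_error(actual, naive_pred))
print("Persistence baseline: %.2f RMSE" % naive_rmse)
```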
[
[
"## 1000 Epochs",
"_____no_output_____"
]
],
[
[
"def fit_LSTM(train_X, train_Y, window = 1, neurons=128):\n set_random_seed(3)\n model = Sequential()\n \n model.add(LSTM(neurons, \n input_shape = (1, window)\n ))\n model.add(Dense(1))\n model.compile(loss = \"mean_squared_error\", \n optimizer = \"adam\")\n earlyStop=EarlyStopping(monitor=\"val_loss\",verbose=2,patience=15)\n model.fit(train_X, \n train_Y, \n epochs = 1000, \n batch_size = 1,\n shuffle = False\n # verbose = 2\n )\n \n return(model)\n\n# Fit the first model.\nmodel1 = fit_LSTM(train_X, train_Y, window)",
"Epoch 1/1000\n311/311 [==============================] - 5s 17ms/step - loss: 4.4452e-04\nEpoch 2/1000\n311/311 [==============================] - 3s 8ms/step - loss: 0.0028\nEpoch 3/1000\n311/311 [==============================] - 3s 8ms/step - loss: 0.0015\nEpoch 4/1000\n311/311 [==============================] - 3s 9ms/step - loss: 7.9968e-04\nEpoch 5/1000\n311/311 [==============================] - 3s 8ms/step - loss: 4.8485e-04\nEpoch 6/1000\n311/311 [==============================] - 3s 9ms/step - loss: 3.3778e-04\nEpoch 7/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.7300e-04\nEpoch 8/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.4596e-04\nEpoch 9/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.3554e-04\nEpoch 10/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.3248e-04\nEpoch 11/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.3277e-04\nEpoch 12/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.3453e-04\nEpoch 13/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.3680e-04\nEpoch 14/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.3905e-04\nEpoch 15/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.4101e-04\nEpoch 16/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.4251e-04\nEpoch 17/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.4351e-04\nEpoch 18/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.4403e-04\nEpoch 19/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.4412e-04\nEpoch 20/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.4388e-04\nEpoch 21/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.4338e-04\nEpoch 22/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.4267e-04\nEpoch 23/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.4184e-04\nEpoch 24/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.4093e-04\nEpoch 25/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.3997e-04\nEpoch 26/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.3899e-04\nEpoch 27/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.3802e-04\nEpoch 28/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.3705e-04\nEpoch 29/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.3607e-04\nEpoch 30/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.3516e-04\nEpoch 31/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.3427e-04\nEpoch 32/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.3340e-04\nEpoch 33/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.3256e-04\nEpoch 34/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.3175e-04\nEpoch 35/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.3096e-04\nEpoch 36/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.3023e-04\nEpoch 37/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2946e-04\nEpoch 38/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2874e-04\nEpoch 39/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2804e-04\nEpoch 40/1000\n311/311 [==============================] - 
2s 8ms/step - loss: 2.2738e-04\nEpoch 41/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2674e-04\nEpoch 42/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2609e-04\nEpoch 43/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.2554e-04\nEpoch 44/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.2499e-04\nEpoch 45/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.2437e-04\nEpoch 46/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2383e-04\nEpoch 47/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2326e-04\nEpoch 48/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2270e-04\nEpoch 49/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2218e-04\nEpoch 50/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2167e-04\nEpoch 51/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2121e-04\nEpoch 52/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.2073e-04\nEpoch 53/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.2025e-04\nEpoch 54/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.1979e-04\nEpoch 55/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1932e-04\nEpoch 56/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1890e-04\nEpoch 57/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.1848e-04\nEpoch 58/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1806e-04\nEpoch 59/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1765e-04\nEpoch 60/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1722e-04\nEpoch 61/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1685e-04\nEpoch 62/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.1647e-04\nEpoch 63/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.1611e-04\nEpoch 64/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1576e-04\nEpoch 65/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1542e-04\nEpoch 66/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1507e-04\nEpoch 67/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.1472e-04\nEpoch 68/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.1440e-04\nEpoch 69/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.1409e-04\nEpoch 70/1000\n311/311 [==============================] - 3s 10ms/step - loss: 2.1373e-04\nEpoch 71/1000\n311/311 [==============================] - 3s 10ms/step - loss: 2.1347e-04\nEpoch 72/1000\n311/311 [==============================] - 3s 10ms/step - loss: 2.1316e-04\nEpoch 73/1000\n311/311 [==============================] - 3s 11ms/step - loss: 2.1286e-04\nEpoch 74/1000\n311/311 [==============================] - 3s 11ms/step - loss: 2.1257e-04\nEpoch 75/1000\n311/311 [==============================] - 3s 10ms/step - loss: 2.1229e-04\nEpoch 76/1000\n311/311 [==============================] - 3s 10ms/step - loss: 2.1201e-04\nEpoch 77/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.1175e-04\nEpoch 78/1000\n311/311 [==============================] - 3s 9ms/step - loss: 2.1148e-04\nEpoch 79/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.1126e-04\nEpoch 
80/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1099e-04\nEpoch 81/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1076e-04\nEpoch 82/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1050e-04\nEpoch 83/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1027e-04\nEpoch 84/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.1004e-04\nEpoch 85/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.0977e-04\nEpoch 86/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.0964e-04\nEpoch 87/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.0944e-04\nEpoch 88/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.0919e-04\nEpoch 89/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.0899e-04\nEpoch 90/1000\n311/311 [==============================] - 3s 8ms/step - loss: 2.0877e-04\nEpoch 91/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.0856e-04\nEpoch 92/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.0837e-04\nEpoch 93/1000\n311/311 [==============================] - 2s 8ms/step - loss: 2.0817e-04\nEpoch 94/1000\n"
],
[
"rmse_test, test_pred = prediction_score(model1, test_X, test_Y)\n\nprint(\"Testing score: %.2f RMSE\" % rmse_test)",
"Testing score: 0.24 RMSE\n"
]
],
[
[
"## 2000 Epochs",
"_____no_output_____"
]
],
[
[
"rmse_test, test_pred = prediction_score(model1, test_X, test_Y)\nprint(\"Testing score: %.2f RMSE\" % rmse_test)",
"Testing score: 0.31 RMSE\n"
]
],
[
[
"## 2500 Epochs",
"_____no_output_____"
]
],
[
[
"def fit_LSTM(train_X, train_Y, window = 1, neurons=128):\n set_random_seed(3)\n model = Sequential()\n \n model.add(LSTM(neurons, \n input_shape = (1, window)\n ))\n model.add(Dense(1))\n model.compile(loss = \"mean_squared_error\", \n optimizer = \"adam\")\n earlyStop=EarlyStopping(monitor=\"val_loss\",verbose=2,patience=15)\n model.fit(train_X, \n train_Y, \n epochs = 2500, \n batch_size = 1,\n shuffle = False\n # verbose = 2\n )\n \n return(model)\n\n# Fit the first model.\nmodel1 = fit_LSTM(train_X, train_Y, window)",
"Epoch 1/2500\n311/311 [==============================] - 7s 22ms/step - loss: 4.5241e-04\nEpoch 2/2500\n311/311 [==============================] - 2s 5ms/step - loss: 0.0029\nEpoch 3/2500\n311/311 [==============================] - 2s 5ms/step - loss: 0.0015\nEpoch 4/2500\n311/311 [==============================] - 2s 5ms/step - loss: 8.1852e-04\nEpoch 5/2500\n311/311 [==============================] - 2s 6ms/step - loss: 4.9183e-04\nEpoch 6/2500\n311/311 [==============================] - 2s 6ms/step - loss: 3.4003e-04\nEpoch 7/2500\n311/311 [==============================] - 2s 5ms/step - loss: 2.7356e-04\nEpoch 8/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4604e-04\nEpoch 9/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.3552e-04\nEpoch 10/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3250e-04\nEpoch 11/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.3289e-04\nEpoch 12/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.3478e-04\nEpoch 13/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.3719e-04\nEpoch 14/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.3959e-04\nEpoch 15/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4168e-04\nEpoch 16/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4330e-04\nEpoch 17/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4440e-04\nEpoch 18/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4500e-04\nEpoch 19/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.4515e-04\nEpoch 20/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.4492e-04\nEpoch 21/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4441e-04\nEpoch 22/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4369e-04\nEpoch 23/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4283e-04\nEpoch 24/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4187e-04\nEpoch 25/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.4088e-04\nEpoch 26/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.3986e-04\nEpoch 27/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.3885e-04\nEpoch 28/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3785e-04\nEpoch 29/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3687e-04\nEpoch 30/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3590e-04\nEpoch 31/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3496e-04\nEpoch 32/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3414e-04\nEpoch 33/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3326e-04\nEpoch 34/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3247e-04\nEpoch 35/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3169e-04\nEpoch 36/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.3102e-04\nEpoch 37/2500\n311/311 [==============================] - 2s 8ms/step - loss: 2.3011e-04\nEpoch 38/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2940e-04\nEpoch 39/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2855e-04\nEpoch 40/2500\n311/311 [==============================] - 
2s 6ms/step - loss: 2.2796e-04\nEpoch 41/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2727e-04\nEpoch 42/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2670e-04\nEpoch 43/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2592e-04\nEpoch 44/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2539e-04\nEpoch 45/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2466e-04\nEpoch 46/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2412e-04\nEpoch 47/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2348e-04\nEpoch 48/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2298e-04\nEpoch 49/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2236e-04\nEpoch 50/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2186e-04\nEpoch 51/2500\n311/311 [==============================] - 2s 6ms/step - loss: 2.2127e-04\nEpoch 52/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.2079e-04\nEpoch 53/2500\n311/311 [==============================] - 2s 7ms/step - loss: 2.2023e-04\nEpoch 54/2500\n311/311 [==============================] - 2s 8ms/step - loss: 2.1978e-04\nEpoch 55/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1930e-04\nEpoch 56/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1884e-04\nEpoch 57/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1837e-04\nEpoch 58/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1793e-04\nEpoch 59/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1749e-04\nEpoch 60/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1708e-04\nEpoch 61/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1664e-04\nEpoch 62/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1625e-04\nEpoch 63/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1581e-04\nEpoch 64/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1549e-04\nEpoch 65/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1509e-04\nEpoch 66/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1473e-04\nEpoch 67/2500\n311/311 [==============================] - 2s 8ms/step - loss: 2.1435e-04\nEpoch 68/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1403e-04\nEpoch 69/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1367e-04\nEpoch 70/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1335e-04\nEpoch 71/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1301e-04\nEpoch 72/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1274e-04\nEpoch 73/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1242e-04\nEpoch 74/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1213e-04\nEpoch 75/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1183e-04\nEpoch 76/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1154e-04\nEpoch 77/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1128e-04\nEpoch 78/2500\n311/311 [==============================] - 2s 8ms/step - loss: 2.1100e-04\nEpoch 79/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1073e-04\nEpoch 
80/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1048e-04\nEpoch 81/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1021e-04\nEpoch 82/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.1003e-04\nEpoch 83/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0977e-04\nEpoch 84/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0953e-04\nEpoch 85/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0928e-04\nEpoch 86/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0908e-04\nEpoch 87/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0884e-04\nEpoch 88/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0869e-04\nEpoch 89/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0846e-04\nEpoch 90/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0826e-04\nEpoch 91/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0807e-04\nEpoch 92/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0791e-04\nEpoch 93/2500\n311/311 [==============================] - 3s 8ms/step - loss: 2.0770e-04\nEpoch 94/2500\n"
],
[
"rmse_test, test_pred = prediction_score(model1, test_X, test_Y)\nprint(\"Testing score: %.2f RMSE\" % rmse_test)",
"Testing score: 0.31 RMSE\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecf92e71f84b6d7a469bfab917416f6927b718e6 | 80,894 | ipynb | Jupyter Notebook | 2.12 Example - Principal Components Analysis/2.12 Example - Principal Components Analysis.ipynb | sidraju/linear_algebra | 0a2b33249d5cb854725a075cf0eac7c05a245562 | [
"MIT"
] | 5 | 2021-02-16T10:30:38.000Z | 2021-11-08T09:30:22.000Z | 2.12 Example - Principal Components Analysis/2.12 Example - Principal Components Analysis.ipynb | sidraju/linear_algebra | 0a2b33249d5cb854725a075cf0eac7c05a245562 | [
"MIT"
] | null | null | null | 2.12 Example - Principal Components Analysis/2.12 Example - Principal Components Analysis.ipynb | sidraju/linear_algebra | 0a2b33249d5cb854725a075cf0eac7c05a245562 | [
"MIT"
] | 2 | 2020-05-09T00:49:56.000Z | 2021-08-28T07:24:46.000Z | 67.637124 | 9,194 | 0.730474 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"/Users/lsp/.virtualenvs/kaggle/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.\n warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')\n"
],
[
"# Plot style\nsns.set()\n%pylab inline\npylab.rcParams['figure.figsize'] = (4, 4)",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"%%html\n<style>\n.pquote {\n text-align: left;\n margin: 40px 0 40px auto;\n width: 70%;\n font-size: 1.5em;\n font-style: italic;\n display: block;\n line-height: 1.3em;\n color: #5a75a7;\n font-weight: 600;\n border-left: 5px solid rgba(90, 117, 167, .1);\n padding-left: 6px;\n}\n.notes {\n font-style: italic;\n display: block;\n margin: 40px 10%;\n}\nimg + em {\n text-align: center;\n display: block;\n color: gray;\n font-size: 0.9em;\n font-weight: 600;\n}\n</style>",
"_____no_output_____"
],
[
"def plotVectors(vecs, cols, alpha=1):\n \"\"\"\n Plot set of vectors.\n\n Parameters\n ----------\n vecs : array-like\n Coordinates of the vectors to plot. Each vectors is in an array. For\n instance: [[1, 3], [2, 2]] can be used to plot 2 vectors.\n cols : array-like\n Colors of the vectors. For instance: ['red', 'blue'] will display the\n first vector in red and the second in blue.\n alpha : float\n Opacity of vectors\n\n Returns:\n\n fig : instance of matplotlib.figure.Figure\n The figure of the vectors\n \"\"\"\n plt.axvline(x=0, color='#A9A9A9', zorder=0)\n plt.axhline(y=0, color='#A9A9A9', zorder=0)\n\n for i in range(len(vecs)):\n if (isinstance(alpha, list)):\n alpha_i = alpha[i]\n else:\n alpha_i = alpha\n x = np.concatenate([[0,0],vecs[i]])\n plt.quiver([x[0]],\n [x[1]],\n [x[2]],\n [x[3]],\n angles='xy', scale_units='xy', scale=1, color=cols[i],\n alpha=alpha_i)",
"_____no_output_____"
]
],
[
[
"$$\n\\newcommand\\norm[1]{\\left\\lVert#1\\right\\rVert} \n\\DeclareMathOperator{\\Tr}{Tr}\n\\newcommand\\bs[1]{\\boldsymbol{#1}}\n\\newcommand\\argmin[1]{\\underset{\\bs{#1}}{\\arg\\min}}\n\\newcommand\\argmax[1]{\\underset{\\bs{#1}}{\\arg\\max}}\n$$",
"_____no_output_____"
],
[
"<span class='notes'>\n This content is part of a series following the chapter 2 on linear algebra from the [Deep Learning Book](http://www.deeplearningbook.org/) by Goodfellow, I., Bengio, Y., and Courville, A. (2016). It aims to provide intuitions/drawings/python code on mathematical theories and is constructed as my understanding of these concepts. You can check the syllabus in the [introduction post](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-Introduction/).\n</span>",
"_____no_output_____"
],
[
"# Introduction\n\nThis chapter is the last chapter of this series on linear algebra! It is about Principal Components Analysis. We will use some knowledge that we acquired along the preceding chapters of the series to understand this important data analysis tool! Feel free to check out the preceding chapters!",
"_____no_output_____"
],
[
"# 2.12 Example - Principal Components Analysis\n",
"_____no_output_____"
],
[
"Dimensions are a crucial topic in data science. The dimensions are all the features of the dataset. For instance, if you are looking at a dataset containing pieces of music, dimensions could be the genre, the length of the piece, the number of instruments, the presence of a singer etc. You can imagine all these dimensions as different columns. When there is only two dimensions, it is very convenient to plot: you can use the $x$- and $y$-axis. Add color and you can represent a third dimension. It is similar if you have tens or hundereds of dimensions, it will just be harder to visualize it.\n\nWhen you have that many dimensions it happens that some of them are correlated. For instance, we can reasonably think that the genre dimension will correlate with the instruments dimensions in our previous example. One way to reduce dimensionality is simply to keep only some of them. The problem is that you loose good information. It would be nice to have a way to reduce these dimensions while keeping all the information present in the data set.\n\nThe aim of principal components analysis (PCA) is generaly to reduce the number of dimensions of a dataset where dimensions are not completly decorelated. PCA provides us with a new set of dimensions, the principal components (PC). They are ordered: the first PC is the dimension having the largest variance. In addition, each PC is orthogonal to the preceding one. Remember that orthogonal vectors means that their dot product is equal to $0$ (see [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)). This means that each PC is decorelated to the preceding one. It is way better than feature selection where you loose a lot of information.\n\n### Example 1.\n\nUnit vectors are an example of orthogonal vectors:\n\n<img src=\"images/orthogonal-vectors.png\" width=\"200\" alt=\"Example of orthogonal vectors\" title=\"Orthogonal vectors\">\n<em>Orthogonal vectors</em>\n",
"_____no_output_____"
],
[
"## Describing the problem\n\nThe problem can be expressed as finding a function that converts a set of data points from $\\mathbb{R}^n$ to $\\mathbb{R}^l$. This means that we change the number of dimensions of our dataset. We also need a function that can decode back from the transformed dataset to the initial one:\n\n<img src=\"images/principal-components-analysis-PCA-change-coordinates.png\" width=\"80%\" alt=\"Principal components analysis (PCA)\" title=\"Principal components analysis (PCA)\">\n<em>Principal components analysis as a change of coordinate system</em>\n\nThe first step is to understand the shape of the data. $x^{(i)}$ is one data point containing $n$ dimensions. Let's have $m$ data points organized as column vectors (one column per point):\n\n$$\n\\bs{x}=\n\\begin{bmatrix}\n x^{(1)} & x^{(2)} & \\cdots & x^{(m)}\n\\end{bmatrix}\n$$\n\nIf we deploy the $n$ dimensions of our data points we will have:\n\n$$\n\\bs{x}=\n\\begin{bmatrix}\n x_1^{(1)} & x_1^{(2)} & \\cdots & x_1^{(m)}\\\\\\\\\n x_2^{(1)} & x_2^{(2)} & \\cdots & x_2^{(m)}\\\\\\\\\n \\cdots & \\cdots & \\cdots & \\cdots\\\\\\\\\n x_n^{(1)} & x_n^{(2)} & \\cdots & x_n^{(m)}\n\\end{bmatrix}\n$$\n\nWe can also write:\n\n$$\n\\bs{x}=\n\\begin{bmatrix}\n x_1\\\\\\\\\n x_2\\\\\\\\\n \\cdots\\\\\\\\\n x_n\n\\end{bmatrix}\n$$\n\n$c$ will have the shape:\n\n$$\n\\bs{c}=\n\\begin{bmatrix}\n c_1\\\\\\\\\n c_2\\\\\\\\\n \\cdots\\\\\\\\\n c_l\n\\end{bmatrix}\n$$",
"_____no_output_____"
],
[
"## Adding some constraints: the decoding function\n\nThe encoding function $f(\\bs{x})$ transforms $\\bs{x}$ into $\\bs{c}$ and the decoding function transforms back $\\bs{c}$ into an approximation of $\\bs{x}$. To keep things simple, PCA will respect some constraints:\n\n### Constraint 1.\n\nThe decoding function has to be a simple matrix multiplication:\n\n$$\ng(\\bs{c})=\\bs{Dc}\n$$\n\nBy applying the matrix $\\bs{D}$ to the dataset from the new coordinates system we should get back to the initial coordinate system.\n\n### Constraint 2.\n\nThe columns of $\\bs{D}$ must be orthogonal (see [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)).\n\n### Constraint 3.\n\nThe columns of $\\bs{D}$ must have unit norm (see [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)).",
"_____no_output_____"
],
[
"## Finding the encoding function\n\nImportant: For now we will consider only **one data point**. Thus we will have the following dimensions for these matrices (note that $\\bs{x}$ and $\\bs{c}$ are column vectors):\n\n<img src=\"images/principal-components-analysis-PCA-decoding-function.png\" width=\"250\" alt=\"Principal components analysis (PCA) - the decoding function\" title=\"The decoding function\">\n<em>The decoding function</em>\n\nWe want a decoding function which is a simple matrix multiplication. For that reason, we have $g(\\bs{c})=\\bs{Dc}$. We will then find the encoding function from the decoding function. We want to minimize the error between the decoded data point and the actual data point. With our previous notation, this means reducing the distance between $\\bs{x}$ and $g(\\bs{c})$. As an indicator of this distance, we will use the squared $L^2$ norm (see [2.5](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.5-Norms/)):\n\n$$\n\\norm{\\bs{x} - g(\\bs{c})}_2^2\n$$\n\nThis is what we want to minimize. Let's call $\\bs{c}^*$ the optimal $\\bs{c}$. Mathematically it can be written:\n\n$$\n\\bs{c}^* = \\underset{c}{\\arg\\min} \\norm{\\bs{x} - g(\\bs{c})}_2^2\n$$\n\nThis means that we want to find the values of the vector $\\bs{c}$ such that $\\norm{\\bs{x} - g(\\bs{c})}_2^2$ is as small as possible.\n\nIf you have a look back to [2.5](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.5-Norms/) you can see that the squared $L^2$ norm can be expressed as:\n\n$$\n\\norm{\\bs{y}}_2^2 = \\bs{y}^\\text{T}\\bs{y}\n$$\n\nWe have named the variable $\\bs{y}$ to avoid confusion with our $\\bs{x}$. Here $\\bs{y}=\\bs{x} - g(\\bs{c})$",
"_____no_output_____"
],
[
"Thus the equation that we want to minimize becomes:\n\n$$\n(\\bs{x} - g(\\bs{c}))^\\text{T}(\\bs{x} - g(\\bs{c}))\n$$\n\nSince the transpose respects addition we have:\n\n$$\n(\\bs{x}^\\text{T} - g(\\bs{c})^\\text{T})(\\bs{x} - g(\\bs{c}))\n$$\n\nBy the distributive property (see [2.2](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.2-Multiplying-Matrices-and-Vectors/)) we can develop:\n\n$$\n\\bs{x^\\text{T}x} - \\bs{x}^\\text{T}g(\\bs{c}) - g(\\bs{c})^\\text{T}\\bs{x} + g(\\bs{c})^\\text{T}g(\\bs{c})\n$$\n\nThe commutative property (see [2.2](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.2-Multiplying-Matrices-and-Vectors/)) tells us that $\n\\bs{x^\\text{T}y} = \\bs{y^\\text{T}x}\n$. We can use that in the previous equation: we have $\n\\bs{x}^\\text{T}g(\\bs{c}) = g(\\bs{c})^\\text{T}\\bs{x}\n$. So the equation becomes:\n\n$$\n\\bs{x^\\text{T}x} -2\\bs{x}^\\text{T}g(\\bs{c}) + g(\\bs{c})^\\text{T}g(\\bs{c})\n$$\n\nThe first term $\\bs{x^\\text{T}x}$ does not depends on $\\bs{c}$ and since we want to minimize the function according to $\\bs{c}$ we can just get off this term. We simplify to:\n\n$$\n\\bs{c}^* = \\underset{c}{\\arg\\min} -2\\bs{x}^\\text{T}g(\\bs{c}) + g(\\bs{c})^\\text{T}g(\\bs{c})\n$$\n\nSince $g(\\bs{c})=\\bs{Dc}$:\n\n$$\n\\bs{c}^* = \\underset{c}{\\arg\\min} -2\\bs{x}^\\text{T}\\bs{Dc} + (\\bs{Dc})^\\text{T}\\bs{Dc}\n$$\n\nWith $(\\bs{Dc})^\\text{T}=\\bs{c}^\\text{T}\\bs{D}^\\text{T}$ (see [2.2](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.2-Multiplying-Matrices-and-Vectors/)), we have:\n\n$$\n\\bs{c}^* = \\underset{c}{\\arg\\min} -2\\bs{x}^\\text{T}\\bs{Dc} + \\bs{c}^\\text{T}\\bs{D}^\\text{T}\\bs{Dc}\n$$\n\nAs we saw in [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/), $\\bs{D}^\\text{T}\\bs{D}=\\bs{I}_l$ because $\\bs{D}$ is orthogonal (actually, it is [semi-orthogonal](https://en.wikipedia.org/wiki/Semi-orthogonal_matrix) if $n \\neq l$) and their columns have unit norm. We can replace in the equation:\n\n$$\n\\bs{c}^* = \\underset{c}{\\arg\\min} -2\\bs{x}^\\text{T}\\bs{Dc} + \\bs{c}^\\text{T}\\bs{I}_l\\bs{c}\n$$\n\n$$\n\\bs{c}^* = \\underset{c}{\\arg\\min} -2\\bs{x}^\\text{T}\\bs{Dc} + \\bs{c}^\\text{T}\\bs{c}\n$$",
"_____no_output_____"
],
[
"### Minimizing the function\n\nSo far so good! Now the goal is to find the minimum of the function $- 2\\bs{x}^\\text{T}\\bs{Dc} + \\bs{c}^\\text{T}\\bs{c}$. One widely used way of doing that is to use the **gradient descent** algorithm. It is not the focus of this chapter but we will say a word about it (see [4.3](http://www.deeplearningbook.org/contents/numerical.html) of the Deep Learning Book for more details). The main idea is that the sign of the derivative of the function at a specific value of $x$ tells you if you need to increase or decrease $x$ to reach the minimum. When the slope is near $0$, the minimum should have been reached.\n\n<img src=\"images/gradient-descent.png\" width=\"400\" alt=\"Mechanism of the gradient descent algorithm\" title=\"Mechanism of the gradient descent algorithm\">\n<em>Gradient descent</em>\n\nHowever, functions with local minima can trouble the descent:\n\n<img src=\"images/gradient-descent-local-minima.png\" width=\"400\" alt=\"Gradient descent in the case of local minimum\" title=\"Gradient descent\">\n<em>Gradient descent can get stuck in local minima</em>\n\nThese examples are in 2 dimensions but the principle stands for higher dimensional functions. The gradient is a vector containing the partial derivatives of all dimensions. Its mathematical notation is $\\nabla_xf(\\bs{x})$.",
"_____no_output_____"
],
[
"### Calculating the gradient of the function\n\nHere we want to minimize through each dimension of $\\bs{c}$. We are looking for a slope of $0$. The equation is:\n\n$$\n\\nabla_c(-2\\bs{x}^\\text{T}\\bs{Dc} + \\bs{c}^\\text{T}\\bs{c})=0\n$$\n\nLet's take these terms separately to calculate the derivative according to $\\bs{c}$. \n\n$$\n\\frac{d(-2\\bs{x}^\\text{T}\\bs{Dc})}{d\\bs{c}} = -2\\bs{x}^\\text{T}\\bs{D}\n$$\n\nThe second term is $\\bs{c}^\\text{T}\\bs{c}$. We can develop the vector $\\bs{c}$ and calculate the derivative for each element:\n\n$$\n\\begin{align*}\n\\frac{d(\\bs{c}\\text{T}\\bs{c})}{d\\bs{c}} &=\n\\left(\\frac{d(\\bs{c}_1^2 + \\bs{c}_2^2 + \\cdots + \\bs{c}_l^2)}{d\\bs{c}_1},\n\\frac{d(\\bs{c}_1^2 + \\bs{c}_2^2 + \\cdots + \\bs{c}_l^2)}{d\\bs{c}_2},\n\\cdots,\n\\frac{d(\\bs{c}_1^2 + \\bs{c}_2^2 + \\cdots + \\bs{c}_l^2)}{d\\bs{c}_l}\\right) \\\\\\\\\n&=(2\\bs{c}_1, 2\\bs{c}_2, \\cdots, 2\\bs{c}_l)\\\\\\\\\n&=2(\\bs{c}_1, \\bs{c}_2, \\cdots, \\bs{c}_l)\\\\\\\\\n&=2\\bs{c}\n\\end{align*}\n$$\n\nSo we can progress in our derivatives:\n\n$$\n\\nabla_c(-2\\bs{x}^\\text{T}\\bs{Dc} + \\bs{c}^\\text{T}\\bs{c})=0\\\\\\\\\n-2\\bs{x}^\\text{T}\\bs{D} + 2\\bs{c}=0\\\\\\\\\n-2\\bs{D}^\\text{T}\\bs{x} + 2\\bs{c}=0\\\\\\\\\n\\bs{c}=\\bs{D}^\\text{T}\\bs{x}\n$$\n\nGreat! We found the encoding function! Here are its dimensions:\n\n<img src=\"images/principal-components-analysis-PCA-encoding-function.png\" width=\"250\" alt=\"Expression of the encoding function\" title=\"The encoding function\">\n<em>The encoding function</em>\n\nTo go back from $\\bs{c}$ to $\\bs{x}$ we use $g(\\bs{c})=\\bs{Dc}$:\n\n$$\nr(\\bs{x}) = g(f(\\bs{x})=\\bs{D}\\bs{D}^\\text{T}\\bs{x}\n$$\n\n<img src=\"images/principal-components-analysis-PCA-reconstruction-function.png\" width=\"300\" alt=\"Expression of the reconstruction function\" title=\"The reconstruction function\">\n<em>The reconstruction function</em>",
"_____no_output_____"
],
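[
"As a quick sanity check of these two functions, here is a minimal NumPy sketch. The matrix used for $\\bs{D}$ below is simply a random matrix with orthonormal columns obtained from a QR decomposition, not the optimal one we are looking for:\n\n```python\nimport numpy as np\n\nnp.random.seed(0)\nn, l = 3, 2 # number of original and of reduced dimensions\nD = np.linalg.qr(np.random.randn(n, l))[0] # columns are orthonormal\nx = np.random.randn(n, 1) # one data point as a column vector\n\nc = D.T.dot(x) # encoding: c = D^T x\nx_rec = D.dot(c) # reconstruction: r(x) = D D^T x\nprint(np.allclose(D.T.dot(D), np.eye(l))) # True: D^T D = I_l\nprint(x_rec.shape) # (3, 1): same shape as x, but in general only an approximation\n```",
"_____no_output_____"
],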
[
"## Finding $\\bs{D}$\n\nThe next step is to find the matrix $\\bs{D}$. Recall that the purpose of the PCA is to change the coordinate system in order to maximize the variance along the first dimensions of the projected space. This is equivalent to minimizing the error between data points and their reconstruction ([cf here](https://stats.stackexchange.com/questions/32174/pca-objective-function-what-is-the-connection-between-maximizing-variance-and-m)). See bellow the covariance matrix to have more details.\n\n<span class='pquote'>\n Maximizing the variance corresponds to minimizing the error of the reconstruction.\n</span>",
"_____no_output_____"
],
[
"### The Frobenius norm\n\nSince we have to take all points into account (the same matrix $\\bs{D}$ will be used for all points) we will use the Frobenius norm of the errors (see [2.5](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.5-Norms/)) which is the equivalent of the $L^2$ norm for matrices. Here the formula of the Frobenius norm:\n\n$$\n\\norm{\\bs{A}}_F=\\sqrt{\\sum_{i,j}A^2_{i,j}}\n$$\n\nIt is like if you unroll the matrix to end up with a one dimensional vector and that you take the $L^2$ norm of this vector.\n\nWe will call $\\bs{D}^*$ the optimal $\\bs{D}$ (in the sense that the error is as small as possible). We have:\n\n$$\n\\bs{D}^* = \\underset{\\bs{D}}{\\arg\\min} \\sqrt{\\sum_{i,j}(x_j^{(i)}-r(\\bs{x}^{(i)})_j})^2\n$$\n\nWith the constraint that $\\bs{D}^\\text{T}\\bs{D}=\\bs{I}_l$ because we have chosen the constraint of having the columns of $\\bs{D}$ orthogonal.",
"_____no_output_____"
],
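[
"A quick way to convince yourself of this 'unrolling' interpretation is to compare NumPy's Frobenius norm with the $L^2$ norm of the flattened matrix (the matrix $\\bs{A}$ below is just an arbitrary example):\n\n```python\nimport numpy as np\n\nA = np.array([[1., 2.], [3., 4.]])\nprint(np.linalg.norm(A)) # Frobenius norm (the default for matrices)\nprint(np.sqrt(np.sum(A**2))) # unroll, square, sum and take the square root\nprint(np.sqrt(np.trace(A.dot(A.T)))) # sqrt(Tr(AA^T)), the identity used with the Trace operator below\n```\n\nAll three lines print the same value (about $5.48$).",
"_____no_output_____"
],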
[
"### The first principal component\n\nWe will start to find only the first principal component (PC). For that reason, we will have $l=1$. So the matrix $\\bs{D}$ will have the shape $(n \\times 1)$: it is a simple column vector. Since it is a vector we will call it $\\bs{d}$:\n\n<img src=\"images/first-principal-component.png\" width=\"100\" alt=\"Dimension of the first principal component\" title=\"The first principal component\">\n<em>The first principal component</em>\n\nWe can therefore remove the sum over $j$ and the square root since we will take the squared $L^2$ norm:\n\n$$\n\\bs{d}^* = \\underset{\\bs{d}}{\\arg\\min} \\sum_{i}\\norm{(\\bs{x}^{(i)}-r(\\bs{x}^{(i)}))}_2^2\n$$\n\n\nWe have also seen that:\n\n$$\nr(\\bs{x})=\\bs{D}\\bs{D}^\\text{T}\\bs{x}\n$$\n\nSince we are looking only for the first PC:\n\n$$\nr(\\bs{x})=\\bs{d}\\bs{d}^\\text{T}\\bs{x}\n$$\n\nWe can plug $r(\\bs{x})$ into the equation:\n\n$$\n\\bs{d}^* = \\underset{\\bs{d}}{\\arg\\min} \\sum_{i}\\norm{\\bs{x}^{(i)}-\\bs{dd}^\\text{T}\\bs{x}^{(i)}}_2^2\n$$\n\nBecause of the constraint 3. (the columns of $\\bs{D}$ have unit norms) we have $\\norm{\\bs{d}}_2 = 1$. $\\bs{d}$ is one of the columns of $\\bs{D}$ and thus has a unit norm.\n\n\nInstead of using the sum along the $m$ data points $\\bs{x}$ we can have the matrix $\\bs{X}$ which gather all the observations:\n\n$$\n\\bs{X} = \\begin{bmatrix}\n \\bs{x}^{(1)\\text{T}}\\\\\\\\\n \\bs{x}^{(2)\\text{T}}\\\\\\\\\n \\cdots\\\\\\\\\n \\bs{x}^{(m)\\text{T}}\n\\end{bmatrix}=\n\\begin{bmatrix}\n \\bs{x}_1^{(1)} & \\bs{x}_2^{(1)} & \\cdots & \\bs{x}_n^{(1)}\\\\\\\\\n \\bs{x}_1^{(2)} & \\bs{x}_2^{(2)} & \\cdots & \\bs{x}_n^{(2)}\\\\\\\\\n \\cdots & \\cdots & \\cdots & \\cdots\\\\\\\\\n \\bs{x}_0^{(m)} & \\bs{x}_1^{(m)} & \\cdots & \\bs{x}_n^{(m)}\n\\end{bmatrix}\n$$\n\nWe want $\\bs{x}^{(i)\\text{T}}$ instead of $\\bs{x}^{(i)}$ in our expression of $\\bs{d}^*$. We can transpose the content of the norm:\n\n$$\n\\begin{align*}\n\\bs{d}^* &= \\underset{\\bs{d}}{\\arg\\min} \\sum_{i}\\norm{(\\bs{x}^{(i)}-\\bs{dd}^\\text{T}\\bs{x}^{(i)})^\\text{T}}_2^2\\\\\\\\\n&=\\underset{\\bs{d}}{\\arg\\min} \\sum_{i}\\norm{\\bs{x}^{(i)\\text{T}}-\\bs{x}^{(i)\\text{T}}\\bs{dd}^\\text{T}}_2^2\\\\\\\\\n\\end{align*}\n$$\n\nand\n\n$$\n\\bs{d}^* = \\underset{\\bs{d}}{\\arg\\min} \\norm{\\bs{X}-\\bs{X}\\bs{dd}^\\text{T}}_\\text{F}^2\n$$\n\nwith the constraint that $\\bs{dd}^\\text{T}=1$.",
"_____no_output_____"
],
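[
"To make this objective more concrete, here is a small sketch (with its own toy data, not the dataset used later in this chapter) that evaluates the squared Frobenius error of the reconstruction for a few candidate unit vectors $\\bs{d}$; the best direction is simply the one giving the smallest error:\n\n```python\nimport numpy as np\n\nnp.random.seed(1)\nX = np.random.randn(100, 2).dot(np.array([[3., 2.], [0., 1.]])) # toy correlated data\n\ndef reconstruction_error(X, d):\n    d = d.reshape(-1, 1) / np.linalg.norm(d) # enforce the unit norm constraint\n    return np.linalg.norm(X - X.dot(d).dot(d.T))**2\n\nfor d in [np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.])]:\n    print(reconstruction_error(X, d))\n```",
"_____no_output_____"
],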
[
"### Using the Trace operator\n\nWe will now use the Trace operator (see [2.10](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.10-The-Trace-Operator/)) to simplify the equation to minimize. Recall that:\n\n$$\n\\norm{\\bs{A}}_F=\\sqrt{\\Tr({\\bs{AA}^T})}\n$$\n\nSo here $\\bs{A}=\\bs{X}-\\bs{X}\\bs{dd}^\\text{T}$. So we have:\n\n$$\n\\bs{d}^* = \\underset{\\bs{d}}{\\arg\\min} \\Tr{((\\bs{X}-\\bs{Xdd}^\\text{T})}(\\bs{X}-\\bs{Xdd}^\\text{T})^\\text{T})\n$$\n\nSince we can cycle the order of the matrices in a Trace (see [2.10](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.10-The-Trace-Operator/)) we can write:\n\n$$\n\\begin{align*}\n\\bs{d}^* &= \\argmin{d} \\Tr{((\\bs{X}-\\bs{Xdd}^\\text{T})^\\text{T}}(\\bs{X}-\\bs{Xdd}^\\text{T}))\\\\\\\\\n&=\\argmin{d} \\Tr{((\\bs{X}^\\text{T}-(\\bs{Xdd}^\\text{T})^\\text{T})}(\\bs{X}-\\bs{Xdd}^\\text{T}))\n\\end{align*}\n$$\n\nAnd $(\\bs{Xdd}^\\text{T})^\\text{T}=(\\bs{d}^\\text{T})^\\text{T}\\bs{d}^\\text{T}\\bs{X}^\\text{T}=\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}$. Let's plug that into our equation:",
"_____no_output_____"
],
[
"\n$$\n\\begin{align*}\n\\bs{d}^* &= \\argmin{d} \\Tr{(\\bs{X}^\\text{T}-\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T})}(\\bs{X}-\\bs{Xdd}^\\text{T}))\\\\\\\\\n&= \\argmin{d} \\Tr{(\\bs{X}^\\text{T}\\bs{X}-\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T} -\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{X} +\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T}})\\\\\\\\\n&= \\argmin{d} \\Tr{(\\bs{X}^\\text{T}\\bs{X})} - \\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\n- \\Tr{(\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{X})} + \\Tr{(\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\n\\end{align*}\n$$\n\nWe can remove the first term that not depends on $d$:\n\n$$\n\\bs{d}^* = \\argmin{d} - \\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\n- \\Tr{(\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{X})} + \\Tr{(\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\n$$\n\nStill because of the cycling property of a trace, we have\n\n$$\n\\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})} = \\Tr{(\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{X})}\n$$\n\nWe can simplify to:\n\n$$\n\\bs{d}^* = \\argmin{d} -2\\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\n + \\Tr{(\\bs{d}\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\n$$\n\nand then\n\n$$\n\\bs{d}^* = \\argmin{d} -2\\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\n + \\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T}\\bs{d}\\bs{d}^\\text{T})}\n$$\n\nBecause of the constraint $\\bs{dd}^\\text{T}=1$:\n\n$$\n\\begin{align*}\n\\bs{d}^* &= \\argmin{d} -2\\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\n + \\Tr{(\\bs{X}^\\text{T}\\bs{Xd}\\bs{d}^\\text{T})}\\textrm{ subject to }\\bs{dd}^\\text{T}=1\\\\\\\\\n&= \\argmin{d} -\\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\\textrm{ subject to }\\bs{dd}^\\text{T}=1\\\\\\\\\n&=\\argmax{d} \\Tr{(\\bs{X}^\\text{T}\\bs{Xdd}^\\text{T})}\\textrm{ subject to }\\bs{dd}^\\text{T}=1\n\\end{align*}\n$$\n\nand with the cycling property:\n\n$$\n\\bs{d}^* = \\argmax{d} \\Tr{(\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{Xd})} \\textrm{ subject to }\\bs{dd}^\\text{T}=1\n$$",
"_____no_output_____"
],
[
"### Eigendecomposition\n\nWe will see that we can find the maximum of the function by calculating the eigenvectors of $\\bs{X^\\text{T}X}$.\n\n\n### Covariance matrix\n\nAs we wrote above, the optimization problem of maximizing the variance of the components and minimizing the error between the reconstructed and the actual data are equivalent. Actually, if you look at the formula of $\\bs{d}$ you can see that there is the term $\\bs{X^\\text{T}X}$ in the middle.\n\nIf we have centered our data around 0 (see bellow for more details about centering), $\\bs{X^\\text{T}X}$ is the covariance matrix (see [this Quora question](https://www.quora.com/Why-do-we-need-to-center-the-data-for-Principle-Components-Analysis)).\n\nThe covariance matrix is a $n$ by $n$ matrix ($n$ being the number of dimensions). Its diagonal is the variance of the corresponding dimensions and the other cells are the covariance between the two corresponding dimensions (the amount of redundancy).\n\nThis means that the largest covariance we have between two dimensions the more redundancy exists between these dimensions. This also means that the best-fit line is associated with small errors if the covariance is hight. To maximize the variance and minimize the covariance (in order to decorrelate the dimensions) means that the ideal covariance matrix is a diagonal matrix (non-zero values in the diagonal only). Therefore the diagonalization of the covariance matrix will give us the optimal solution.",
"_____no_output_____"
],
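[
"As a small numerical sketch of this relationship (with its own toy data, since our dataset is only created below), you can check that, once the data is centered, $\\bs{X^\\text{T}X}$ divided by the number of points matches NumPy's biased covariance matrix:\n\n```python\nimport numpy as np\n\nnp.random.seed(2)\nX = np.random.randn(100, 2) * np.array([1., 3.]) # toy data: 100 points, 2 dimensions\nXc = X - X.mean(axis=0) # center each column around 0\n\nprint(Xc.T.dot(Xc) / len(Xc)) # X^T X divided by the number of points\nprint(np.cov(Xc, rowvar=False, bias=True)) # the same matrix\n```",
"_____no_output_____"
],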
[
"### Example 2.\n\nAs an example we will create again a 2D data set (like in [2.9](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.9-The-Moore-Penrose-Pseudoinverse/)). To see the effect of the PCA we will introduce some correlations between the two dimensions. Let's create 100 data points with 2 dimensions:",
"_____no_output_____"
]
],
[
[
"np.random.seed(123)\nx = 5*np.random.rand(100)\ny = 2*x + 1 + np.random.randn(100)\n\nx = x.reshape(100, 1)\ny = y.reshape(100, 1)\n\nX = np.hstack([x, y])\nX.shape",
"_____no_output_____"
]
],
[
[
"Let's plot the data:",
"_____no_output_____"
]
],
[
[
"plt.plot(X[:,0], X[:,1], '*')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Highly correlated data means that the dimensions are redundant. It is possible to predict one from the other without losing much information.\n\nThe first processing we will do is to center the data around 0. PCA is a regression model without intercept (see [here](https://stats.stackexchange.com/questions/22329/how-does-centering-the-data-get-rid-of-the-intercept-in-regression-and-pca)) and the first component is thus necessarly crossing the origin.\n\nHere is a simple function that substract the mean of each column to each data point of this column. It can be used to center the data points around 0.",
"_____no_output_____"
]
],
[
[
"def centerData(X):\n X = X.copy()\n X -= np.mean(X, axis = 0)\n return X",
"_____no_output_____"
]
],
[
[
"So let's center our data $\\bs{X}$ around 0 for both dimensions:",
"_____no_output_____"
]
],
[
[
"X_centered = centerData(X)\nplt.plot(X_centered[:,0], X_centered[:,1], '*')\nplt.show()",
"_____no_output_____"
]
],
[
[
"That's better!\n\nWe can now look for PCs. We saw that they correspond to values taken by $\\bs{d}$ that maximize the following function:\n\n$$\n\\bs{d}^* = \\argmax{d} \\Tr{(\\bs{d}^\\text{T}\\bs{X}^\\text{T}\\bs{Xd})} \\textrm{ subject to }\\bs{dd}^\\text{T}=1\n$$\n\nTo find $\\bs{d}$ we can calculate the eigenvectors of $\\bs{X^\\text{T}X}$ (see [2.7](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.7-Eigendecomposition/) for more details about eigendecomposition). So let's do that:",
"_____no_output_____"
]
],
[
[
"eigVals, eigVecs = np.linalg.eig(X_centered.T.dot(X_centered)/100)\neigVecs",
"_____no_output_____"
],
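[
"# Extra check (an addition, not part of the original analysis): np.linalg.eig does not\n# guarantee any ordering of the eigenvalues, so it is safer to sort the eigenvectors\n# by decreasing eigenvalue. Here the largest one simply ends up in the last column.\norder = np.argsort(eigVals)[::-1]\neigVals[order], eigVecs[:, order]",
"_____no_output_____"
],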
[
"print np.linalg.eig(X_centered.T.dot(X_centered))\nprint np.linalg.eig(X_centered.T.dot(X_centered)/100)",
"(array([ 18.04730409, 798.35242844]), array([[-0.91116273, -0.41204669],\n [ 0.41204669, -0.91116273]]))\n(array([ 0.18047304, 7.98352428]), array([[-0.91116273, -0.41204669],\n [ 0.41204669, -0.91116273]]))\n"
]
],
[
[
"These are the vectors maximizing our function. Each column vector is associated with an eigenvalue. The vector associated with the larger eigenvalue tells us the direction associated with the larger variance in our data. To check that, we will plot these vectors along with the data. ",
"_____no_output_____"
]
],
[
[
"orange = '#FF9A13'\nblue = '#1190FF'\nplotVectors(eigVecs.T, [orange, blue])\nplt.plot(X_centered[:,0], X_centered[:,1], '*')\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can see that the blue vector direction corresponds to the oblique shape of our data. The idea is that if you project the data points on the line corresponding to the blue vector direction you will end up with the largest variance. This vector has the direction that maximizes variance of projected data. Have a look at the following figure:\n\n<img src=\"images/principal-component-analysis-variance-explained.png\" width=\"400\" alt=\"Representation of the variance explained across directions\" title=\"Maximizing the variance\">\n<em>Projection of the data point: this line direction is the one with the largest variance</em>\n\nWhen you project data points on the pink line there is more variance. This line has the direction that maximizes the variance of the data points. It is the same for the figure above: our blue vector has the direction of the line where data point projection has the higher variance. Then the second eigenvector is orthogonal to the first.\n\nIn our figure above, the blue vector is the second eigenvector so let's check that it is the one associated with the bigger eigenvalue:",
"_____no_output_____"
]
],
[
[
"eigVals",
"_____no_output_____"
]
],
[
[
"So yes, the second vector corresponds to the biggest eigenvalue.\n\nNow that we have found the matrix $\\bs{d}$ we will use the encoding function to rotate the data. The goal of the rotation is to end up with a new coordinate system where data is uncorrelated and thus where the basis axes gather all the variance. It is then possible to keep only few axes: this is the purpose of dimensionality reduction.\n\nRecall that the encoding function is:\n\n$$\n\\bs{c}=\\bs{D}^\\text{T}\\bs{x}\n$$\n\n$\\bs{D}$ is the matrix containing the eigenvectors that we have calculated before. In addition, this formula corresponds to only one data point where dimensions are the rows of $\\bs{x}$. In our case, we will apply it to all data points and since $\\bs{X}$ has dimensions on the columns we need to transpose it.",
"_____no_output_____"
]
],
[
[
"X_new = eigVecs.T.dot(X_centered.T)\n\nplt.plot(eigVecs.T.dot(X_centered.T)[0, :], eigVecs.T.dot(X_centered.T)[1, :], '*')\nplt.xlim(-5, 5)\nplt.ylim(-5, 5)\nplt.show()",
"_____no_output_____"
]
],
[
[
"It worked! The rotation transformed our dataset that have now the more variance on one of the basis axis. You could keep only this dimension and have a fairly good representation of the data.",
"_____no_output_____"
],
[
"### About the unit norm constraint\n\nWe saw that the maximization is subject to $\\bs{dd}^\\text{T}=1$. This means that the solution vector has to be a unit vector. Without this constraint, you could scale $\\bs{d}$ up to the infinity to increase the function to maximize (see [here](https://stats.stackexchange.com/questions/117695/why-is-the-eigenvector-in-pca-taken-to-be-unit-norm)). For instance, let's see some vectors $\\bs{x}$ that could maximize the function:",
"_____no_output_____"
]
],
[
[
"d = np.array([[12], [26]])\nd.T.dot(X.T).dot(X).dot(d)",
"_____no_output_____"
]
],
[
[
"However this $\\bs{d}$ has not a unit norm (since $\\bs{d}$ is a column vector we use the transpose of $\\bs{dd}^\\text{T}$ (see [2.2](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.2-Multiplying-Matrices-and-Vectors/)):",
"_____no_output_____"
]
],
[
[
"d.T.dot(d)",
"_____no_output_____"
]
],
[
[
"The eigenvectors have unit norm and thus respect the constraint:",
"_____no_output_____"
]
],
[
[
"eigVecs[:,0].dot(eigVecs[:,0].T)",
"_____no_output_____"
]
],
[
[
"and",
"_____no_output_____"
]
],
[
[
"eigVecs[:,1].dot(eigVecs[:,1].T)",
"_____no_output_____"
]
],
[
[
"And... This is the end! We have gone through a lot of things during this series on linear algebra! I hope that it was a useful introduction to this topic which is of large importance in the data science/machine learning/deep learning fields.",
"_____no_output_____"
],
[
"<span class='notes'>\n Feel free to drop me an email or a comment. The syllabus of this series can be found [in the introduction post](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-Introduction/). All the notebooks can be found on [Github](https://github.com/hadrienj/deepLearningBook-Notes).\n</span>",
"_____no_output_____"
],
[
"# References\n\n## PCA\n\n- A lot of intuitive explanations on PCA: https://arxiv.org/pdf/1404.1100.pdf\n\n- https://brilliant.org/wiki/principal-component-analysis/#from-approximate-equality-to-minimizing-function\n\n- http://www4.ncsu.edu/~slrace/LinearAlgebra2017/Slides/PCAPrint.pdf\n\n- https://towardsdatascience.com/a-one-stop-shop-for-principal-component-analysis-5582fb7e0a9c\n\n- https://www.cs.bgu.ac.il/~inabd171/wiki.files/lecture14_handouts.pdf\n\n## Semi-orthogonal matrix\n\n- https://en.wikipedia.org/wiki/Semi-orthogonal_matrix\n\n## Intuition about PCA\n\n- https://georgemdallas.wordpress.com/2013/10/30/principal-component-analysis-4-dummies-eigenvectors-eigenvalues-and-dimension-reduction/\n\n## Derivatives\n\n- https://math.stackexchange.com/questions/1377764/derivative-of-vector-and-vector-transpose-product\n\n## Link between variance maximized and error minimized:\n\n- https://stats.stackexchange.com/questions/130721/what-norm-of-the-reconstruction-error-is-minimized-by-the-low-rank-approximation\n\n- https://stats.stackexchange.com/questions/32174/pca-objective-function-what-is-the-connection-between-maximizing-variance-and-m\n\n- https://stats.stackexchange.com/questions/318625/why-do-the-leading-eigenvectors-of-a-maximize-texttrdtad\n\n## Centering data\n\n- https://www.quora.com/Why-do-we-need-to-center-the-data-for-Principle-Components-Analysis\n- https://stats.stackexchange.com/questions/22329/how-does-centering-the-data-get-rid-of-the-intercept-in-regression-and-pca\n\n## Unit norm constraint\n\n- https://stats.stackexchange.com/questions/117695/why-is-the-eigenvector-in-pca-taken-to-be-unit-norm",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
ecf930530ef637d0d04a5ea36d88cd21e4667e1a | 716,296 | ipynb | Jupyter Notebook | Time_series_bit_coin_USING_ARIMA.ipynb | muli2487/Twitter-Sentiment-Analysis-and-Bitcoin-Stock-Prediction | 8201a0360b97fed86eb012499a7e0223a092f474 | [
"MIT"
] | null | null | null | Time_series_bit_coin_USING_ARIMA.ipynb | muli2487/Twitter-Sentiment-Analysis-and-Bitcoin-Stock-Prediction | 8201a0360b97fed86eb012499a7e0223a092f474 | [
"MIT"
] | null | null | null | Time_series_bit_coin_USING_ARIMA.ipynb | muli2487/Twitter-Sentiment-Analysis-and-Bitcoin-Stock-Prediction | 8201a0360b97fed86eb012499a7e0223a092f474 | [
"MIT"
] | null | null | null | 301.21783 | 114,494 | 0.85346 | [
[
[
"<a href=\"https://colab.research.google.com/github/muli2487/Twitter-Sentiment-Analysis-and-Bitcoin-Stock-Prediction/blob/master/Time_series_bit_coin_USING_ARIMA.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"# First, import the relevant modules\nimport requests\nimport json",
"_____no_output_____"
],
[
"# Now, call the Quandl API and pull out a small sample of the data (only one day) to get a glimpse\n# into the JSON structure that will be returned\nr= requests.get('https://www.quandl.com/api/v3/datasets/BCHAIN/MKPRU.json?api_key=Lq43ztbiWJ73CJUDPiye&start_date=2016-01-01&end_date=2020-2-29')\nprint(r.status_code)",
"200\n"
],
[
"dict = r.json()",
"_____no_output_____"
],
[
"#explore the structure of the dictionary\nfor key, value in dict.items() :\n print (key)",
"dataset\n"
],
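[
"# Sketch of a possible next step (an addition, not from the original run): turn the\n# [Date, Value] pairs of the JSON payload into a pandas time series indexed by date,\n# which is the usual input format for an ARIMA model.\nimport pandas as pd\n\nprices = pd.DataFrame(dict['dataset']['data'], columns=dict['dataset']['column_names'])\nprices['Date'] = pd.to_datetime(prices['Date'])\nprices = prices.set_index('Date').sort_index()\nprices.head()",
"_____no_output_____"
],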
[
"print(dict['dataset'])",
"{'id': 7692468, 'dataset_code': 'MKPRU', 'database_code': 'BCHAIN', 'name': 'Bitcoin Market Price USD', 'description': 'Data showing the USD market price from Mt.gox', 'refreshed_at': '2020-06-13T05:00:45.781Z', 'newest_available_date': '2020-06-14', 'oldest_available_date': '2009-01-03', 'column_names': ['Date', 'Value'], 'frequency': 'daily', 'type': 'Time Series', 'premium': False, 'limit': None, 'transform': None, 'column_index': None, 'start_date': '2016-01-01', 'end_date': '2020-02-29', 'data': [['2020-02-29', 8804.72], ['2020-02-28', 8785.52], ['2020-02-27', 9309.15], ['2020-02-26', 9663.75], ['2020-02-25', 9989.39], ['2020-02-24', 9669.63], ['2020-02-23', 9696.58], ['2020-02-22', 9606.86], ['2020-02-21', 9604.72], ['2020-02-20', 10180.65], ['2020-02-19', 9703.93], ['2020-02-18', 9937.67], ['2020-02-17', 9904.17], ['2020-02-16', 10368.53], ['2020-02-15', 10242.43], ['2020-02-14', 10354.3], ['2020-02-13', 10275.38], ['2020-02-12', 9854.79], ['2020-02-11', 10162.41], ['2020-02-10', 9907.12], ['2020-02-09', 9807.54], ['2020-02-08', 9755.66], ['2020-02-07', 9614.9], ['2020-02-06', 9162.14], ['2020-02-05', 9284.51], ['2020-02-04', 9314.56], ['2020-02-03', 9378.09], ['2020-02-02', 9333.77], ['2020-02-01', 9502.37], ['2020-01-31', 9279.81], ['2020-01-30', 9385.69], ['2020-01-29', 8895.78], ['2020-01-28', 8588.42], ['2020-01-27', 8327.36], ['2020-01-26', 8428.17], ['2020-01-25', 8388.11], ['2020-01-24', 8658.94], ['2020-01-23', 8722.26], ['2020-01-22', 8626.47], ['2020-01-21', 8703.36], ['2020-01-20', 8910.85], ['2020-01-19', 8900.34], ['2020-01-18', 8722.03], ['2020-01-17', 8813.89], ['2020-01-16', 8842.42], ['2020-01-15', 8105.24], ['2020-01-14', 8173.97], ['2020-01-13', 8021.49], ['2020-01-12', 8184.66], ['2020-01-11', 7817.92], ['2020-01-10', 8042.65], ['2020-01-09', 8165.47], ['2020-01-08', 7759.24], ['2020-01-07', 7351.57], ['2020-01-06', 7347.89], ['2020-01-05', 7326.35], ['2020-01-04', 6944.33], ['2020-01-03', 7175.68], ['2020-01-02', 7168.31], ['2020-01-01', 7219.6], ['2019-12-31', 7385.36], ['2019-12-30', 7301.07], ['2019-12-29', 7243.93], ['2019-12-28', 7194.4], ['2019-12-27', 7192.72], ['2019-12-26', 7250.69], ['2019-12-25', 7322.08], ['2019-12-24', 7514.41], ['2019-12-23', 7143.2], ['2019-12-22', 7190.17], ['2019-12-21', 7150.86], ['2019-12-20', 7284.29], ['2019-12-19', 6612.12], ['2019-12-18', 6879.54], ['2019-12-17', 7111.14], ['2019-12-16', 7067.74], ['2019-12-15', 7251.87], ['2019-12-14', 7189.16], ['2019-12-13', 7202.31], ['2019-12-12', 7220.76], ['2019-12-11', 7337.42], ['2019-12-10', 7522.39], ['2019-12-09', 7504.83], ['2019-12-08', 7547.19], ['2019-12-07', 7395.97], ['2019-12-06', 7192.85], ['2019-12-05', 7296.77], ['2019-12-04', 7309.59], ['2019-12-03', 7402.69], ['2019-12-02', 7557.72], ['2019-12-01', 7757.47], ['2019-11-30', 7431.0], ['2019-11-29', 7523.83], ['2019-11-28', 7163.63], ['2019-11-27', 7130.25], ['2019-11-26', 6907.4], ['2019-11-25', 7324.03], ['2019-11-24', 7286.35], ['2019-11-23', 7617.07], ['2019-11-22', 8081.81], ['2019-11-21', 8120.8], ['2019-11-20', 8175.99], ['2019-11-19', 8503.93], ['2019-11-18', 8482.7], ['2019-11-17', 8457.69], ['2019-11-16', 8632.32], ['2019-11-15', 8762.42], ['2019-11-14', 8801.52], ['2019-11-13', 8717.81], ['2019-11-12', 9037.12], ['2019-11-11', 8809.41], ['2019-11-10', 8766.04], ['2019-11-09', 9204.24], ['2019-11-08', 9343.34], ['2019-11-07', 9310.19], ['2019-11-06', 9418.05], ['2019-11-05', 9206.16], ['2019-11-04', 9301.18], ['2019-11-03', 9252.99], ['2019-11-02', 9147.98], ['2019-11-01', 9164.62], ['2019-10-31', 9433.35], 
['2019-10-30', 9218.76], ['2019-10-29', 9551.54], ['2019-10-28', 9259.8], ['2019-10-27', 8666.79], ['2019-10-26', 7431.88], ['2019-10-25', 7469.56], ['2019-10-24', 8026.76], ['2019-10-23', 8222.52], ['2019-10-22', 8231.06], ['2019-10-21', 7960.04], ['2019-10-20', 7954.15], ['2019-10-19', 8076.78], ['2019-10-18', 8002.51], ['2019-10-17', 8162.16], ['2019-10-16', 8353.54], ['2019-10-15', 8283.76], ['2019-10-14', 8308.01], ['2019-10-13', 8269.73], ['2019-10-12', 8586.9], ['2019-10-11', 8587.92], ['2019-10-10', 8190.0], ['2019-10-09', 8212.02], ['2019-10-08', 7869.74], ['2019-10-07', 8147.69], ['2019-10-06', 8155.48], ['2019-10-05', 8236.17], ['2019-10-04', 8382.03], ['2019-10-03', 8322.92], ['2019-10-02', 8307.74], ['2019-10-01', 8056.74], ['2019-09-30', 8225.0], ['2019-09-29', 8193.9], ['2019-09-28', 8055.64], ['2019-09-27', 8432.23], ['2019-09-26', 8553.61], ['2019-09-25', 9683.38], ['2019-09-24', 10033.05], ['2019-09-23', 9979.49], ['2019-09-22', 10173.11], ['2019-09-21', 10275.88], ['2019-09-20', 10157.59], ['2019-09-19', 10190.36], ['2019-09-18', 10265.63], ['2019-09-17', 10310.43], ['2019-09-16', 10361.33], ['2019-09-15', 10363.9], ['2019-09-14', 10420.16], ['2019-09-13', 10159.32], ['2019-09-12', 10101.03], ['2019-09-11', 10313.66], ['2019-09-10', 10406.31], ['2019-09-09', 10487.21], ['2019-09-08', 10317.47], ['2019-09-07', 10577.8], ['2019-09-06', 10584.16], ['2019-09-05', 10621.29], ['2019-09-04', 10386.64], ['2019-09-03', 9769.79], ['2019-09-02', 9600.9], ['2019-09-01', 9577.99], ['2019-08-31', 9484.55], ['2019-08-30', 9717.82], ['2019-08-29', 10171.95], ['2019-08-28', 10360.28], ['2019-08-27', 10135.06], ['2019-08-26', 10143.8], ['2019-08-25', 10405.81], ['2019-08-24', 10111.98], ['2019-08-23', 10129.4], ['2019-08-22', 10760.56], ['2019-08-21', 10917.26], ['2019-08-20', 10317.6], ['2019-08-19', 10214.52], ['2019-08-18', 10359.44], ['2019-08-17', 10302.17], ['2019-08-16', 10016.96], ['2019-08-15', 10858.12], ['2019-08-14', 11386.26], ['2019-08-13', 11566.84], ['2019-08-12', 11282.22], ['2019-08-11', 11856.64], ['2019-08-10', 11996.41], ['2019-08-09', 11960.82], ['2019-08-08', 11465.67], ['2019-08-07', 11787.99], ['2019-08-06', 10980.23], ['2019-08-05', 10814.57], ['2019-08-04', 10529.55], ['2019-08-03', 10407.17], ['2019-08-02', 10084.7], ['2019-08-01', 9589.13], ['2019-07-31', 9501.33], ['2019-07-30', 9530.0], ['2019-07-29', 9473.99], ['2019-07-28', 9848.65], ['2019-07-27', 9875.17], ['2019-07-26', 9772.17], ['2019-07-25', 9844.3], ['2019-07-24', 10323.62], ['2019-07-23', 10587.41], ['2019-07-22', 10764.9], ['2019-07-21', 10535.75], ['2019-07-20', 10638.56], ['2019-07-19', 9674.28], ['2019-07-18', 9397.72], ['2019-07-17', 10873.5], ['2019-07-16', 10186.96], ['2019-07-15', 11389.1], ['2019-07-14', 11803.97], ['2019-07-13', 11352.87], ['2019-07-12', 12099.9], ['2019-07-11', 12586.78], ['2019-07-10', 12313.08], ['2019-07-09', 11477.01], ['2019-07-08', 11232.04], ['2019-07-07', 10955.16], ['2019-07-06', 11160.49], ['2019-07-05', 11984.76], ['2019-07-04', 10805.4], ['2019-07-03', 10578.72], ['2019-07-02', 10737.73], ['2019-07-01', 11890.38], ['2019-06-30', 12374.35], ['2019-06-29', 11132.85], ['2019-06-28', 12932.55], ['2019-06-27', 11766.4], ['2019-06-26', 11017.5], ['2019-06-25', 10814.48], ['2019-06-24', 10668.63], ['2019-06-23', 10209.38], ['2019-06-22', 9531.35], ['2019-06-21', 9281.7], ['2019-06-20', 9076.18], ['2019-06-19', 9327.48], ['2019-06-18', 8978.99], ['2019-06-17', 8859.47], ['2019-06-16', 8693.6], ['2019-06-15', 8237.88], ['2019-06-14', 8173.06], ['2019-06-13', 7917.58], 
['2019-06-12', 8020.38], ['2019-06-11', 7640.61], ['2019-06-10', 7931.34], ['2019-06-09', 7998.29], ['2019-06-08', 7803.23], ['2019-06-07', 7786.04], ['2019-06-06', 7675.8], ['2019-06-05', 8134.92], ['2019-06-04', 8739.03], ['2019-06-03', 8554.8], ['2019-06-02', 8553.81], ['2019-06-01', 8272.46], ['2019-05-31', 8662.98], ['2019-05-30', 8719.88], ['2019-05-29', 8770.06], ['2019-05-28', 8744.42], ['2019-05-27', 8071.45], ['2019-05-26', 8003.26], ['2019-05-25', 7880.29], ['2019-05-24', 7625.93], ['2019-05-23', 7952.5], ['2019-05-22', 8007.03], ['2019-05-21', 8193.7], ['2019-05-20', 7265.05], ['2019-05-19', 7362.22], ['2019-05-18', 7885.96], ['2019-05-17', 8206.9], ['2019-05-16', 7992.69], ['2019-05-15', 7823.11], ['2019-05-14', 6978.63], ['2019-05-13', 7248.31], ['2019-05-12', 6348.02], ['2019-05-11', 6146.91], ['2019-05-10', 5936.72], ['2019-05-09', 5755.72], ['2019-05-08', 5684.47], ['2019-05-07', 5717.66], ['2019-05-06', 5771.08], ['2019-05-05', 5657.14], ['2019-05-04', 5390.16], ['2019-05-03', 5326.67], ['2019-05-02', 5269.79], ['2019-05-01', 5260.65], ['2019-04-30', 5301.29], ['2019-04-29', 5280.19], ['2019-04-28', 5292.83], ['2019-04-27', 5195.61], ['2019-04-26', 5434.19], ['2019-04-25', 5518.16], ['2019-04-24', 5377.19], ['2019-04-23', 5281.83], ['2019-04-22', 5309.28], ['2019-04-21', 5277.04], ['2019-04-20', 5276.31], ['2019-04-19', 5220.37], ['2019-04-18', 5196.65], ['2019-04-17', 5036.18], ['2019-04-16', 5152.51], ['2019-04-15', 5064.62], ['2019-04-14', 5072.85], ['2019-04-13', 5035.02], ['2019-04-12', 5310.18], ['2019-04-11', 5175.81], ['2019-04-10', 5268.71], ['2019-04-09', 5189.39], ['2019-04-08', 5053.52], ['2019-04-07', 5028.77], ['2019-04-06', 4911.24], ['2019-04-05', 4959.81], ['2019-04-04', 4882.88], ['2019-04-03', 4152.53], ['2019-04-02', 4114.16], ['2019-04-01', 4114.44], ['2019-03-31', 4115.55], ['2019-03-30', 4037.37], ['2019-03-29', 4048.51], ['2019-03-28', 3947.74], ['2019-03-27', 3935.89], ['2019-03-26', 3994.11], ['2019-03-25', 4011.92], ['2019-03-24', 3999.06], ['2019-03-23', 3992.34], ['2019-03-22', 4056.4], ['2019-03-21', 4029.11], ['2019-03-20', 3998.77], ['2019-03-19', 3994.92], ['2019-03-18', 4015.95], ['2019-03-17', 3936.5], ['2019-03-16', 3885.21], ['2019-03-15', 3877.77], ['2019-03-14', 3892.51], ['2019-03-13', 3881.09], ['2019-03-12', 3928.17], ['2019-03-11', 3950.56], ['2019-03-10', 3875.96], ['2019-03-09', 3886.82], ['2019-03-08', 3876.85], ['2019-03-07', 3872.25], ['2019-03-06', 3730.98], ['2019-03-05', 3814.58], ['2019-03-04', 3831.44], ['2019-03-03', 3834.62], ['2019-03-02', 3850.08], ['2019-03-01', 3833.23], ['2019-02-28', 3831.45], ['2019-02-27', 3848.33], ['2019-02-26', 3766.89], ['2019-02-25', 4139.6], ['2019-02-24', 3983.48], ['2019-02-23', 3944.88], ['2019-02-22', 3987.18], ['2019-02-21', 3926.53], ['2019-02-20', 3910.8], ['2019-02-19', 3670.74], ['2019-02-18', 3619.55], ['2019-02-17', 3608.2], ['2019-02-16', 3600.31], ['2019-02-15', 3614.5], ['2019-02-14', 3632.42], ['2019-02-13', 3629.47], ['2019-02-12', 3686.29], ['2019-02-11', 3664.38], ['2019-02-10', 3661.08], ['2019-02-09', 3394.76], ['2019-02-08', 3405.7], ['2019-02-07', 3467.34], ['2019-02-06', 3452.35], ['2019-02-05', 3456.02], ['2019-02-04', 3506.22], ['2019-02-03', 3475.33], ['2019-02-02', 3444.81], ['2019-02-01', 3470.0], ['2019-01-31', 3421.6], ['2019-01-30', 3448.58], ['2019-01-29', 3554.5], ['2019-01-28', 3576.3], ['2019-01-27', 3577.79], ['2019-01-26', 3586.25], ['2019-01-25', 3566.4], ['2019-01-24', 3587.35], ['2019-01-23', 3540.02], ['2019-01-22', 3548.4275], ['2019-01-21', 
3620.1275], ['2019-01-20', 3690.52333333], ['2019-01-19', 3632.395], ['2019-01-18', 3619.96416667], ['2019-01-17', 3621.27083333], ['2019-01-16', 3656.785], ['2019-01-15', 3604.1175], ['2019-01-14', 3599.84166667], ['2019-01-13', 3646.34583333], ['2019-01-12', 3656.73583333], ['2019-01-11', 3812.58583333], ['2019-01-10', 4034.13833333], ['2019-01-09', 4035.855], ['2019-01-08', 4036.09333333], ['2019-01-07', 3920.45666667], ['2019-01-06', 3868.4875], ['2019-01-05', 3822.62666667], ['2019-01-04', 3865.7975], ['2019-01-03', 3867.13833333], ['2019-01-02', 3752.27166667], ['2019-01-01', 3791.54583333], ['2018-12-31', 3832.92166667], ['2018-12-30', 3912.28583333], ['2018-12-29', 3743.905], ['2018-12-28', 3747.83916667], ['2018-12-27', 3825.37916667], ['2018-12-26', 3813.88], ['2018-12-25', 4178.59083333], ['2018-12-24', 4027.47833333], ['2018-12-23', 3910.97333333], ['2018-12-22', 4015.60916667], ['2018-12-21', 3982.85083333], ['2018-12-20', 3808.0425], ['2018-12-19', 3567.47], ['2018-12-18', 3392.405], ['2018-12-17', 3271.23833333], ['2018-12-16', 3225.29916667], ['2018-12-15', 3278.37416667], ['2018-12-14', 3406.7625], ['2018-12-13', 3462.04], ['2018-12-12', 3426.19], ['2018-12-11', 3523.96], ['2018-12-10', 3528.80333333], ['2018-12-09', 3435.34], ['2018-12-08', 3405.64333333], ['2018-12-07', 3742.94333333], ['2018-12-06', 3858.34916667], ['2018-12-05', 3961.49333333], ['2018-12-04', 3967.52416667], ['2018-12-03', 4167.54666667], ['2018-12-02', 4116.7775], ['2018-12-01', 4106.87166667], ['2018-11-30', 4263.78333333], ['2018-11-29', 4103.45384615], ['2018-11-28', 3751.66833333], ['2018-11-27', 3920.53666667], ['2018-11-26', 3823.51166667], ['2018-11-25', 4293.84083333], ['2018-11-24', 4309.3375], ['2018-11-23', 4548.7975], ['2018-11-22', 4533.68083333], ['2018-11-21', 4671.97], ['2018-11-20', 5303.9425], ['2018-11-19', 5606.04416667], ['2018-11-18', 5558.24333333], ['2018-11-17', 5596.1925], ['2018-11-16', 5615.18], ['2018-11-15', 6176.155], ['2018-11-14', 6372.06333333], ['2018-11-13', 6401.93666667], ['2018-11-12', 6378.26833333], ['2018-11-11', 6399.03333333], ['2018-11-10', 6411.28083333], ['2018-11-09', 6486.25166667], ['2018-11-08', 6538.79], ['2018-11-07', 6445.35416667], ['2018-11-06', 6436.965], ['2018-11-05', 6391.87333333], ['2018-11-04', 6363.79583333], ['2018-11-03', 6387.67416667], ['2018-11-02', 6342.28083333], ['2018-11-01', 6310.28416667], ['2018-10-31', 6309.45285714], ['2018-10-30', 6382.66833333], ['2018-10-29', 6448.22166667], ['2018-10-28', 6465.9175], ['2018-10-27', 6473.75333333], ['2018-10-26', 6478.0825], ['2018-10-25', 6508.31], ['2018-10-24', 6481.426], ['2018-10-23', 6498.48583333], ['2018-10-22', 6531.60166667], ['2018-10-21', 6488.82583333], ['2018-10-20', 6487.44416667], ['2018-10-19', 6568.04076923], ['2018-10-18', 6596.27615385], ['2018-10-17', 6596.61833333], ['2018-10-16', 6452.57166667], ['2018-10-15', 6299.39916667], ['2018-10-14', 6260.64583333], ['2018-10-13', 6260.53083333], ['2018-10-12', 6248.63583333], ['2018-10-11', 6563.00916667], ['2018-10-10', 6621.71166667], ['2018-10-09', 6618.56769231], ['2018-10-08', 6558.5375], ['2018-10-07', 6581.48666667], ['2018-10-06', 6568.54916667], ['2018-10-05', 6563.62833333], ['2018-10-04', 6470.4025], ['2018-10-03', 6562.64166667], ['2018-10-02', 6590.96833333], ['2018-10-01', 6593.135], ['2018-09-30', 6550.47416667], ['2018-09-29', 6677.3425], ['2018-09-28', 6535.47666667], ['2018-09-27', 6468.63166667], ['2018-09-26', 6412.45916667], ['2018-09-25', 6639.30416667], ['2018-09-24', 6710.445], ['2018-09-23', 
6709.3125], ['2018-09-22', 6669.99083333], ['2018-09-21', 6418.56266667], ['2018-09-20', 6335.82666667], ['2018-09-19', 6296.63166667], ['2018-09-18', 6400.60083333], ['2018-09-17', 6480.64416667], ['2018-09-16', 6518.655], ['2018-09-15', 6499.0625], ['2018-09-14', 6450.17923077], ['2018-09-13', 6273.1375], ['2018-09-12', 6296.32083333], ['2018-09-11', 6297.87769231], ['2018-09-10', 6286.42583333], ['2018-09-09', 6366.1075], ['2018-09-08', 6444.80416667], ['2018-09-07', 6433.27166667], ['2018-09-06', 7113.06923077], ['2018-09-05', 7326.8525], ['2018-09-04', 7260.94923077], ['2018-09-03', 7247.93538462], ['2018-09-02', 7100.94666667], ['2018-09-01', 6981.94615385], ['2018-08-31', 6932.6625], ['2018-08-30', 7054.27642857], ['2018-08-29', 7000.04], ['2018-08-28', 6719.26615385], ['2018-08-27', 6673.27416667], ['2018-08-26', 6719.42923077], ['2018-08-25', 6543.64571429], ['2018-08-24', 6434.88166667], ['2018-08-23', 6575.22916667], ['2018-08-22', 6401.24615385], ['2018-08-21', 6434.55916667], ['2018-08-20', 6404.06333333], ['2018-08-19', 6436.72083333], ['2018-08-18', 6476.9], ['2018-08-17', 6342.62923077], ['2018-08-16', 6362.67692308], ['2018-08-15', 6050.9425], ['2018-08-14', 6347.07], ['2018-08-13', 6311.13166667], ['2018-08-12', 6195.65333333], ['2018-08-11', 6396.49466667], ['2018-08-10', 6396.7725], ['2018-08-09', 6450.36692308], ['2018-08-08', 6993.51333333], ['2018-08-07', 6988.07916667], ['2018-08-06', 6998.71833333], ['2018-08-05', 7247.76916667], ['2018-08-04', 7394.49916667], ['2018-08-03', 7593.14916667], ['2018-08-02', 7570.86916667], ['2018-08-01', 7916.80083333], ['2018-07-31', 8143.14833333], ['2018-07-30', 8206.34166667], ['2018-07-29', 8182.25166667], ['2018-07-28', 8025.2575], ['2018-07-27', 8187.32416667], ['2018-07-26', 8251.165], ['2018-07-25', 8112.93], ['2018-07-24', 7689.88416667], ['2018-07-23', 7451.28916667], ['2018-07-22', 7352.49538462], ['2018-07-21', 7428.045], ['2018-07-20', 7396.40166667], ['2018-07-19', 7398.66166667], ['2018-07-18', 6869.91083333], ['2018-07-17', 6514.39083333], ['2018-07-16', 6316.88166667], ['2018-07-15', 6241.0], ['2018-07-14', 6244.3575], ['2018-07-13', 6233.595], ['2018-07-12', 6377.36333333], ['2018-07-11', 6510.79166667], ['2018-07-10', 6723.8725], ['2018-07-09', 6753.55916667], ['2018-07-08', 6594.28166667], ['2018-07-07', 6569.49615385], ['2018-07-06', 6603.37666667], ['2018-07-05', 6593.28916667], ['2018-07-04', 6613.68583333], ['2018-07-03', 6466.06916667], ['2018-07-02', 6374.75416667], ['2018-07-01', 6381.39083333], ['2018-06-30', 5908.7025], ['2018-06-29', 6107.89615385], ['2018-06-28', 6105.295], ['2018-06-27', 6218.595], ['2018-06-26', 6211.4475], ['2018-06-25', 6037.00833333], ['2018-06-24', 6141.60583333], ['2018-06-23', 6332.57333333], ['2018-06-22', 6733.90166667], ['2018-06-21', 6714.71833333], ['2018-06-20', 6737.0], ['2018-06-19', 6713.488], ['2018-06-18', 6464.41166667], ['2018-06-17', 6509.42666667], ['2018-06-16', 6434.088], ['2018-06-15', 6647.01333333], ['2018-06-14', 6315.7], ['2018-06-13', 6535.08166667], ['2018-06-12', 6875.67833333], ['2018-06-11', 6776.88833333], ['2018-06-10', 7564.40833333], ['2018-06-09', 7620.15333333], ['2018-06-08', 7676.27166667], ['2018-06-07', 7655.61333333], ['2018-06-06', 7615.7221421], ['2018-06-05', 7500.27333333], ['2018-06-04', 7699.886], ['2018-06-03', 7645.14666667], ['2018-06-02', 7535.14666667], ['2018-06-01', 7491.434], ['2018-05-31', 7385.395], ['2018-05-30', 7459.87666667], ['2018-05-29', 7130.54166667], ['2018-05-28', 7361.83166667], ['2018-05-27', 7342.90833333], 
['2018-05-26', 7445.23833333], ['2018-05-25', 7566.19333333], ['2018-05-24', 7555.74], ['2018-05-23', 8015.58], ['2018-05-22', 8385.55618693], ['2018-05-21', 8507.40666667], ['2018-05-20', 8223.28833333], ['2018-05-19', 8240.055], ['2018-05-18', 8106.11833333], ['2018-05-17', 8340.70333333], ['2018-05-16', 8511.458], ['2018-05-15', 8652.03833333], ['2018-05-14', 8727.65166667], ['2018-05-13', 8484.34666667], ['2018-05-12', 8468.788], ['2018-05-11', 9101.48333333], ['2018-05-10', 9322.04166667], ['2018-05-09', 9228.60833333], ['2018-05-08', 9345.69], ['2018-05-07', 9630.13627668], ['2018-05-06', 9803.30666667], ['2018-05-05', 9710.73], ['2018-05-04', 9639.26833333], ['2018-05-03', 9221.426], ['2018-05-02', 9075.13666667], ['2018-05-01', 9259.57], ['2018-04-30', 9334.28166667], ['2018-04-29', 9326.17333333], ['2018-04-28', 9010.32], ['2018-04-27', 9258.39833333], ['2018-04-26', 8995.50666667], ['2018-04-25', 9555.542], ['2018-04-24', 8933.86166667], ['2018-04-23', 8838.54833333], ['2018-04-22', 8807.205], ['2018-04-21', 8852.71833333], ['2018-04-20', 8254.625], ['2018-04-19', 8164.93742577], ['2018-04-18', 7895.41692596], ['2018-04-17', 8043.80333333], ['2018-04-16', 8340.74833333], ['2018-04-15', 8036.51105141], ['2018-04-14', 7895.40833333], ['2018-04-13', 7847.845], ['2018-04-12', 6926.26666667], ['2018-04-11', 6787.57166667], ['2018-04-10', 6699.27333333], ['2018-04-09', 7017.65666667], ['2018-04-08', 6927.688], ['2018-04-07', 6603.87666667], ['2018-04-06', 6826.51], ['2018-04-05', 6787.76166667], ['2018-04-04', 7410.435], ['2018-04-03', 7035.84833333], ['2018-04-02', 6794.105], ['2018-04-01', 6935.48], ['2018-03-31', 6882.53166667], ['2018-03-30', 7172.28], ['2018-03-29', 7960.38], ['2018-03-28', 7876.195], ['2018-03-27', 8197.54833333], ['2018-03-26', 8617.29666667], ['2018-03-25', 8662.37833333], ['2018-03-24', 8686.82666667], ['2018-03-23', 8690.40833333], ['2018-03-22', 8947.75333333], ['2018-03-21', 8986.94833333], ['2018-03-20', 8412.03333333], ['2018-03-19', 8171.415], ['2018-03-18', 7993.67464364], ['2018-03-17', 8530.402], ['2018-03-16', 8358.12166667], ['2018-03-15', 8151.53166667], ['2018-03-14', 9154.7], ['2018-03-13', 9182.84333333], ['2018-03-12', 9761.39666667], ['2018-03-11', 8746.002], ['2018-03-10', 9089.27833333], ['2018-03-09', 9429.11166667], ['2018-03-08', 10118.058], ['2018-03-07', 10763.1983333], ['2018-03-06', 11595.54], ['2018-03-05', 11430.1816667], ['2018-03-04', 11326.9483333], ['2018-03-03', 11055.815], ['2018-03-02', 11009.3816667], ['2018-03-01', 10370.165], ['2018-02-28', 10763.8833333], ['2018-02-27', 10348.6033333], ['2018-02-26', 9696.59333333], ['2018-02-25', 9697.956], ['2018-02-24', 10162.1166667], ['2018-02-23', 9931.07166667], ['2018-02-22', 10532.7916667], ['2018-02-21', 11390.3916667], ['2018-02-20', 11110.965], ['2018-02-19', 10503.2983333], ['2018-02-18', 10841.9916667], ['2018-02-17', 10127.1616667], ['2018-02-16', 9977.154], ['2018-02-15', 9334.63333333], ['2018-02-14', 8597.7675], ['2018-02-13', 8811.34333333], ['2018-02-12', 8343.455], ['2018-02-11', 8319.87656618], ['2018-02-10', 8535.51666667], ['2018-02-09', 8240.53666667], ['2018-02-08', 8099.95833333], ['2018-02-07', 7685.63333333], ['2018-02-06', 6838.81666667], ['2018-02-05', 8400.64833333], ['2018-02-04', 9076.67833333], ['2018-02-03', 8901.90166667], ['2018-02-02', 9083.25833333], ['2018-02-01', 10125.0133333], ['2018-01-31', 10184.0616667], ['2018-01-30', 11212.655], ['2018-01-29', 11765.71], ['2018-01-28', 11524.7766667], ['2018-01-27', 10969.815], ['2018-01-26', 11214.44], 
['2018-01-25', 11282.2583333], ['2018-01-24', 11223.064], ['2018-01-23', 10544.5933333], ['2018-01-22', 11505.228], ['2018-01-21', 12950.7933333], ['2018-01-20', 11422.44], ['2018-01-19', 11345.4233333], ['2018-01-18', 11116.9466667], ['2018-01-17', 11180.9983333], ['2018-01-16', 14012.196], ['2018-01-15', 13852.92], ['2018-01-14', 14499.7733333], ['2018-01-13', 13912.882], ['2018-01-12', 13296.794], ['2018-01-11', 15126.3983333], ['2018-01-10', 14714.2533333], ['2018-01-09', 15265.9066667], ['2018-01-08', 16651.4716667], ['2018-01-07', 17319.198], ['2018-01-06', 17174.12], ['2018-01-05', 15199.355], ['2018-01-04', 15053.2616667], ['2018-01-03', 15005.8566667], ['2018-01-02', 13812.1866667], ['2018-01-01', 14165.575], ['2017-12-31', 13215.574], ['2017-12-30', 14640.14], ['2017-12-29', 14380.5816667], ['2017-12-28', 15589.3216667], ['2017-12-27', 15999.0483333], ['2017-12-26', 14119.0283333], ['2017-12-25', 13949.175], ['2017-12-24', 15360.2616667], ['2017-12-23', 15190.945], ['2017-12-22', 16047.51], ['2017-12-21', 16026.2716667], ['2017-12-20', 17737.1116667], ['2017-12-19', 18961.8566667], ['2017-12-18', 19289.785], ['2017-12-17', 19498.6833333], ['2017-12-16', 17771.9], ['2017-12-15', 16678.892], ['2017-12-14', 16808.3666667], ['2017-12-13', 17276.3933333], ['2017-12-12', 16762.1166667], ['2017-12-11', 14869.805], ['2017-12-10', 15142.8341521], ['2017-12-09', 16007.4366667], ['2017-12-08', 16501.9716667], ['2017-12-07', 13540.98], ['2017-12-06', 11878.4333333], ['2017-12-05', 11584.83], ['2017-12-04', 11332.622], ['2017-12-03', 11071.3683333], ['2017-12-02', 10883.912], ['2017-12-01', 10147.372], ['2017-11-30', 9879.32833333], ['2017-11-29', 9952.50882], ['2017-11-28', 9718.29505], ['2017-11-27', 9284.1438], ['2017-11-26', 8707.40726667], ['2017-11-25', 8250.97833333], ['2017-11-24', 8148.95], ['2017-11-23', 8268.035], ['2017-11-22', 8059.8], ['2017-11-21', 8255.59681667], ['2017-11-20', 8007.65406667], ['2017-11-19', 7817.14038333], ['2017-11-18', 7786.88436667], ['2017-11-17', 7815.0307], ['2017-11-16', 7301.42992], ['2017-11-15', 6635.41263333], ['2017-11-14', 6550.22753333], ['2017-11-13', 5716.30158333], ['2017-11-12', 6362.85103333], ['2017-11-11', 6719.39785], ['2017-11-10', 7158.03706], ['2017-11-09', 7415.87825], ['2017-11-08', 7092.12723333], ['2017-11-07', 6989.07166667], ['2017-11-06', 7377.01236667], ['2017-11-05', 7437.54331667], ['2017-11-04', 7197.72006], ['2017-11-03', 7068.0201], ['2017-11-02', 6665.30668333], ['2017-11-01', 6388.64516667], ['2017-10-31', 6105.87422], ['2017-10-30', 6155.43402], ['2017-10-29', 5776.69695], ['2017-10-28', 5772.50498333], ['2017-10-27', 5893.13841667], ['2017-10-26', 5669.62253333], ['2017-10-25', 5505.82776667], ['2017-10-24', 5876.07986667], ['2017-10-23', 5983.18455], ['2017-10-22', 6020.37168333], ['2017-10-21', 5979.45984], ['2017-10-20', 5727.6335], ['2017-10-19', 5546.1761], ['2017-10-18', 5603.71294], ['2017-10-17', 5711.20586667], ['2017-10-16', 5647.31166667], ['2017-10-15', 5739.43873333], ['2017-10-14', 5563.80656667], ['2017-10-13', 5325.13068333], ['2017-10-12', 4819.48576667], ['2017-10-11', 4782.28], ['2017-10-10', 4777.96781667], ['2017-10-09', 4602.28088333], ['2017-10-08', 4376.19166667], ['2017-10-07', 4345.60333333], ['2017-10-06', 4338.852], ['2017-10-05', 4225.175], ['2017-10-04', 4293.3066], ['2017-10-03', 4386.88375], ['2017-10-02', 4360.72296667], ['2017-10-01', 4335.36831667], ['2017-09-30', 4193.57466667], ['2017-09-29', 4201.98905], ['2017-09-28', 4202.55498333], ['2017-09-27', 3910.30738333], ['2017-09-26', 
3942.555], ['2017-09-25', 3703.04065], ['2017-09-24', 3776.3869], ['2017-09-23', 3637.50255], ['2017-09-22', 3658.89818333], ['2017-09-21', 3977.56166667], ['2017-09-20', 3943.41333333], ['2017-09-19', 4093.31666667], ['2017-09-18', 3746.06078333], ['2017-09-17', 3763.62604], ['2017-09-16', 3774.26528333], ['2017-09-15', 3319.63], ['2017-09-14', 3961.27126667], ['2017-09-13', 4219.03661667], ['2017-09-12', 4248.09001667], ['2017-09-11', 4329.955], ['2017-09-10', 4375.55952], ['2017-09-09', 4310.75018333], ['2017-09-08', 4654.6585], ['2017-09-07', 4641.82201667], ['2017-09-06', 4488.72014], ['2017-09-05', 4344.09831667], ['2017-09-04', 4648.15998333], ['2017-09-03', 4580.38748], ['2017-09-02', 4911.74001667], ['2017-09-01', 4748.255], ['2017-08-31', 4594.98785], ['2017-08-30', 4607.98545], ['2017-08-29', 4391.67351667], ['2017-08-28', 4354.30833333], ['2017-08-27', 4360.51331667], ['2017-08-26', 4363.05445], ['2017-08-25', 4340.31671667], ['2017-08-24', 4174.95], ['2017-08-23', 4082.18098333], ['2017-08-22', 4043.722], ['2017-08-21', 4157.95803333], ['2017-08-20', 4222.66221429], ['2017-08-19', 4130.44006667], ['2017-08-18', 4328.72571667], ['2017-08-17', 4360.87687143], ['2017-08-16', 4217.02832857], ['2017-08-15', 4282.992], ['2017-08-14', 4125.54802], ['2017-08-13', 3852.80291429], ['2017-08-12', 3632.50666667], ['2017-08-11', 3424.4042], ['2017-08-10', 3357.32631667], ['2017-08-09', 3457.37433333], ['2017-08-08', 3407.22683333], ['2017-08-07', 3252.56253333], ['2017-08-06', 3218.11501667], ['2017-08-05', 2873.85108333], ['2017-08-04', 2794.11771667], ['2017-08-03', 2693.63398333], ['2017-08-02', 2710.41306667], ['2017-08-01', 2866.43166667], ['2017-07-31', 2745.95541667], ['2017-07-30', 2722.51278571], ['2017-07-29', 2781.63658333], ['2017-07-28', 2647.625], ['2017-07-27', 2495.02858571], ['2017-07-26', 2560.99791667], ['2017-07-25', 2751.82102857], ['2017-07-24', 2725.54971667], ['2017-07-23', 2807.60985714], ['2017-07-22', 2682.1953625], ['2017-07-21', 2898.18841667], ['2017-07-20', 2264.7657], ['2017-07-19', 2320.12225], ['2017-07-18', 2176.6234875], ['2017-07-17', 1931.2143], ['2017-07-16', 2058.9956], ['2017-07-15', 2190.94783333], ['2017-07-14', 2354.78341667], ['2017-07-13', 2385.74857143], ['2017-07-12', 2369.86212857], ['2017-07-11', 2366.17014286], ['2017-07-10', 2536.2389375], ['2017-07-09', 2562.1306625], ['2017-07-08', 2491.20121429], ['2017-07-07', 2609.96775], ['2017-07-06', 2619.187503], ['2017-07-05', 2599.7298375], ['2017-07-04', 2561.22542857], ['2017-07-03', 2561.22542857], ['2017-07-02', 2501.19134286], ['2017-07-01', 2477.641375], ['2017-06-30', 2544.414475], ['2017-06-29', 2585.34918571], ['2017-06-28', 2517.90311429], ['2017-06-27', 2436.45105714], ['2017-06-26', 2512.36628571], ['2017-06-25', 2589.1648875], ['2017-06-24', 2710.41228571], ['2017-06-23', 2727.2880125], ['2017-06-22', 2671.04325], ['2017-06-21', 2754.97825], ['2017-06-20', 2617.2102625], ['2017-06-19', 2507.38925214], ['2017-06-18', 2665.927], ['2017-06-17', 2464.95981429], ['2017-06-16', 2442.48025], ['2017-06-15', 2447.0415625], ['2017-06-14', 2748.18508571], ['2017-06-13', 2657.6750625], ['2017-06-12', 2961.8296125], ['2017-06-11', 2845.37285714], ['2017-06-10', 2827.4913], ['2017-06-09', 2792.9991875], ['2017-06-08', 2664.9208625], ['2017-06-07', 2883.31369664], ['2017-06-06', 2698.3138125], ['2017-06-05', 2516.17314286], ['2017-06-04', 2525.76515847], ['2017-06-03', 2446.14241429], ['2017-06-02', 2399.24267143], ['2017-06-01', 2285.93391429], ['2017-05-31', 2239.20534286], ['2017-05-30', 
2275.9307], ['2017-05-29', 2192.9808], ['2017-05-28', 2014.0529625], ['2017-05-27', 2211.97685714], ['2017-05-26', 2387.20628571], ['2017-05-25', 2379.19383333], ['2017-05-24', 2287.7102875], ['2017-05-23', 2090.6623125], ['2017-05-22', 2046.5344625], ['2017-05-21', 2052.9097875], ['2017-05-20', 1961.5204875], ['2017-05-19', 1899.0828875], ['2017-05-18', 1807.4850625], ['2017-05-17', 1739.031975], ['2017-05-16', 1723.1269375], ['2017-05-15', 1776.3165], ['2017-05-14', 1771.9200125], ['2017-05-13', 1720.4785], ['2017-05-12', 1820.9905625], ['2017-05-11', 1762.88625], ['2017-05-10', 1721.28497143], ['2017-05-09', 1640.619225], ['2017-05-08', 1535.86842857], ['2017-05-07', 1560.4102], ['2017-05-06', 1533.33507143], ['2017-05-05', 1508.292125], ['2017-05-04', 1507.57685714], ['2017-05-03', 1452.0762875], ['2017-05-02', 1417.1728125], ['2017-05-01', 1353.0045], ['2017-04-30', 1334.9790375], ['2017-04-29', 1331.29442857], ['2017-04-28', 1345.3539125], ['2017-04-27', 1309.109875], ['2017-04-26', 1279.4146875], ['2017-04-25', 1262.902775], ['2017-04-24', 1257.9881125], ['2017-04-23', 1261.311225], ['2017-04-22', 1258.3614125], ['2017-04-21', 1241.686325], ['2017-04-20', 1217.9300875], ['2017-04-19', 1216.18674286], ['2017-04-18', 1205.634875], ['2017-04-17', 1186.9274125], ['2017-04-16', 1184.88067143], ['2017-04-15', 1185.26005714], ['2017-04-14', 1180.0237125], ['2017-04-13', 1218.92205], ['2017-04-12', 1226.6170375], ['2017-04-11', 1207.744875], ['2017-04-10', 1208.8005], ['2017-04-09', 1181.1498375], ['2017-04-08', 1190.45425], ['2017-04-07', 1196.3079375], ['2017-04-06', 1133.07931429], ['2017-04-05', 1141.6003625], ['2017-04-04', 1141.813], ['2017-04-03', 1099.169125], ['2017-04-02', 1086.92957143], ['2017-04-01', 1079.54931429], ['2017-03-31', 1037.90455], ['2017-03-30', 1040.5755], ['2017-03-29', 1046.127625], ['2017-03-28', 1037.22925], ['2017-03-27', 956.7863125], ['2017-03-26', 959.340085714], ['2017-03-25', 941.919714286], ['2017-03-24', 1038.789], ['2017-03-23', 1028.7268625], ['2017-03-22', 1118.63004286], ['2017-03-21', 1049.0844875], ['2017-03-20', 1029.8008125], ['2017-03-19', 952.2323625], ['2017-03-18', 1091.1718875], ['2017-03-17', 1180.94565714], ['2017-03-16', 1257.399625], ['2017-03-15', 1245.37078571], ['2017-03-14', 1239.816225], ['2017-03-13', 1227.494625], ['2017-03-12', 1179.159875], ['2017-03-11', 1098.61712857], ['2017-03-10', 1192.46914286], ['2017-03-09', 1157.3933], ['2017-03-08', 1238.447], ['2017-03-07', 1275.197375], ['2017-03-06', 1270.9333], ['2017-03-05', 1267.0272], ['2017-03-04', 1285.14], ['2017-03-03', 1259.41081667], ['2017-03-02', 1222.4994], ['2017-03-01', 1187.56528571], ['2017-02-28', 1190.75195], ['2017-02-27', 1175.04975], ['2017-02-26', 1150.60571429], ['2017-02-25', 1174.86625], ['2017-02-24', 1172.01715], ['2017-02-23', 1123.2231875], ['2017-02-22', 1123.78842857], ['2017-02-21', 1084.7550125], ['2017-02-20', 1052.77928571], ['2017-02-19', 1056.6371375], ['2017-02-18', 1055.53685], ['2017-02-17', 1035.208125], ['2017-02-16', 1012.3259875], ['2017-02-15', 1011.78025], ['2017-02-14', 999.877375], ['2017-02-13', 1000.604625], ['2017-02-12', 1008.8466625], ['2017-02-11', 999.1035], ['2017-02-10', 976.103], ['2017-02-09', 1052.3766125], ['2017-02-08', 1050.11], ['2017-02-07', 1024.01375], ['2017-02-06', 1014.837725], ['2017-02-05', 1030.9994125], ['2017-02-04', 1013.027], ['2017-02-03', 1007.6137125], ['2017-02-02', 979.703875], ['2017-02-01', 964.706075], ['2017-01-31', 921.179325], ['2017-01-30', 915.933], ['2017-01-29', 920.31225], ['2017-01-28', 
919.27975], ['2017-01-27', 915.95625], ['2017-01-26', 893.045625], ['2017-01-25', 890.320225], ['2017-01-24', 922.0736125], ['2017-01-23', 918.603625], ['2017-01-22', 920.4479], ['2017-01-21', 893.6210875], ['2017-01-20', 895.798875], ['2017-01-19', 874.99], ['2017-01-18', 903.84], ['2017-01-17', 830.5], ['2017-01-16', 822.2], ['2017-01-15', 817.91], ['2017-01-14', 826.29], ['2017-01-13', 803.37], ['2017-01-12', 785.22], ['2017-01-11', 906.05], ['2017-01-10', 894.18], ['2017-01-09', 908.14], ['2017-01-08', 896.83], ['2017-01-07', 883.09], ['2017-01-06', 994.67], ['2017-01-05', 1126.76], ['2017-01-04', 1023.14], ['2017-01-03', 1015.97], ['2017-01-02', 997.72], ['2017-01-01', 959.87], ['2016-12-31', 952.15], ['2016-12-30', 963.38], ['2016-12-29', 967.48], ['2016-12-28', 930.37], ['2016-12-27', 897.33], ['2016-12-26', 886.9], ['2016-12-25', 891.61], ['2016-12-24', 901.31], ['2016-12-23', 860.59], ['2016-12-22', 824.21], ['2016-12-21', 793.09], ['2016-12-20', 789.52], ['2016-12-19', 788.4], ['2016-12-18', 788.7], ['2016-12-17', 781.56], ['2016-12-16', 776.75], ['2016-12-15', 774.89], ['2016-12-14', 778.49], ['2016-12-13', 777.0], ['2016-12-12', 777.0], ['2016-12-11', 773.4], ['2016-12-10', 770.02], ['2016-12-09', 769.72], ['2016-12-08', 766.11], ['2016-12-07', 756.62], ['2016-12-06', 754.63], ['2016-12-05', 764.81], ['2016-12-04', 764.33], ['2016-12-03', 772.43], ['2016-12-02', 752.24], ['2016-12-01', 742.69], ['2016-11-30', 732.71], ['2016-11-29', 733.05], ['2016-11-28', 727.96], ['2016-11-27', 733.67], ['2016-11-26', 739.78], ['2016-11-25', 737.45], ['2016-11-24', 741.63], ['2016-11-23', 748.74], ['2016-11-22', 738.53], ['2016-11-21', 729.06], ['2016-11-20', 750.03], ['2016-11-19', 747.52], ['2016-11-18', 736.96], ['2016-11-17', 736.91], ['2016-11-16', 710.91], ['2016-11-15', 706.46], ['2016-11-14', 701.9], ['2016-11-13', 703.71], ['2016-11-12', 715.45], ['2016-11-11', 713.69], ['2016-11-10', 720.93], ['2016-11-09', 708.97], ['2016-11-08', 703.81], ['2016-11-07', 712.0], ['2016-11-06', 704.79], ['2016-11-05', 703.69], ['2016-11-04', 686.17], ['2016-11-03', 733.33], ['2016-11-02', 728.2], ['2016-11-01', 702.0], ['2016-10-31', 698.0], ['2016-10-30', 714.89], ['2016-10-29', 687.68], ['2016-10-28', 682.22], ['2016-10-27', 672.22], ['2016-10-26', 655.31], ['2016-10-25', 651.39], ['2016-10-24', 653.0], ['2016-10-23', 655.48], ['2016-10-22', 631.92], ['2016-10-21', 630.22], ['2016-10-20', 629.25], ['2016-10-19', 636.29], ['2016-10-18', 638.18], ['2016-10-17', 641.42], ['2016-10-16', 637.94], ['2016-10-15', 639.56], ['2016-10-14', 635.96], ['2016-10-13', 635.01], ['2016-10-12', 639.3], ['2016-10-11', 617.54], ['2016-10-10', 615.65], ['2016-10-09', 618.04], ['2016-10-08', 617.21], ['2016-10-07', 612.08], ['2016-10-06', 612.35], ['2016-10-05', 609.62], ['2016-10-04', 611.85], ['2016-10-03', 610.51], ['2016-10-02', 614.82], ['2016-10-01', 609.39], ['2016-09-30', 606.36], ['2016-09-29', 605.67], ['2016-09-28', 605.96], ['2016-09-27', 608.14], ['2016-09-26', 601.74], ['2016-09-25', 603.88], ['2016-09-24', 604.22], ['2016-09-23', 597.42], ['2016-09-22', 598.88], ['2016-09-21', 609.74], ['2016-09-20', 610.19], ['2016-09-19', 611.58], ['2016-09-18', 608.0], ['2016-09-17', 609.11], ['2016-09-16', 610.38], ['2016-09-15', 612.08], ['2016-09-14', 610.92], ['2016-09-13', 609.67], ['2016-09-12', 608.15], ['2016-09-11', 625.76], ['2016-09-10', 625.07], ['2016-09-09', 627.77], ['2016-09-08', 614.46], ['2016-09-07', 611.09], ['2016-09-06', 608.1], ['2016-09-05', 610.59], ['2016-09-04', 600.88], ['2016-09-03', 
576.21], ['2016-09-02', 572.81], ['2016-09-01', 574.82], ['2016-08-31', 578.61], ['2016-08-30', 575.22], ['2016-08-29', 576.53], ['2016-08-28', 571.54], ['2016-08-27', 581.07], ['2016-08-26', 579.86], ['2016-08-25', 582.84], ['2016-08-24', 586.18], ['2016-08-23', 588.97], ['2016-08-22', 582.82], ['2016-08-21', 583.2], ['2016-08-20', 576.65], ['2016-08-19', 576.03], ['2016-08-18', 575.51], ['2016-08-17', 581.11], ['2016-08-16', 570.14], ['2016-08-15', 570.97], ['2016-08-14', 586.43], ['2016-08-13', 587.75], ['2016-08-12', 589.24], ['2016-08-11', 592.18], ['2016-08-10', 588.77], ['2016-08-09', 592.69], ['2016-08-08', 594.27], ['2016-08-07', 588.23], ['2016-08-06', 576.55], ['2016-08-05', 577.31], ['2016-08-04', 565.05], ['2016-08-03', 515.06], ['2016-08-02', 606.32], ['2016-08-01', 628.01], ['2016-07-31', 654.73], ['2016-07-30', 656.88], ['2016-07-29', 655.32], ['2016-07-28', 654.98], ['2016-07-27', 655.06], ['2016-07-26', 654.69], ['2016-07-25', 661.41], ['2016-07-24', 655.84], ['2016-07-23', 651.58], ['2016-07-22', 664.88], ['2016-07-21', 665.64], ['2016-07-20', 672.84], ['2016-07-19', 673.2], ['2016-07-18', 675.45], ['2016-07-17', 662.15], ['2016-07-16', 664.57], ['2016-07-15', 658.83], ['2016-07-14', 657.45], ['2016-07-13', 672.76], ['2016-07-12', 647.98], ['2016-07-11', 648.09], ['2016-07-10', 651.86], ['2016-07-09', 659.55], ['2016-07-08', 636.35], ['2016-07-07', 673.78], ['2016-07-06', 666.48], ['2016-07-05', 676.32], ['2016-07-04', 658.41], ['2016-07-03', 701.71], ['2016-07-02', 675.18], ['2016-07-01', 671.4], ['2016-06-30', 637.95], ['2016-06-29', 646.37], ['2016-06-28', 646.7], ['2016-06-27', 627.39], ['2016-06-26', 664.9], ['2016-06-25', 660.6], ['2016-06-24', 624.54], ['2016-06-23', 587.45], ['2016-06-22', 665.95], ['2016-06-21', 729.19], ['2016-06-20', 760.81], ['2016-06-19', 754.36], ['2016-06-18', 743.91], ['2016-06-17', 762.43], ['2016-06-16', 692.28], ['2016-06-15', 685.24], ['2016-06-14', 702.0665], ['2016-06-13', 664.8465], ['2016-06-12', 594.4399875], ['2016-06-11', 577.549875], ['2016-06-10', 575.2941375], ['2016-06-09', 578.68], ['2016-06-08', 577.54], ['2016-06-07', 584.5], ['2016-06-06', 574.02], ['2016-06-05', 573.74], ['2016-06-04', 568.0], ['2016-06-03', 539.99], ['2016-06-02', 539.47], ['2016-06-01', 531.15], ['2016-05-31', 525.15], ['2016-05-30', 512.16], ['2016-05-29', 522.43], ['2016-05-28', 470.29], ['2016-05-27', 452.49], ['2016-05-26', 447.94], ['2016-05-25', 445.34], ['2016-05-24', 442.17], ['2016-05-23', 438.0], ['2016-05-22', 443.89], ['2016-05-21', 441.25], ['2016-05-20', 441.87], ['2016-05-19', 452.9], ['2016-05-18', 452.89], ['2016-05-17', 454.88], ['2016-05-16', 457.08], ['2016-05-15', 455.65], ['2016-05-14', 456.82], ['2016-05-13', 454.7], ['2016-05-12', 452.51], ['2016-05-11', 450.0], ['2016-05-10', 461.63], ['2016-05-09', 457.82], ['2016-05-08', 459.04], ['2016-05-07', 460.91], ['2016-05-06', 448.51], ['2016-05-05', 447.37], ['2016-05-04', 450.96], ['2016-05-03', 443.9], ['2016-05-02', 451.9], ['2016-05-01', 447.17], ['2016-04-30', 456.16], ['2016-04-29', 450.0], ['2016-04-28', 447.31], ['2016-04-27', 467.98], ['2016-04-26', 463.86], ['2016-04-25', 459.4], ['2016-04-24', 451.29], ['2016-04-23', 447.25], ['2016-04-22', 451.0], ['2016-04-21', 442.34], ['2016-04-20', 435.91], ['2016-04-19', 427.28], ['2016-04-18', 425.71], ['2016-04-17', 429.43], ['2016-04-16', 429.09], ['2016-04-15', 423.76], ['2016-04-14', 424.61], ['2016-04-13', 425.89], ['2016-04-12', 420.69], ['2016-04-11', 421.63], ['2016-04-10', 417.73], ['2016-04-09', 422.2], ['2016-04-08', 
420.33], ['2016-04-07', 421.09], ['2016-04-06', 421.77], ['2016-04-05', 418.99], ['2016-04-04', 418.07], ['2016-04-03', 418.07], ['2016-04-02', 416.25], ['2016-04-01', 416.94], ['2016-03-31', 412.35], ['2016-03-30', 414.0], ['2016-03-29', 422.96], ['2016-03-28', 426.8], ['2016-03-27', 415.71], ['2016-03-26', 416.44], ['2016-03-25', 416.47], ['2016-03-24', 417.8], ['2016-03-23', 415.35], ['2016-03-22', 410.49], ['2016-03-21', 410.77], ['2016-03-20', 405.59], ['2016-03-19', 408.87], ['2016-03-18', 418.64], ['2016-03-17', 415.99], ['2016-03-16', 414.78], ['2016-03-15', 414.78], ['2016-03-14', 413.89], ['2016-03-13', 412.44], ['2016-03-12', 421.22], ['2016-03-11', 416.0], ['2016-03-10', 413.1], ['2016-03-09', 412.39], ['2016-03-08', 411.94], ['2016-03-07', 408.86], ['2016-03-06', 401.0], ['2016-03-05', 416.48], ['2016-03-04', 421.44], ['2016-03-03', 429.99], ['2016-03-02', 431.87], ['2016-03-01', 436.18], ['2016-02-29', 433.47], ['2016-02-28', 432.0], ['2016-02-27', 424.34], ['2016-02-26', 421.4], ['2016-02-25', 423.1], ['2016-02-24', 417.0], ['2016-02-23', 438.5], ['2016-02-22', 434.67], ['2016-02-21', 439.48], ['2016-02-20', 418.02], ['2016-02-19', 418.97], ['2016-02-18', 418.04], ['2016-02-17', 406.69], ['2016-02-16', 402.64], ['2016-02-15', 402.38], ['2016-02-14', 388.6], ['2016-02-13', 381.46], ['2016-02-12', 376.75], ['2016-02-11', 378.98], ['2016-02-10', 372.61], ['2016-02-09', 375.8], ['2016-02-08', 373.74], ['2016-02-07', 373.04], ['2016-02-06', 386.49], ['2016-02-05', 385.06], ['2016-02-04', 368.38], ['2016-02-03', 373.48], ['2016-02-02', 372.0], ['2016-02-01', 376.86], ['2016-01-31', 378.24], ['2016-01-30', 377.26], ['2016-01-29', 382.44], ['2016-01-28', 394.45], ['2016-01-27', 394.12], ['2016-01-26', 387.09], ['2016-01-25', 404.75], ['2016-01-24', 388.5], ['2016-01-23', 385.7], ['2016-01-22', 408.0], ['2016-01-21', 408.33], ['2016-01-20', 378.66], ['2016-01-19', 385.28], ['2016-01-18', 385.45], ['2016-01-17', 370.4], ['2016-01-16', 391.62], ['2016-01-15', 431.9], ['2016-01-14', 429.57], ['2016-01-13', 446.66], ['2016-01-12', 447.11], ['2016-01-11', 446.24], ['2016-01-10', 450.15], ['2016-01-09', 447.04], ['2016-01-08', 453.71], ['2016-01-07', 430.75], ['2016-01-06', 431.9], ['2016-01-05', 433.0], ['2016-01-04', 428.13], ['2016-01-03', 433.94], ['2016-01-02', 432.33], ['2016-01-01', 429.34]], 'collapse': None, 'order': None, 'database_id': 893}\n"
],
[
"# To make this clearer, let's iterate over the nested elements in the main dictionary\n\nfor key, value in dict['dataset'].items():\n print (key, value)",
"id 7692468\ndataset_code MKPRU\ndatabase_code BCHAIN\nname Bitcoin Market Price USD\ndescription Data showing the USD market price from Mt.gox\nrefreshed_at 2020-06-13T05:00:45.781Z\nnewest_available_date 2020-06-14\noldest_available_date 2009-01-03\ncolumn_names ['Date', 'Value']\nfrequency daily\ntype Time Series\npremium False\nlimit None\ntransform None\ncolumn_index None\nstart_date 2016-01-01\nend_date 2020-02-29\ndata [['2020-02-29', 8804.72], ['2020-02-28', 8785.52], ['2020-02-27', 9309.15], ['2020-02-26', 9663.75], ['2020-02-25', 9989.39], ['2020-02-24', 9669.63], ['2020-02-23', 9696.58], ['2020-02-22', 9606.86], ['2020-02-21', 9604.72], ['2020-02-20', 10180.65], ['2020-02-19', 9703.93], ['2020-02-18', 9937.67], ['2020-02-17', 9904.17], ['2020-02-16', 10368.53], ['2020-02-15', 10242.43], ['2020-02-14', 10354.3], ['2020-02-13', 10275.38], ['2020-02-12', 9854.79], ['2020-02-11', 10162.41], ['2020-02-10', 9907.12], ['2020-02-09', 9807.54], ['2020-02-08', 9755.66], ['2020-02-07', 9614.9], ['2020-02-06', 9162.14], ['2020-02-05', 9284.51], ['2020-02-04', 9314.56], ['2020-02-03', 9378.09], ['2020-02-02', 9333.77], ['2020-02-01', 9502.37], ['2020-01-31', 9279.81], ['2020-01-30', 9385.69], ['2020-01-29', 8895.78], ['2020-01-28', 8588.42], ['2020-01-27', 8327.36], ['2020-01-26', 8428.17], ['2020-01-25', 8388.11], ['2020-01-24', 8658.94], ['2020-01-23', 8722.26], ['2020-01-22', 8626.47], ['2020-01-21', 8703.36], ['2020-01-20', 8910.85], ['2020-01-19', 8900.34], ['2020-01-18', 8722.03], ['2020-01-17', 8813.89], ['2020-01-16', 8842.42], ['2020-01-15', 8105.24], ['2020-01-14', 8173.97], ['2020-01-13', 8021.49], ['2020-01-12', 8184.66], ['2020-01-11', 7817.92], ['2020-01-10', 8042.65], ['2020-01-09', 8165.47], ['2020-01-08', 7759.24], ['2020-01-07', 7351.57], ['2020-01-06', 7347.89], ['2020-01-05', 7326.35], ['2020-01-04', 6944.33], ['2020-01-03', 7175.68], ['2020-01-02', 7168.31], ['2020-01-01', 7219.6], ['2019-12-31', 7385.36], ['2019-12-30', 7301.07], ['2019-12-29', 7243.93], ['2019-12-28', 7194.4], ['2019-12-27', 7192.72], ['2019-12-26', 7250.69], ['2019-12-25', 7322.08], ['2019-12-24', 7514.41], ['2019-12-23', 7143.2], ['2019-12-22', 7190.17], ['2019-12-21', 7150.86], ['2019-12-20', 7284.29], ['2019-12-19', 6612.12], ['2019-12-18', 6879.54], ['2019-12-17', 7111.14], ['2019-12-16', 7067.74], ['2019-12-15', 7251.87], ['2019-12-14', 7189.16], ['2019-12-13', 7202.31], ['2019-12-12', 7220.76], ['2019-12-11', 7337.42], ['2019-12-10', 7522.39], ['2019-12-09', 7504.83], ['2019-12-08', 7547.19], ['2019-12-07', 7395.97], ['2019-12-06', 7192.85], ['2019-12-05', 7296.77], ['2019-12-04', 7309.59], ['2019-12-03', 7402.69], ['2019-12-02', 7557.72], ['2019-12-01', 7757.47], ['2019-11-30', 7431.0], ['2019-11-29', 7523.83], ['2019-11-28', 7163.63], ['2019-11-27', 7130.25], ['2019-11-26', 6907.4], ['2019-11-25', 7324.03], ['2019-11-24', 7286.35], ['2019-11-23', 7617.07], ['2019-11-22', 8081.81], ['2019-11-21', 8120.8], ['2019-11-20', 8175.99], ['2019-11-19', 8503.93], ['2019-11-18', 8482.7], ['2019-11-17', 8457.69], ['2019-11-16', 8632.32], ['2019-11-15', 8762.42], ['2019-11-14', 8801.52], ['2019-11-13', 8717.81], ['2019-11-12', 9037.12], ['2019-11-11', 8809.41], ['2019-11-10', 8766.04], ['2019-11-09', 9204.24], ['2019-11-08', 9343.34], ['2019-11-07', 9310.19], ['2019-11-06', 9418.05], ['2019-11-05', 9206.16], ['2019-11-04', 9301.18], ['2019-11-03', 9252.99], ['2019-11-02', 9147.98], ['2019-11-01', 9164.62], ['2019-10-31', 9433.35], ['2019-10-30', 9218.76], ['2019-10-29', 9551.54], ['2019-10-28', 9259.8], 
['2019-10-27', 8666.79], ['2019-10-26', 7431.88], ['2019-10-25', 7469.56], ['2019-10-24', 8026.76], ['2019-10-23', 8222.52], ['2019-10-22', 8231.06], ['2019-10-21', 7960.04], ['2019-10-20', 7954.15], ['2019-10-19', 8076.78], ['2019-10-18', 8002.51], ['2019-10-17', 8162.16], ['2019-10-16', 8353.54], ['2019-10-15', 8283.76], ['2019-10-14', 8308.01], ['2019-10-13', 8269.73], ['2019-10-12', 8586.9], ['2019-10-11', 8587.92], ['2019-10-10', 8190.0], ['2019-10-09', 8212.02], ['2019-10-08', 7869.74], ['2019-10-07', 8147.69], ['2019-10-06', 8155.48], ['2019-10-05', 8236.17], ['2019-10-04', 8382.03], ['2019-10-03', 8322.92], ['2019-10-02', 8307.74], ['2019-10-01', 8056.74], ['2019-09-30', 8225.0], ['2019-09-29', 8193.9], ['2019-09-28', 8055.64], ['2019-09-27', 8432.23], ['2019-09-26', 8553.61], ['2019-09-25', 9683.38], ['2019-09-24', 10033.05], ['2019-09-23', 9979.49], ['2019-09-22', 10173.11], ['2019-09-21', 10275.88], ['2019-09-20', 10157.59], ['2019-09-19', 10190.36], ['2019-09-18', 10265.63], ['2019-09-17', 10310.43], ['2019-09-16', 10361.33], ['2019-09-15', 10363.9], ['2019-09-14', 10420.16], ['2019-09-13', 10159.32], ['2019-09-12', 10101.03], ['2019-09-11', 10313.66], ['2019-09-10', 10406.31], ['2019-09-09', 10487.21], ['2019-09-08', 10317.47], ['2019-09-07', 10577.8], ['2019-09-06', 10584.16], ['2019-09-05', 10621.29], ['2019-09-04', 10386.64], ['2019-09-03', 9769.79], ['2019-09-02', 9600.9], ['2019-09-01', 9577.99], ['2019-08-31', 9484.55], ['2019-08-30', 9717.82], ['2019-08-29', 10171.95], ['2019-08-28', 10360.28], ['2019-08-27', 10135.06], ['2019-08-26', 10143.8], ['2019-08-25', 10405.81], ['2019-08-24', 10111.98], ['2019-08-23', 10129.4], ['2019-08-22', 10760.56], ['2019-08-21', 10917.26], ['2019-08-20', 10317.6], ['2019-08-19', 10214.52], ['2019-08-18', 10359.44], ['2019-08-17', 10302.17], ['2019-08-16', 10016.96], ['2019-08-15', 10858.12], ['2019-08-14', 11386.26], ['2019-08-13', 11566.84], ['2019-08-12', 11282.22], ['2019-08-11', 11856.64], ['2019-08-10', 11996.41], ['2019-08-09', 11960.82], ['2019-08-08', 11465.67], ['2019-08-07', 11787.99], ['2019-08-06', 10980.23], ['2019-08-05', 10814.57], ['2019-08-04', 10529.55], ['2019-08-03', 10407.17], ['2019-08-02', 10084.7], ['2019-08-01', 9589.13], ['2019-07-31', 9501.33], ['2019-07-30', 9530.0], ['2019-07-29', 9473.99], ['2019-07-28', 9848.65], ['2019-07-27', 9875.17], ['2019-07-26', 9772.17], ['2019-07-25', 9844.3], ['2019-07-24', 10323.62], ['2019-07-23', 10587.41], ['2019-07-22', 10764.9], ['2019-07-21', 10535.75], ['2019-07-20', 10638.56], ['2019-07-19', 9674.28], ['2019-07-18', 9397.72], ['2019-07-17', 10873.5], ['2019-07-16', 10186.96], ['2019-07-15', 11389.1], ['2019-07-14', 11803.97], ['2019-07-13', 11352.87], ['2019-07-12', 12099.9], ['2019-07-11', 12586.78], ['2019-07-10', 12313.08], ['2019-07-09', 11477.01], ['2019-07-08', 11232.04], ['2019-07-07', 10955.16], ['2019-07-06', 11160.49], ['2019-07-05', 11984.76], ['2019-07-04', 10805.4], ['2019-07-03', 10578.72], ['2019-07-02', 10737.73], ['2019-07-01', 11890.38], ['2019-06-30', 12374.35], ['2019-06-29', 11132.85], ['2019-06-28', 12932.55], ['2019-06-27', 11766.4], ['2019-06-26', 11017.5], ['2019-06-25', 10814.48], ['2019-06-24', 10668.63], ['2019-06-23', 10209.38], ['2019-06-22', 9531.35], ['2019-06-21', 9281.7], ['2019-06-20', 9076.18], ['2019-06-19', 9327.48], ['2019-06-18', 8978.99], ['2019-06-17', 8859.47], ['2019-06-16', 8693.6], ['2019-06-15', 8237.88], ['2019-06-14', 8173.06], ['2019-06-13', 7917.58], ['2019-06-12', 8020.38], ['2019-06-11', 7640.61], ['2019-06-10', 
7931.34], ['2019-06-09', 7998.29], ['2019-06-08', 7803.23], ['2019-06-07', 7786.04], ['2019-06-06', 7675.8], ['2019-06-05', 8134.92], ['2019-06-04', 8739.03], ['2019-06-03', 8554.8], ['2019-06-02', 8553.81], ['2019-06-01', 8272.46], ['2019-05-31', 8662.98], ['2019-05-30', 8719.88], ['2019-05-29', 8770.06], ['2019-05-28', 8744.42], ['2019-05-27', 8071.45], ['2019-05-26', 8003.26], ['2019-05-25', 7880.29], ['2019-05-24', 7625.93], ['2019-05-23', 7952.5], ['2019-05-22', 8007.03], ['2019-05-21', 8193.7], ['2019-05-20', 7265.05], ['2019-05-19', 7362.22], ['2019-05-18', 7885.96], ['2019-05-17', 8206.9], ['2019-05-16', 7992.69], ['2019-05-15', 7823.11], ['2019-05-14', 6978.63], ['2019-05-13', 7248.31], ['2019-05-12', 6348.02], ['2019-05-11', 6146.91], ['2019-05-10', 5936.72], ['2019-05-09', 5755.72], ['2019-05-08', 5684.47], ['2019-05-07', 5717.66], ['2019-05-06', 5771.08], ['2019-05-05', 5657.14], ['2019-05-04', 5390.16], ['2019-05-03', 5326.67], ['2019-05-02', 5269.79], ['2019-05-01', 5260.65], ['2019-04-30', 5301.29], ['2019-04-29', 5280.19], ['2019-04-28', 5292.83], ['2019-04-27', 5195.61], ['2019-04-26', 5434.19], ['2019-04-25', 5518.16], ['2019-04-24', 5377.19], ['2019-04-23', 5281.83], ['2019-04-22', 5309.28], ['2019-04-21', 5277.04], ['2019-04-20', 5276.31], ['2019-04-19', 5220.37], ['2019-04-18', 5196.65], ['2019-04-17', 5036.18], ['2019-04-16', 5152.51], ['2019-04-15', 5064.62], ['2019-04-14', 5072.85], ['2019-04-13', 5035.02], ['2019-04-12', 5310.18], ['2019-04-11', 5175.81], ['2019-04-10', 5268.71], ['2019-04-09', 5189.39], ['2019-04-08', 5053.52], ['2019-04-07', 5028.77], ['2019-04-06', 4911.24], ['2019-04-05', 4959.81], ['2019-04-04', 4882.88], ['2019-04-03', 4152.53], ['2019-04-02', 4114.16], ['2019-04-01', 4114.44], ['2019-03-31', 4115.55], ['2019-03-30', 4037.37], ['2019-03-29', 4048.51], ['2019-03-28', 3947.74], ['2019-03-27', 3935.89], ['2019-03-26', 3994.11], ['2019-03-25', 4011.92], ['2019-03-24', 3999.06], ['2019-03-23', 3992.34], ['2019-03-22', 4056.4], ['2019-03-21', 4029.11], ['2019-03-20', 3998.77], ['2019-03-19', 3994.92], ['2019-03-18', 4015.95], ['2019-03-17', 3936.5], ['2019-03-16', 3885.21], ['2019-03-15', 3877.77], ['2019-03-14', 3892.51], ['2019-03-13', 3881.09], ['2019-03-12', 3928.17], ['2019-03-11', 3950.56], ['2019-03-10', 3875.96], ['2019-03-09', 3886.82], ['2019-03-08', 3876.85], ['2019-03-07', 3872.25], ['2019-03-06', 3730.98], ['2019-03-05', 3814.58], ['2019-03-04', 3831.44], ['2019-03-03', 3834.62], ['2019-03-02', 3850.08], ['2019-03-01', 3833.23], ['2019-02-28', 3831.45], ['2019-02-27', 3848.33], ['2019-02-26', 3766.89], ['2019-02-25', 4139.6], ['2019-02-24', 3983.48], ['2019-02-23', 3944.88], ['2019-02-22', 3987.18], ['2019-02-21', 3926.53], ['2019-02-20', 3910.8], ['2019-02-19', 3670.74], ['2019-02-18', 3619.55], ['2019-02-17', 3608.2], ['2019-02-16', 3600.31], ['2019-02-15', 3614.5], ['2019-02-14', 3632.42], ['2019-02-13', 3629.47], ['2019-02-12', 3686.29], ['2019-02-11', 3664.38], ['2019-02-10', 3661.08], ['2019-02-09', 3394.76], ['2019-02-08', 3405.7], ['2019-02-07', 3467.34], ['2019-02-06', 3452.35], ['2019-02-05', 3456.02], ['2019-02-04', 3506.22], ['2019-02-03', 3475.33], ['2019-02-02', 3444.81], ['2019-02-01', 3470.0], ['2019-01-31', 3421.6], ['2019-01-30', 3448.58], ['2019-01-29', 3554.5], ['2019-01-28', 3576.3], ['2019-01-27', 3577.79], ['2019-01-26', 3586.25], ['2019-01-25', 3566.4], ['2019-01-24', 3587.35], ['2019-01-23', 3540.02], ['2019-01-22', 3548.4275], ['2019-01-21', 3620.1275], ['2019-01-20', 3690.52333333], ['2019-01-19', 3632.395], 
['2019-01-18', 3619.96416667], ['2019-01-17', 3621.27083333], ['2019-01-16', 3656.785], ['2019-01-15', 3604.1175], ['2019-01-14', 3599.84166667], ['2019-01-13', 3646.34583333], ['2019-01-12', 3656.73583333], ['2019-01-11', 3812.58583333], ['2019-01-10', 4034.13833333], ['2019-01-09', 4035.855], ['2019-01-08', 4036.09333333], ['2019-01-07', 3920.45666667], ['2019-01-06', 3868.4875], ['2019-01-05', 3822.62666667], ['2019-01-04', 3865.7975], ['2019-01-03', 3867.13833333], ['2019-01-02', 3752.27166667], ['2019-01-01', 3791.54583333], ['2018-12-31', 3832.92166667], ['2018-12-30', 3912.28583333], ['2018-12-29', 3743.905], ['2018-12-28', 3747.83916667], ['2018-12-27', 3825.37916667], ['2018-12-26', 3813.88], ['2018-12-25', 4178.59083333], ['2018-12-24', 4027.47833333], ['2018-12-23', 3910.97333333], ['2018-12-22', 4015.60916667], ['2018-12-21', 3982.85083333], ['2018-12-20', 3808.0425], ['2018-12-19', 3567.47], ['2018-12-18', 3392.405], ['2018-12-17', 3271.23833333], ['2018-12-16', 3225.29916667], ['2018-12-15', 3278.37416667], ['2018-12-14', 3406.7625], ['2018-12-13', 3462.04], ['2018-12-12', 3426.19], ['2018-12-11', 3523.96], ['2018-12-10', 3528.80333333], ['2018-12-09', 3435.34], ['2018-12-08', 3405.64333333], ['2018-12-07', 3742.94333333], ['2018-12-06', 3858.34916667], ['2018-12-05', 3961.49333333], ['2018-12-04', 3967.52416667], ['2018-12-03', 4167.54666667], ['2018-12-02', 4116.7775], ['2018-12-01', 4106.87166667], ['2018-11-30', 4263.78333333], ['2018-11-29', 4103.45384615], ['2018-11-28', 3751.66833333], ['2018-11-27', 3920.53666667], ['2018-11-26', 3823.51166667], ['2018-11-25', 4293.84083333], ['2018-11-24', 4309.3375], ['2018-11-23', 4548.7975], ['2018-11-22', 4533.68083333], ['2018-11-21', 4671.97], ['2018-11-20', 5303.9425], ['2018-11-19', 5606.04416667], ['2018-11-18', 5558.24333333], ['2018-11-17', 5596.1925], ['2018-11-16', 5615.18], ['2018-11-15', 6176.155], ['2018-11-14', 6372.06333333], ['2018-11-13', 6401.93666667], ['2018-11-12', 6378.26833333], ['2018-11-11', 6399.03333333], ['2018-11-10', 6411.28083333], ['2018-11-09', 6486.25166667], ['2018-11-08', 6538.79], ['2018-11-07', 6445.35416667], ['2018-11-06', 6436.965], ['2018-11-05', 6391.87333333], ['2018-11-04', 6363.79583333], ['2018-11-03', 6387.67416667], ['2018-11-02', 6342.28083333], ['2018-11-01', 6310.28416667], ['2018-10-31', 6309.45285714], ['2018-10-30', 6382.66833333], ['2018-10-29', 6448.22166667], ['2018-10-28', 6465.9175], ['2018-10-27', 6473.75333333], ['2018-10-26', 6478.0825], ['2018-10-25', 6508.31], ['2018-10-24', 6481.426], ['2018-10-23', 6498.48583333], ['2018-10-22', 6531.60166667], ['2018-10-21', 6488.82583333], ['2018-10-20', 6487.44416667], ['2018-10-19', 6568.04076923], ['2018-10-18', 6596.27615385], ['2018-10-17', 6596.61833333], ['2018-10-16', 6452.57166667], ['2018-10-15', 6299.39916667], ['2018-10-14', 6260.64583333], ['2018-10-13', 6260.53083333], ['2018-10-12', 6248.63583333], ['2018-10-11', 6563.00916667], ['2018-10-10', 6621.71166667], ['2018-10-09', 6618.56769231], ['2018-10-08', 6558.5375], ['2018-10-07', 6581.48666667], ['2018-10-06', 6568.54916667], ['2018-10-05', 6563.62833333], ['2018-10-04', 6470.4025], ['2018-10-03', 6562.64166667], ['2018-10-02', 6590.96833333], ['2018-10-01', 6593.135], ['2018-09-30', 6550.47416667], ['2018-09-29', 6677.3425], ['2018-09-28', 6535.47666667], ['2018-09-27', 6468.63166667], ['2018-09-26', 6412.45916667], ['2018-09-25', 6639.30416667], ['2018-09-24', 6710.445], ['2018-09-23', 6709.3125], ['2018-09-22', 6669.99083333], ['2018-09-21', 6418.56266667], 
['2018-09-20', 6335.82666667], ['2018-09-19', 6296.63166667], ['2018-09-18', 6400.60083333], ['2018-09-17', 6480.64416667], ['2018-09-16', 6518.655], ['2018-09-15', 6499.0625], ['2018-09-14', 6450.17923077], ['2018-09-13', 6273.1375], ['2018-09-12', 6296.32083333], ['2018-09-11', 6297.87769231], ['2018-09-10', 6286.42583333], ['2018-09-09', 6366.1075], ['2018-09-08', 6444.80416667], ['2018-09-07', 6433.27166667], ['2018-09-06', 7113.06923077], ['2018-09-05', 7326.8525], ['2018-09-04', 7260.94923077], ['2018-09-03', 7247.93538462], ['2018-09-02', 7100.94666667], ['2018-09-01', 6981.94615385], ['2018-08-31', 6932.6625], ['2018-08-30', 7054.27642857], ['2018-08-29', 7000.04], ['2018-08-28', 6719.26615385], ['2018-08-27', 6673.27416667], ['2018-08-26', 6719.42923077], ['2018-08-25', 6543.64571429], ['2018-08-24', 6434.88166667], ['2018-08-23', 6575.22916667], ['2018-08-22', 6401.24615385], ['2018-08-21', 6434.55916667], ['2018-08-20', 6404.06333333], ['2018-08-19', 6436.72083333], ['2018-08-18', 6476.9], ['2018-08-17', 6342.62923077], ['2018-08-16', 6362.67692308], ['2018-08-15', 6050.9425], ['2018-08-14', 6347.07], ['2018-08-13', 6311.13166667], ['2018-08-12', 6195.65333333], ['2018-08-11', 6396.49466667], ['2018-08-10', 6396.7725], ['2018-08-09', 6450.36692308], ['2018-08-08', 6993.51333333], ['2018-08-07', 6988.07916667], ['2018-08-06', 6998.71833333], ['2018-08-05', 7247.76916667], ['2018-08-04', 7394.49916667], ['2018-08-03', 7593.14916667], ['2018-08-02', 7570.86916667], ['2018-08-01', 7916.80083333], ['2018-07-31', 8143.14833333], ['2018-07-30', 8206.34166667], ['2018-07-29', 8182.25166667], ['2018-07-28', 8025.2575], ['2018-07-27', 8187.32416667], ['2018-07-26', 8251.165], ['2018-07-25', 8112.93], ['2018-07-24', 7689.88416667], ['2018-07-23', 7451.28916667], ['2018-07-22', 7352.49538462], ['2018-07-21', 7428.045], ['2018-07-20', 7396.40166667], ['2018-07-19', 7398.66166667], ['2018-07-18', 6869.91083333], ['2018-07-17', 6514.39083333], ['2018-07-16', 6316.88166667], ['2018-07-15', 6241.0], ['2018-07-14', 6244.3575], ['2018-07-13', 6233.595], ['2018-07-12', 6377.36333333], ['2018-07-11', 6510.79166667], ['2018-07-10', 6723.8725], ['2018-07-09', 6753.55916667], ['2018-07-08', 6594.28166667], ['2018-07-07', 6569.49615385], ['2018-07-06', 6603.37666667], ['2018-07-05', 6593.28916667], ['2018-07-04', 6613.68583333], ['2018-07-03', 6466.06916667], ['2018-07-02', 6374.75416667], ['2018-07-01', 6381.39083333], ['2018-06-30', 5908.7025], ['2018-06-29', 6107.89615385], ['2018-06-28', 6105.295], ['2018-06-27', 6218.595], ['2018-06-26', 6211.4475], ['2018-06-25', 6037.00833333], ['2018-06-24', 6141.60583333], ['2018-06-23', 6332.57333333], ['2018-06-22', 6733.90166667], ['2018-06-21', 6714.71833333], ['2018-06-20', 6737.0], ['2018-06-19', 6713.488], ['2018-06-18', 6464.41166667], ['2018-06-17', 6509.42666667], ['2018-06-16', 6434.088], ['2018-06-15', 6647.01333333], ['2018-06-14', 6315.7], ['2018-06-13', 6535.08166667], ['2018-06-12', 6875.67833333], ['2018-06-11', 6776.88833333], ['2018-06-10', 7564.40833333], ['2018-06-09', 7620.15333333], ['2018-06-08', 7676.27166667], ['2018-06-07', 7655.61333333], ['2018-06-06', 7615.7221421], ['2018-06-05', 7500.27333333], ['2018-06-04', 7699.886], ['2018-06-03', 7645.14666667], ['2018-06-02', 7535.14666667], ['2018-06-01', 7491.434], ['2018-05-31', 7385.395], ['2018-05-30', 7459.87666667], ['2018-05-29', 7130.54166667], ['2018-05-28', 7361.83166667], ['2018-05-27', 7342.90833333], ['2018-05-26', 7445.23833333], ['2018-05-25', 7566.19333333], ['2018-05-24', 
7555.74], ['2018-05-23', 8015.58], ['2018-05-22', 8385.55618693], ['2018-05-21', 8507.40666667], ['2018-05-20', 8223.28833333], ['2018-05-19', 8240.055], ['2018-05-18', 8106.11833333], ['2018-05-17', 8340.70333333], ['2018-05-16', 8511.458], ['2018-05-15', 8652.03833333], ['2018-05-14', 8727.65166667], ['2018-05-13', 8484.34666667], ['2018-05-12', 8468.788], ['2018-05-11', 9101.48333333], ['2018-05-10', 9322.04166667], ['2018-05-09', 9228.60833333], ['2018-05-08', 9345.69], ['2018-05-07', 9630.13627668], ['2018-05-06', 9803.30666667], ['2018-05-05', 9710.73], ['2018-05-04', 9639.26833333], ['2018-05-03', 9221.426], ['2018-05-02', 9075.13666667], ['2018-05-01', 9259.57], ['2018-04-30', 9334.28166667], ['2018-04-29', 9326.17333333], ['2018-04-28', 9010.32], ['2018-04-27', 9258.39833333], ['2018-04-26', 8995.50666667], ['2018-04-25', 9555.542], ['2018-04-24', 8933.86166667], ['2018-04-23', 8838.54833333], ['2018-04-22', 8807.205], ['2018-04-21', 8852.71833333], ['2018-04-20', 8254.625], ['2018-04-19', 8164.93742577], ['2018-04-18', 7895.41692596], ['2018-04-17', 8043.80333333], ['2018-04-16', 8340.74833333], ['2018-04-15', 8036.51105141], ['2018-04-14', 7895.40833333], ['2018-04-13', 7847.845], ['2018-04-12', 6926.26666667], ['2018-04-11', 6787.57166667], ['2018-04-10', 6699.27333333], ['2018-04-09', 7017.65666667], ['2018-04-08', 6927.688], ['2018-04-07', 6603.87666667], ['2018-04-06', 6826.51], ['2018-04-05', 6787.76166667], ['2018-04-04', 7410.435], ['2018-04-03', 7035.84833333], ['2018-04-02', 6794.105], ['2018-04-01', 6935.48], ['2018-03-31', 6882.53166667], ['2018-03-30', 7172.28], ['2018-03-29', 7960.38], ['2018-03-28', 7876.195], ['2018-03-27', 8197.54833333], ['2018-03-26', 8617.29666667], ['2018-03-25', 8662.37833333], ['2018-03-24', 8686.82666667], ['2018-03-23', 8690.40833333], ['2018-03-22', 8947.75333333], ['2018-03-21', 8986.94833333], ['2018-03-20', 8412.03333333], ['2018-03-19', 8171.415], ['2018-03-18', 7993.67464364], ['2018-03-17', 8530.402], ['2018-03-16', 8358.12166667], ['2018-03-15', 8151.53166667], ['2018-03-14', 9154.7], ['2018-03-13', 9182.84333333], ['2018-03-12', 9761.39666667], ['2018-03-11', 8746.002], ['2018-03-10', 9089.27833333], ['2018-03-09', 9429.11166667], ['2018-03-08', 10118.058], ['2018-03-07', 10763.1983333], ['2018-03-06', 11595.54], ['2018-03-05', 11430.1816667], ['2018-03-04', 11326.9483333], ['2018-03-03', 11055.815], ['2018-03-02', 11009.3816667], ['2018-03-01', 10370.165], ['2018-02-28', 10763.8833333], ['2018-02-27', 10348.6033333], ['2018-02-26', 9696.59333333], ['2018-02-25', 9697.956], ['2018-02-24', 10162.1166667], ['2018-02-23', 9931.07166667], ['2018-02-22', 10532.7916667], ['2018-02-21', 11390.3916667], ['2018-02-20', 11110.965], ['2018-02-19', 10503.2983333], ['2018-02-18', 10841.9916667], ['2018-02-17', 10127.1616667], ['2018-02-16', 9977.154], ['2018-02-15', 9334.63333333], ['2018-02-14', 8597.7675], ['2018-02-13', 8811.34333333], ['2018-02-12', 8343.455], ['2018-02-11', 8319.87656618], ['2018-02-10', 8535.51666667], ['2018-02-09', 8240.53666667], ['2018-02-08', 8099.95833333], ['2018-02-07', 7685.63333333], ['2018-02-06', 6838.81666667], ['2018-02-05', 8400.64833333], ['2018-02-04', 9076.67833333], ['2018-02-03', 8901.90166667], ['2018-02-02', 9083.25833333], ['2018-02-01', 10125.0133333], ['2018-01-31', 10184.0616667], ['2018-01-30', 11212.655], ['2018-01-29', 11765.71], ['2018-01-28', 11524.7766667], ['2018-01-27', 10969.815], ['2018-01-26', 11214.44], ['2018-01-25', 11282.2583333], ['2018-01-24', 11223.064], ['2018-01-23', 
10544.5933333], ['2018-01-22', 11505.228], ['2018-01-21', 12950.7933333], ['2018-01-20', 11422.44], ['2018-01-19', 11345.4233333], ['2018-01-18', 11116.9466667], ['2018-01-17', 11180.9983333], ['2018-01-16', 14012.196], ['2018-01-15', 13852.92], ['2018-01-14', 14499.7733333], ['2018-01-13', 13912.882], ['2018-01-12', 13296.794], ['2018-01-11', 15126.3983333], ['2018-01-10', 14714.2533333], ['2018-01-09', 15265.9066667], ['2018-01-08', 16651.4716667], ['2018-01-07', 17319.198], ['2018-01-06', 17174.12], ['2018-01-05', 15199.355], ['2018-01-04', 15053.2616667], ['2018-01-03', 15005.8566667], ['2018-01-02', 13812.1866667], ['2018-01-01', 14165.575], ['2017-12-31', 13215.574], ['2017-12-30', 14640.14], ['2017-12-29', 14380.5816667], ['2017-12-28', 15589.3216667], ['2017-12-27', 15999.0483333], ['2017-12-26', 14119.0283333], ['2017-12-25', 13949.175], ['2017-12-24', 15360.2616667], ['2017-12-23', 15190.945], ['2017-12-22', 16047.51], ['2017-12-21', 16026.2716667], ['2017-12-20', 17737.1116667], ['2017-12-19', 18961.8566667], ['2017-12-18', 19289.785], ['2017-12-17', 19498.6833333], ['2017-12-16', 17771.9], ['2017-12-15', 16678.892], ['2017-12-14', 16808.3666667], ['2017-12-13', 17276.3933333], ['2017-12-12', 16762.1166667], ['2017-12-11', 14869.805], ['2017-12-10', 15142.8341521], ['2017-12-09', 16007.4366667], ['2017-12-08', 16501.9716667], ['2017-12-07', 13540.98], ['2017-12-06', 11878.4333333], ['2017-12-05', 11584.83], ['2017-12-04', 11332.622], ['2017-12-03', 11071.3683333], ['2017-12-02', 10883.912], ['2017-12-01', 10147.372], ['2017-11-30', 9879.32833333], ['2017-11-29', 9952.50882], ['2017-11-28', 9718.29505], ['2017-11-27', 9284.1438], ['2017-11-26', 8707.40726667], ['2017-11-25', 8250.97833333], ['2017-11-24', 8148.95], ['2017-11-23', 8268.035], ['2017-11-22', 8059.8], ['2017-11-21', 8255.59681667], ['2017-11-20', 8007.65406667], ['2017-11-19', 7817.14038333], ['2017-11-18', 7786.88436667], ['2017-11-17', 7815.0307], ['2017-11-16', 7301.42992], ['2017-11-15', 6635.41263333], ['2017-11-14', 6550.22753333], ['2017-11-13', 5716.30158333], ['2017-11-12', 6362.85103333], ['2017-11-11', 6719.39785], ['2017-11-10', 7158.03706], ['2017-11-09', 7415.87825], ['2017-11-08', 7092.12723333], ['2017-11-07', 6989.07166667], ['2017-11-06', 7377.01236667], ['2017-11-05', 7437.54331667], ['2017-11-04', 7197.72006], ['2017-11-03', 7068.0201], ['2017-11-02', 6665.30668333], ['2017-11-01', 6388.64516667], ['2017-10-31', 6105.87422], ['2017-10-30', 6155.43402], ['2017-10-29', 5776.69695], ['2017-10-28', 5772.50498333], ['2017-10-27', 5893.13841667], ['2017-10-26', 5669.62253333], ['2017-10-25', 5505.82776667], ['2017-10-24', 5876.07986667], ['2017-10-23', 5983.18455], ['2017-10-22', 6020.37168333], ['2017-10-21', 5979.45984], ['2017-10-20', 5727.6335], ['2017-10-19', 5546.1761], ['2017-10-18', 5603.71294], ['2017-10-17', 5711.20586667], ['2017-10-16', 5647.31166667], ['2017-10-15', 5739.43873333], ['2017-10-14', 5563.80656667], ['2017-10-13', 5325.13068333], ['2017-10-12', 4819.48576667], ['2017-10-11', 4782.28], ['2017-10-10', 4777.96781667], ['2017-10-09', 4602.28088333], ['2017-10-08', 4376.19166667], ['2017-10-07', 4345.60333333], ['2017-10-06', 4338.852], ['2017-10-05', 4225.175], ['2017-10-04', 4293.3066], ['2017-10-03', 4386.88375], ['2017-10-02', 4360.72296667], ['2017-10-01', 4335.36831667], ['2017-09-30', 4193.57466667], ['2017-09-29', 4201.98905], ['2017-09-28', 4202.55498333], ['2017-09-27', 3910.30738333], ['2017-09-26', 3942.555], ['2017-09-25', 3703.04065], ['2017-09-24', 3776.3869], 
['2017-09-23', 3637.50255], ['2017-09-22', 3658.89818333], ['2017-09-21', 3977.56166667], ['2017-09-20', 3943.41333333], ['2017-09-19', 4093.31666667], ['2017-09-18', 3746.06078333], ['2017-09-17', 3763.62604], ['2017-09-16', 3774.26528333], ['2017-09-15', 3319.63], ['2017-09-14', 3961.27126667], ['2017-09-13', 4219.03661667], ['2017-09-12', 4248.09001667], ['2017-09-11', 4329.955], ['2017-09-10', 4375.55952], ['2017-09-09', 4310.75018333], ['2017-09-08', 4654.6585], ['2017-09-07', 4641.82201667], ['2017-09-06', 4488.72014], ['2017-09-05', 4344.09831667], ['2017-09-04', 4648.15998333], ['2017-09-03', 4580.38748], ['2017-09-02', 4911.74001667], ['2017-09-01', 4748.255], ['2017-08-31', 4594.98785], ['2017-08-30', 4607.98545], ['2017-08-29', 4391.67351667], ['2017-08-28', 4354.30833333], ['2017-08-27', 4360.51331667], ['2017-08-26', 4363.05445], ['2017-08-25', 4340.31671667], ['2017-08-24', 4174.95], ['2017-08-23', 4082.18098333], ['2017-08-22', 4043.722], ['2017-08-21', 4157.95803333], ['2017-08-20', 4222.66221429], ['2017-08-19', 4130.44006667], ['2017-08-18', 4328.72571667], ['2017-08-17', 4360.87687143], ['2017-08-16', 4217.02832857], ['2017-08-15', 4282.992], ['2017-08-14', 4125.54802], ['2017-08-13', 3852.80291429], ['2017-08-12', 3632.50666667], ['2017-08-11', 3424.4042], ['2017-08-10', 3357.32631667], ['2017-08-09', 3457.37433333], ['2017-08-08', 3407.22683333], ['2017-08-07', 3252.56253333], ['2017-08-06', 3218.11501667], ['2017-08-05', 2873.85108333], ['2017-08-04', 2794.11771667], ['2017-08-03', 2693.63398333], ['2017-08-02', 2710.41306667], ['2017-08-01', 2866.43166667], ['2017-07-31', 2745.95541667], ['2017-07-30', 2722.51278571], ['2017-07-29', 2781.63658333], ['2017-07-28', 2647.625], ['2017-07-27', 2495.02858571], ['2017-07-26', 2560.99791667], ['2017-07-25', 2751.82102857], ['2017-07-24', 2725.54971667], ['2017-07-23', 2807.60985714], ['2017-07-22', 2682.1953625], ['2017-07-21', 2898.18841667], ['2017-07-20', 2264.7657], ['2017-07-19', 2320.12225], ['2017-07-18', 2176.6234875], ['2017-07-17', 1931.2143], ['2017-07-16', 2058.9956], ['2017-07-15', 2190.94783333], ['2017-07-14', 2354.78341667], ['2017-07-13', 2385.74857143], ['2017-07-12', 2369.86212857], ['2017-07-11', 2366.17014286], ['2017-07-10', 2536.2389375], ['2017-07-09', 2562.1306625], ['2017-07-08', 2491.20121429], ['2017-07-07', 2609.96775], ['2017-07-06', 2619.187503], ['2017-07-05', 2599.7298375], ['2017-07-04', 2561.22542857], ['2017-07-03', 2561.22542857], ['2017-07-02', 2501.19134286], ['2017-07-01', 2477.641375], ['2017-06-30', 2544.414475], ['2017-06-29', 2585.34918571], ['2017-06-28', 2517.90311429], ['2017-06-27', 2436.45105714], ['2017-06-26', 2512.36628571], ['2017-06-25', 2589.1648875], ['2017-06-24', 2710.41228571], ['2017-06-23', 2727.2880125], ['2017-06-22', 2671.04325], ['2017-06-21', 2754.97825], ['2017-06-20', 2617.2102625], ['2017-06-19', 2507.38925214], ['2017-06-18', 2665.927], ['2017-06-17', 2464.95981429], ['2017-06-16', 2442.48025], ['2017-06-15', 2447.0415625], ['2017-06-14', 2748.18508571], ['2017-06-13', 2657.6750625], ['2017-06-12', 2961.8296125], ['2017-06-11', 2845.37285714], ['2017-06-10', 2827.4913], ['2017-06-09', 2792.9991875], ['2017-06-08', 2664.9208625], ['2017-06-07', 2883.31369664], ['2017-06-06', 2698.3138125], ['2017-06-05', 2516.17314286], ['2017-06-04', 2525.76515847], ['2017-06-03', 2446.14241429], ['2017-06-02', 2399.24267143], ['2017-06-01', 2285.93391429], ['2017-05-31', 2239.20534286], ['2017-05-30', 2275.9307], ['2017-05-29', 2192.9808], ['2017-05-28', 2014.0529625], 
['2017-05-27', 2211.97685714], ['2017-05-26', 2387.20628571], ['2017-05-25', 2379.19383333], ['2017-05-24', 2287.7102875], ['2017-05-23', 2090.6623125], ['2017-05-22', 2046.5344625], ['2017-05-21', 2052.9097875], ['2017-05-20', 1961.5204875], ['2017-05-19', 1899.0828875], ['2017-05-18', 1807.4850625], ['2017-05-17', 1739.031975], ['2017-05-16', 1723.1269375], ['2017-05-15', 1776.3165], ['2017-05-14', 1771.9200125], ['2017-05-13', 1720.4785], ['2017-05-12', 1820.9905625], ['2017-05-11', 1762.88625], ['2017-05-10', 1721.28497143], ['2017-05-09', 1640.619225], ['2017-05-08', 1535.86842857], ['2017-05-07', 1560.4102], ['2017-05-06', 1533.33507143], ['2017-05-05', 1508.292125], ['2017-05-04', 1507.57685714], ['2017-05-03', 1452.0762875], ['2017-05-02', 1417.1728125], ['2017-05-01', 1353.0045], ['2017-04-30', 1334.9790375], ['2017-04-29', 1331.29442857], ['2017-04-28', 1345.3539125], ['2017-04-27', 1309.109875], ['2017-04-26', 1279.4146875], ['2017-04-25', 1262.902775], ['2017-04-24', 1257.9881125], ['2017-04-23', 1261.311225], ['2017-04-22', 1258.3614125], ['2017-04-21', 1241.686325], ['2017-04-20', 1217.9300875], ['2017-04-19', 1216.18674286], ['2017-04-18', 1205.634875], ['2017-04-17', 1186.9274125], ['2017-04-16', 1184.88067143], ['2017-04-15', 1185.26005714], ['2017-04-14', 1180.0237125], ['2017-04-13', 1218.92205], ['2017-04-12', 1226.6170375], ['2017-04-11', 1207.744875], ['2017-04-10', 1208.8005], ['2017-04-09', 1181.1498375], ['2017-04-08', 1190.45425], ['2017-04-07', 1196.3079375], ['2017-04-06', 1133.07931429], ['2017-04-05', 1141.6003625], ['2017-04-04', 1141.813], ['2017-04-03', 1099.169125], ['2017-04-02', 1086.92957143], ['2017-04-01', 1079.54931429], ['2017-03-31', 1037.90455], ['2017-03-30', 1040.5755], ['2017-03-29', 1046.127625], ['2017-03-28', 1037.22925], ['2017-03-27', 956.7863125], ['2017-03-26', 959.340085714], ['2017-03-25', 941.919714286], ['2017-03-24', 1038.789], ['2017-03-23', 1028.7268625], ['2017-03-22', 1118.63004286], ['2017-03-21', 1049.0844875], ['2017-03-20', 1029.8008125], ['2017-03-19', 952.2323625], ['2017-03-18', 1091.1718875], ['2017-03-17', 1180.94565714], ['2017-03-16', 1257.399625], ['2017-03-15', 1245.37078571], ['2017-03-14', 1239.816225], ['2017-03-13', 1227.494625], ['2017-03-12', 1179.159875], ['2017-03-11', 1098.61712857], ['2017-03-10', 1192.46914286], ['2017-03-09', 1157.3933], ['2017-03-08', 1238.447], ['2017-03-07', 1275.197375], ['2017-03-06', 1270.9333], ['2017-03-05', 1267.0272], ['2017-03-04', 1285.14], ['2017-03-03', 1259.41081667], ['2017-03-02', 1222.4994], ['2017-03-01', 1187.56528571], ['2017-02-28', 1190.75195], ['2017-02-27', 1175.04975], ['2017-02-26', 1150.60571429], ['2017-02-25', 1174.86625], ['2017-02-24', 1172.01715], ['2017-02-23', 1123.2231875], ['2017-02-22', 1123.78842857], ['2017-02-21', 1084.7550125], ['2017-02-20', 1052.77928571], ['2017-02-19', 1056.6371375], ['2017-02-18', 1055.53685], ['2017-02-17', 1035.208125], ['2017-02-16', 1012.3259875], ['2017-02-15', 1011.78025], ['2017-02-14', 999.877375], ['2017-02-13', 1000.604625], ['2017-02-12', 1008.8466625], ['2017-02-11', 999.1035], ['2017-02-10', 976.103], ['2017-02-09', 1052.3766125], ['2017-02-08', 1050.11], ['2017-02-07', 1024.01375], ['2017-02-06', 1014.837725], ['2017-02-05', 1030.9994125], ['2017-02-04', 1013.027], ['2017-02-03', 1007.6137125], ['2017-02-02', 979.703875], ['2017-02-01', 964.706075], ['2017-01-31', 921.179325], ['2017-01-30', 915.933], ['2017-01-29', 920.31225], ['2017-01-28', 919.27975], ['2017-01-27', 915.95625], ['2017-01-26', 893.045625], 
['2017-01-25', 890.320225], ['2017-01-24', 922.0736125], ['2017-01-23', 918.603625], ['2017-01-22', 920.4479], ['2017-01-21', 893.6210875], ['2017-01-20', 895.798875], ['2017-01-19', 874.99], ['2017-01-18', 903.84], ['2017-01-17', 830.5], ['2017-01-16', 822.2], ['2017-01-15', 817.91], ['2017-01-14', 826.29], ['2017-01-13', 803.37], ['2017-01-12', 785.22], ['2017-01-11', 906.05], ['2017-01-10', 894.18], ['2017-01-09', 908.14], ['2017-01-08', 896.83], ['2017-01-07', 883.09], ['2017-01-06', 994.67], ['2017-01-05', 1126.76], ['2017-01-04', 1023.14], ['2017-01-03', 1015.97], ['2017-01-02', 997.72], ['2017-01-01', 959.87], ['2016-12-31', 952.15], ['2016-12-30', 963.38], ['2016-12-29', 967.48], ['2016-12-28', 930.37], ['2016-12-27', 897.33], ['2016-12-26', 886.9], ['2016-12-25', 891.61], ['2016-12-24', 901.31], ['2016-12-23', 860.59], ['2016-12-22', 824.21], ['2016-12-21', 793.09], ['2016-12-20', 789.52], ['2016-12-19', 788.4], ['2016-12-18', 788.7], ['2016-12-17', 781.56], ['2016-12-16', 776.75], ['2016-12-15', 774.89], ['2016-12-14', 778.49], ['2016-12-13', 777.0], ['2016-12-12', 777.0], ['2016-12-11', 773.4], ['2016-12-10', 770.02], ['2016-12-09', 769.72], ['2016-12-08', 766.11], ['2016-12-07', 756.62], ['2016-12-06', 754.63], ['2016-12-05', 764.81], ['2016-12-04', 764.33], ['2016-12-03', 772.43], ['2016-12-02', 752.24], ['2016-12-01', 742.69], ['2016-11-30', 732.71], ['2016-11-29', 733.05], ['2016-11-28', 727.96], ['2016-11-27', 733.67], ['2016-11-26', 739.78], ['2016-11-25', 737.45], ['2016-11-24', 741.63], ['2016-11-23', 748.74], ['2016-11-22', 738.53], ['2016-11-21', 729.06], ['2016-11-20', 750.03], ['2016-11-19', 747.52], ['2016-11-18', 736.96], ['2016-11-17', 736.91], ['2016-11-16', 710.91], ['2016-11-15', 706.46], ['2016-11-14', 701.9], ['2016-11-13', 703.71], ['2016-11-12', 715.45], ['2016-11-11', 713.69], ['2016-11-10', 720.93], ['2016-11-09', 708.97], ['2016-11-08', 703.81], ['2016-11-07', 712.0], ['2016-11-06', 704.79], ['2016-11-05', 703.69], ['2016-11-04', 686.17], ['2016-11-03', 733.33], ['2016-11-02', 728.2], ['2016-11-01', 702.0], ['2016-10-31', 698.0], ['2016-10-30', 714.89], ['2016-10-29', 687.68], ['2016-10-28', 682.22], ['2016-10-27', 672.22], ['2016-10-26', 655.31], ['2016-10-25', 651.39], ['2016-10-24', 653.0], ['2016-10-23', 655.48], ['2016-10-22', 631.92], ['2016-10-21', 630.22], ['2016-10-20', 629.25], ['2016-10-19', 636.29], ['2016-10-18', 638.18], ['2016-10-17', 641.42], ['2016-10-16', 637.94], ['2016-10-15', 639.56], ['2016-10-14', 635.96], ['2016-10-13', 635.01], ['2016-10-12', 639.3], ['2016-10-11', 617.54], ['2016-10-10', 615.65], ['2016-10-09', 618.04], ['2016-10-08', 617.21], ['2016-10-07', 612.08], ['2016-10-06', 612.35], ['2016-10-05', 609.62], ['2016-10-04', 611.85], ['2016-10-03', 610.51], ['2016-10-02', 614.82], ['2016-10-01', 609.39], ['2016-09-30', 606.36], ['2016-09-29', 605.67], ['2016-09-28', 605.96], ['2016-09-27', 608.14], ['2016-09-26', 601.74], ['2016-09-25', 603.88], ['2016-09-24', 604.22], ['2016-09-23', 597.42], ['2016-09-22', 598.88], ['2016-09-21', 609.74], ['2016-09-20', 610.19], ['2016-09-19', 611.58], ['2016-09-18', 608.0], ['2016-09-17', 609.11], ['2016-09-16', 610.38], ['2016-09-15', 612.08], ['2016-09-14', 610.92], ['2016-09-13', 609.67], ['2016-09-12', 608.15], ['2016-09-11', 625.76], ['2016-09-10', 625.07], ['2016-09-09', 627.77], ['2016-09-08', 614.46], ['2016-09-07', 611.09], ['2016-09-06', 608.1], ['2016-09-05', 610.59], ['2016-09-04', 600.88], ['2016-09-03', 576.21], ['2016-09-02', 572.81], ['2016-09-01', 574.82], ['2016-08-31', 
578.61], ['2016-08-30', 575.22], ['2016-08-29', 576.53], ['2016-08-28', 571.54], ['2016-08-27', 581.07], ['2016-08-26', 579.86], ['2016-08-25', 582.84], ['2016-08-24', 586.18], ['2016-08-23', 588.97], ['2016-08-22', 582.82], ['2016-08-21', 583.2], ['2016-08-20', 576.65], ['2016-08-19', 576.03], ['2016-08-18', 575.51], ['2016-08-17', 581.11], ['2016-08-16', 570.14], ['2016-08-15', 570.97], ['2016-08-14', 586.43], ['2016-08-13', 587.75], ['2016-08-12', 589.24], ['2016-08-11', 592.18], ['2016-08-10', 588.77], ['2016-08-09', 592.69], ['2016-08-08', 594.27], ['2016-08-07', 588.23], ['2016-08-06', 576.55], ['2016-08-05', 577.31], ['2016-08-04', 565.05], ['2016-08-03', 515.06], ['2016-08-02', 606.32], ['2016-08-01', 628.01], ['2016-07-31', 654.73], ['2016-07-30', 656.88], ['2016-07-29', 655.32], ['2016-07-28', 654.98], ['2016-07-27', 655.06], ['2016-07-26', 654.69], ['2016-07-25', 661.41], ['2016-07-24', 655.84], ['2016-07-23', 651.58], ['2016-07-22', 664.88], ['2016-07-21', 665.64], ['2016-07-20', 672.84], ['2016-07-19', 673.2], ['2016-07-18', 675.45], ['2016-07-17', 662.15], ['2016-07-16', 664.57], ['2016-07-15', 658.83], ['2016-07-14', 657.45], ['2016-07-13', 672.76], ['2016-07-12', 647.98], ['2016-07-11', 648.09], ['2016-07-10', 651.86], ['2016-07-09', 659.55], ['2016-07-08', 636.35], ['2016-07-07', 673.78], ['2016-07-06', 666.48], ['2016-07-05', 676.32], ['2016-07-04', 658.41], ['2016-07-03', 701.71], ['2016-07-02', 675.18], ['2016-07-01', 671.4], ['2016-06-30', 637.95], ['2016-06-29', 646.37], ['2016-06-28', 646.7], ['2016-06-27', 627.39], ['2016-06-26', 664.9], ['2016-06-25', 660.6], ['2016-06-24', 624.54], ['2016-06-23', 587.45], ['2016-06-22', 665.95], ['2016-06-21', 729.19], ['2016-06-20', 760.81], ['2016-06-19', 754.36], ['2016-06-18', 743.91], ['2016-06-17', 762.43], ['2016-06-16', 692.28], ['2016-06-15', 685.24], ['2016-06-14', 702.0665], ['2016-06-13', 664.8465], ['2016-06-12', 594.4399875], ['2016-06-11', 577.549875], ['2016-06-10', 575.2941375], ['2016-06-09', 578.68], ['2016-06-08', 577.54], ['2016-06-07', 584.5], ['2016-06-06', 574.02], ['2016-06-05', 573.74], ['2016-06-04', 568.0], ['2016-06-03', 539.99], ['2016-06-02', 539.47], ['2016-06-01', 531.15], ['2016-05-31', 525.15], ['2016-05-30', 512.16], ['2016-05-29', 522.43], ['2016-05-28', 470.29], ['2016-05-27', 452.49], ['2016-05-26', 447.94], ['2016-05-25', 445.34], ['2016-05-24', 442.17], ['2016-05-23', 438.0], ['2016-05-22', 443.89], ['2016-05-21', 441.25], ['2016-05-20', 441.87], ['2016-05-19', 452.9], ['2016-05-18', 452.89], ['2016-05-17', 454.88], ['2016-05-16', 457.08], ['2016-05-15', 455.65], ['2016-05-14', 456.82], ['2016-05-13', 454.7], ['2016-05-12', 452.51], ['2016-05-11', 450.0], ['2016-05-10', 461.63], ['2016-05-09', 457.82], ['2016-05-08', 459.04], ['2016-05-07', 460.91], ['2016-05-06', 448.51], ['2016-05-05', 447.37], ['2016-05-04', 450.96], ['2016-05-03', 443.9], ['2016-05-02', 451.9], ['2016-05-01', 447.17], ['2016-04-30', 456.16], ['2016-04-29', 450.0], ['2016-04-28', 447.31], ['2016-04-27', 467.98], ['2016-04-26', 463.86], ['2016-04-25', 459.4], ['2016-04-24', 451.29], ['2016-04-23', 447.25], ['2016-04-22', 451.0], ['2016-04-21', 442.34], ['2016-04-20', 435.91], ['2016-04-19', 427.28], ['2016-04-18', 425.71], ['2016-04-17', 429.43], ['2016-04-16', 429.09], ['2016-04-15', 423.76], ['2016-04-14', 424.61], ['2016-04-13', 425.89], ['2016-04-12', 420.69], ['2016-04-11', 421.63], ['2016-04-10', 417.73], ['2016-04-09', 422.2], ['2016-04-08', 420.33], ['2016-04-07', 421.09], ['2016-04-06', 421.77], ['2016-04-05', 
418.99], ['2016-04-04', 418.07], ['2016-04-03', 418.07], ['2016-04-02', 416.25], ['2016-04-01', 416.94], ['2016-03-31', 412.35], ['2016-03-30', 414.0], ['2016-03-29', 422.96], ['2016-03-28', 426.8], ['2016-03-27', 415.71], ['2016-03-26', 416.44], ['2016-03-25', 416.47], ['2016-03-24', 417.8], ['2016-03-23', 415.35], ['2016-03-22', 410.49], ['2016-03-21', 410.77], ['2016-03-20', 405.59], ['2016-03-19', 408.87], ['2016-03-18', 418.64], ['2016-03-17', 415.99], ['2016-03-16', 414.78], ['2016-03-15', 414.78], ['2016-03-14', 413.89], ['2016-03-13', 412.44], ['2016-03-12', 421.22], ['2016-03-11', 416.0], ['2016-03-10', 413.1], ['2016-03-09', 412.39], ['2016-03-08', 411.94], ['2016-03-07', 408.86], ['2016-03-06', 401.0], ['2016-03-05', 416.48], ['2016-03-04', 421.44], ['2016-03-03', 429.99], ['2016-03-02', 431.87], ['2016-03-01', 436.18], ['2016-02-29', 433.47], ['2016-02-28', 432.0], ['2016-02-27', 424.34], ['2016-02-26', 421.4], ['2016-02-25', 423.1], ['2016-02-24', 417.0], ['2016-02-23', 438.5], ['2016-02-22', 434.67], ['2016-02-21', 439.48], ['2016-02-20', 418.02], ['2016-02-19', 418.97], ['2016-02-18', 418.04], ['2016-02-17', 406.69], ['2016-02-16', 402.64], ['2016-02-15', 402.38], ['2016-02-14', 388.6], ['2016-02-13', 381.46], ['2016-02-12', 376.75], ['2016-02-11', 378.98], ['2016-02-10', 372.61], ['2016-02-09', 375.8], ['2016-02-08', 373.74], ['2016-02-07', 373.04], ['2016-02-06', 386.49], ['2016-02-05', 385.06], ['2016-02-04', 368.38], ['2016-02-03', 373.48], ['2016-02-02', 372.0], ['2016-02-01', 376.86], ['2016-01-31', 378.24], ['2016-01-30', 377.26], ['2016-01-29', 382.44], ['2016-01-28', 394.45], ['2016-01-27', 394.12], ['2016-01-26', 387.09], ['2016-01-25', 404.75], ['2016-01-24', 388.5], ['2016-01-23', 385.7], ['2016-01-22', 408.0], ['2016-01-21', 408.33], ['2016-01-20', 378.66], ['2016-01-19', 385.28], ['2016-01-18', 385.45], ['2016-01-17', 370.4], ['2016-01-16', 391.62], ['2016-01-15', 431.9], ['2016-01-14', 429.57], ['2016-01-13', 446.66], ['2016-01-12', 447.11], ['2016-01-11', 446.24], ['2016-01-10', 450.15], ['2016-01-09', 447.04], ['2016-01-08', 453.71], ['2016-01-07', 430.75], ['2016-01-06', 431.9], ['2016-01-05', 433.0], ['2016-01-04', 428.13], ['2016-01-03', 433.94], ['2016-01-02', 432.33], ['2016-01-01', 429.34]]\ncollapse None\norder None\ndatabase_id 893\n"
]
],
[
[
"## Calculate what the highest and lowest opening prices were for the stock in this period.",
"_____no_output_____"
]
],
[
[
"p = dict['dataset']['data']\nz= [x[1] for x in p]\nres=[]\nfor val in z:\n if val!= None:\n res.append(val)\nprint(\"The maximum opening value in 2016 & 2020 was \" + str(max(res)))\nprint(\"The minimum opening value in 2016 & 2020 was \" + str(min(res)))",
"The maximum opening value in 2016 & 2020 was 19498.6833333\nThe minimum opening value in 2016 & 2020 was 368.38\n"
],
[
"#load python packages\nimport os\nimport pandas as pd\nimport datetime\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport itertools\nfrom statsmodels.tsa.stattools import adfuller\nfrom statsmodels.graphics.tsaplots import plot_acf, plot_pacf\nfrom statsmodels.tsa.arima_model import ARMA\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import MinMaxScaler\nfrom datetime import datetime, timedelta \nfrom tqdm import tqdm_notebook as tqdm\nplt.style.use('bmh')",
"_____no_output_____"
],
[
"url = url = 'https://www.quandl.com/api/v3/datasets/BCHAIN/MKPRU.csv?api_key=Lq43ztbiWJ73CJUDPiye&start_date=2016-01-01&end_date=2020-5-29&order=asc'",
"_____no_output_____"
],
[
"df= pd.read_csv( url ,index_col = None)\ndf.head()",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"#Cleaning data\ndf.columns= ['DATE' ,'PRICE']\ndf.head()",
"_____no_output_____"
],
[
"#Covert into Datatime\ndf['DATE'] = pd.to_datetime(df['DATE'])",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.set_index('DATE',inplace =True)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
]
],
[
[
"## Visualize the Data",
"_____no_output_____"
]
],
[
[
"import statsmodels.api as sm \nfrom statsmodels.tsa.stattools import acf \nfrom statsmodels.tsa.stattools import pacf\nfrom statsmodels.tsa.seasonal import seasonal_decompose",
"_____no_output_____"
],
[
"df.plot(figsize=(17,8), title='Closing Prices')\nplt.xlabel('year')\nplt.ylabel(\"Price in USD\")\nplt.show()",
"_____no_output_____"
],
[
"decomposition = seasonal_decompose(df.PRICE, model='multiplicative',freq = 120) \nfig = plt.figure() \nfig = decomposition.plot() \nfig.set_size_inches(15, 8)",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: FutureWarning: the 'freq'' keyword is deprecated, use 'period' instead\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"print(decomposition.trend)\nprint(decomposition.seasonal)\nprint(decomposition.resid) ",
"DATE\n2016-01-01 NaN\n2016-01-02 NaN\n2016-01-03 NaN\n2016-01-04 NaN\n2016-01-05 NaN\n ..\n2020-05-25 NaN\n2020-05-26 NaN\n2020-05-27 NaN\n2020-05-28 NaN\n2020-05-29 NaN\nName: trend, Length: 1611, dtype: float64\nDATE\n2016-01-01 1.000028\n2016-01-02 1.005914\n2016-01-03 1.001310\n2016-01-04 1.004708\n2016-01-05 0.999895\n ... \n2020-05-25 0.999572\n2020-05-26 0.990073\n2020-05-27 1.019521\n2020-05-28 1.007861\n2020-05-29 1.021951\nName: seasonal, Length: 1611, dtype: float64\nDATE\n2016-01-01 NaN\n2016-01-02 NaN\n2016-01-03 NaN\n2016-01-04 NaN\n2016-01-05 NaN\n ..\n2020-05-25 NaN\n2020-05-26 NaN\n2020-05-27 NaN\n2020-05-28 NaN\n2020-05-29 NaN\nName: resid, Length: 1611, dtype: float64\n"
],
[
"### Testing For Stationarity\nfrom statsmodels.tsa.stattools import adfuller",
"_____no_output_____"
],
[
"test_result=adfuller(df['PRICE'])",
"_____no_output_____"
],
[
"#Ho: It is non stationary\n#H1: It is stationary\n\ndef adfuller_test(PRICE):\n result=adfuller(PRICE)\n labels = ['ADF Test Statistic','p-value','#Lags Used','Number of Observations Used']\n for value,label in zip(result,labels):\n print(label+' : '+str(value) )\n if result[1] <= 0.05:\n print(\"strong evidence against the null hypothesis(Ho), reject the null hypothesis. Data has no unit root and is stationary\")\n else:\n print(\"weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary \")",
"_____no_output_____"
],
[
"adfuller_test(df['PRICE'])",
"ADF Test Statistic : -1.916871538611568\np-value : 0.3241663041553058\n#Lags Used : 24\nNumber of Observations Used : 1586\nweak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary \n"
]
],
[
[
"## Differencing",
"_____no_output_____"
]
],
[
[
"df['Price Difference'] = df['PRICE'] - df['PRICE'].shift(1)",
"_____no_output_____"
],
[
" df['PRICE'].shift(1)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"## Again test dickey fuller test\nadfuller_test(df['Price Difference'].dropna())",
"ADF Test Statistic : -7.831947221576719\np-value : 6.258939460787339e-12\n#Lags Used : 23\nNumber of Observations Used : 1586\nstrong evidence against the null hypothesis(Ho), reject the null hypothesis. Data has no unit root and is stationary\n"
],
[
"df['Price Difference'].plot(figsize=(16,4), title=\"Daily Changes in Closing Price\")\nplt.ylabel(\"Change in USD\")\nplt.show()",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(df['Price Difference'].dropna(), ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(df['Price Difference'].dropna(), ax=ax2)",
"_____no_output_____"
],
[
"from statsmodels.tsa.arima_model import ARIMA\nmodel = ARIMA(df['PRICE'],order=(5,1,5))\nmodel_fit=model.fit(disp = 0)\n\n",
"/usr/local/lib/python3.6/dist-packages/statsmodels/tsa/base/tsa_model.py:162: ValueWarning: No frequency information was provided, so inferred frequency D will be used.\n % freq, ValueWarning)\n/usr/local/lib/python3.6/dist-packages/statsmodels/tsa/base/tsa_model.py:162: ValueWarning: No frequency information was provided, so inferred frequency D will be used.\n % freq, ValueWarning)\n"
],
[
"print(model_fit.summary())",
" ARIMA Model Results \n==============================================================================\nDep. Variable: D.PRICE No. Observations: 1610\nModel: ARIMA(5, 1, 5) Log Likelihood -11527.972\nMethod: css-mle S.D. of innovations 311.417\nDate: Sat, 13 Jun 2020 AIC 23079.945\nTime: 22:39:51 BIC 23144.553\nSample: 01-02-2016 HQIC 23103.928\n - 05-29-2020 \n=================================================================================\n coef std err z P>|z| [0.025 0.975]\n---------------------------------------------------------------------------------\nconst 5.4455 8.685 0.627 0.531 -11.577 22.468\nar.L1.D.PRICE -0.2434 0.154 -1.582 0.114 -0.545 0.058\nar.L2.D.PRICE -0.2300 0.169 -1.362 0.173 -0.561 0.101\nar.L3.D.PRICE -0.0828 0.182 -0.455 0.649 -0.440 0.274\nar.L4.D.PRICE -0.0368 0.169 -0.218 0.828 -0.368 0.294\nar.L5.D.PRICE 0.7458 0.144 5.174 0.000 0.463 1.028\nma.L1.D.PRICE 0.2525 0.163 1.548 0.122 -0.067 0.572\nma.L2.D.PRICE 0.2518 0.178 1.413 0.158 -0.098 0.601\nma.L3.D.PRICE 0.0681 0.193 0.352 0.725 -0.311 0.447\nma.L4.D.PRICE 0.0222 0.177 0.125 0.900 -0.325 0.370\nma.L5.D.PRICE -0.6460 0.145 -4.441 0.000 -0.931 -0.361\n Roots \n=============================================================================\n Real Imaginary Modulus Frequency\n-----------------------------------------------------------------------------\nAR.1 -0.8305 -0.6100j 1.0305 -0.3992\nAR.2 -0.8305 +0.6100j 1.0305 0.3992\nAR.3 0.2562 -0.9942j 1.0267 -0.2099\nAR.4 0.2562 +0.9942j 1.0267 0.2099\nAR.5 1.1979 -0.0000j 1.1979 -0.0000\nMA.1 -0.8518 -0.6341j 1.0619 -0.3981\nMA.2 -0.8518 +0.6341j 1.0619 0.3981\nMA.3 0.2498 -1.0229j 1.0530 -0.2119\nMA.4 0.2498 +1.0229j 1.0530 0.2119\nMA.5 1.2382 -0.0000j 1.2382 -0.0000\n-----------------------------------------------------------------------------\n"
],
[
"\n#Let’s plot the residuals to ensure there are no patterns (that is, look for constant mean and variance).\n# Plot residual errors\nresiduals = pd.DataFrame(model_fit.resid)\nfig, ax = plt.subplots(1,2)\nresiduals.plot(title=\"Residuals\", ax=ax[0])\nresiduals.plot(kind='kde', title='Density', ax=ax[1])\nplt.show()",
"_____no_output_____"
],
[
"#Let’s plot the actuals against the fitted values using plot_predict().\n# Actual vs Fitted\nmodel_fit.plot_predict(dynamic=False)\nplt.show()",
"_____no_output_____"
],
[
"#How to do find the optimal ARIMA model manually using Out-of-Time Cross validation\n#In Out-of-Time cross-validation, you take few steps back in time and forecast into the future to as many steps you took back. Then you compare the forecast against the actuals.\n#To do out-of-time cross-validation, you need to create the training and testing dataset by splitting the time series into 2 contiguous parts in approximately 75:25 ratio or a reasonable proportion based on time frequency of series.\nfrom statsmodels.tsa.stattools import acf\n\n# Create Training and Test\ntrain = df.PRICE[:1285]\ntest = df.PRICE[1285:]\n#train\ntest",
"_____no_output_____"
]
],
[
[
"So how to interpret the plot diagnostics?\n\nTop left: The residual errors seem to fluctuate around a mean of zero and have a uniform variance.\n\nTop Right: The density plot suggest normal distribution with mean zero.\n\nBottom left: All the dots should fall perfectly in line with the red line. Any significant deviations would imply the distribution is skewed.\n\nBottom Right: The Correlogram, aka, ACF plot shows the residual errors are not autocorrelated. Any autocorrelation would imply that there is some pattern in the residual errors which are not explained in the model. So you will need to look for more X’s (predictors) to the model.\n\nOverall, it seems to be a good fit. Let’s forecast.",
"_____no_output_____"
]
],
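[
[
"Alongside the visual diagnostics described above, a Ljung-Box test gives a quick numeric check on whether the residual errors are autocorrelated. The cell below is a minimal sketch that assumes the `model_fit` object from the ARIMA(5,1,5) cell is still in scope; the lag choice of 10 is arbitrary.",
"_____no_output_____"
]
],
[
[
"# Added numeric check (sketch): Ljung-Box test on the ARIMA residuals.\n# Assumes the model_fit object from the ARIMA(5,1,5) cell above is still in scope;\n# the lag choice of 10 is an illustrative assumption.\nfrom statsmodels.stats.diagnostic import acorr_ljungbox\n\nlb_test = acorr_ljungbox(model_fit.resid, lags=[10])\nprint(lb_test)  # large p-values suggest no remaining autocorrelation in the residuals",
"_____no_output_____"
]
],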
[
[
"#build the ARIMA model on training dataset, forecast and plot it.\n# Build Model\n# model = ARIMA(train, order=(3,2,1)) \nmodel = ARIMA(train, order=(1,0, 1)) \nfitted = model.fit(disp=-1) \n\n# Forecast\nfc, se, conf = fitted.forecast(326, alpha=0.05) # 95% conf\n\n# Make as pandas series\nfc_series = pd.Series(fc, index=test.index)\nlower_series = pd.Series(conf[:, 0], index=test.index)\nupper_series = pd.Series(conf[:, 1], index=test.index)\n\n# Plot\nplt.figure(figsize=(12,5), dpi=100)\nplt.plot(train, label='training')\nplt.plot(test, label='actual')\nplt.plot(fc_series, label='forecast')\nplt.fill_between(lower_series.index, lower_series, upper_series, \n color='k', alpha=.15)\nplt.title('Forecast vs Actuals')\nplt.legend(loc='upper left', fontsize=8)\nplt.show()",
"/usr/local/lib/python3.6/dist-packages/statsmodels/tsa/base/tsa_model.py:162: ValueWarning: No frequency information was provided, so inferred frequency D will be used.\n % freq, ValueWarning)\n"
],
[
"# Accuracy metrics\ndef forecast_accuracy(forecast, actual):\n mape = np.mean(np.abs(forecast - actual)/np.abs(actual)) # MAPE\n me = np.mean(forecast - actual) # ME\n mae = np.mean(np.abs(forecast - actual)) # MAE\n mpe = np.mean((forecast - actual)/actual) # MPE\n rmse = np.mean((forecast - actual)**2)**.5 # RMSE\n corr = np.corrcoef(forecast, actual)[0,1] # corr\n mins = np.amin(np.hstack([forecast[:,None], \n actual[:,None]]), axis=1)\n maxs = np.amax(np.hstack([forecast[:,None], \n actual[:,None]]), axis=1)\n minmax = 1 - np.mean(mins/maxs) # minmax\n acf1 = acf(fc-test)[1] # ACF1\n return({'mape':mape, 'me':me, 'mae': mae, \n 'mpe': mpe, 'rmse':rmse, 'acf1':acf1, \n 'corr':corr, 'minmax':minmax})\n\nforecast_accuracy(fc, test.values)",
"/usr/local/lib/python3.6/dist-packages/statsmodels/tsa/stattools.py:572: FutureWarning: fft=True will become the default in a future version of statsmodels. To suppress this warning, explicitly set fft=False.\n FutureWarning\n"
]
],
[
[
"## Auto Arima Forecast in Python",
"_____no_output_____"
]
],
[
[
"!pip install pmdarima ",
"Requirement already satisfied: pmdarima in /usr/local/lib/python3.6/dist-packages (1.6.1)\nRequirement already satisfied: statsmodels>=0.11 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (0.11.1)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (0.15.1)\nRequirement already satisfied: Cython<0.29.18,>=0.29 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (0.29.17)\nRequirement already satisfied: scipy>=1.3.2 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (1.4.1)\nRequirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (1.18.5)\nRequirement already satisfied: urllib3 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (1.24.3)\nRequirement already satisfied: pandas>=0.19 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (1.0.4)\nRequirement already satisfied: scikit-learn>=0.22 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (0.22.2.post1)\nRequirement already satisfied: patsy>=0.5 in /usr/local/lib/python3.6/dist-packages (from statsmodels>=0.11->pmdarima) (0.5.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.19->pmdarima) (2018.9)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.19->pmdarima) (2.8.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.5->statsmodels>=0.11->pmdarima) (1.12.0)\n"
],
[
"from statsmodels.tsa.arima_model import ARIMA\nimport pmdarima as pm\n\nmodel = pm.auto_arima(df.PRICE, start_p=1, start_q=1,\n test='adf', # use adftest to find optimal 'd'\n max_p=3, max_q=3, # maximum p and q\n m=1, # frequency of series\n d=None, # let model determine 'd'\n seasonal=False, # No Seasonality\n start_P=0, \n D=0, \n trace=True,\n error_action='ignore', \n suppress_warnings=True, \n stepwise=True)\n\nprint(model.summary())",
"Performing stepwise search to minimize aic\nFit ARIMA(1,1,1)x(0,0,0,0) [intercept=True]; AIC=23139.434, BIC=23160.970, Time=0.694 seconds\nFit ARIMA(0,1,0)x(0,0,0,0) [intercept=True]; AIC=23135.434, BIC=23146.202, Time=0.061 seconds\nFit ARIMA(1,1,0)x(0,0,0,0) [intercept=True]; AIC=23137.432, BIC=23153.584, Time=0.064 seconds\nFit ARIMA(0,1,1)x(0,0,0,0) [intercept=True]; AIC=23137.436, BIC=23153.588, Time=0.143 seconds\nFit ARIMA(0,1,0)x(0,0,0,0) [intercept=False]; AIC=23133.904, BIC=23139.288, Time=0.033 seconds\nTotal fit time: 1.006 seconds\n SARIMAX Results \n==============================================================================\nDep. Variable: y No. Observations: 1611\nModel: SARIMAX(0, 1, 0) Log Likelihood -11565.952\nDate: Sat, 13 Jun 2020 AIC 23133.904\nTime: 22:42:19 BIC 23139.288\nSample: 0 HQIC 23135.902\n - 1611 \nCovariance Type: opg \n==============================================================================\n coef std err z P>|z| [0.025 0.975]\n------------------------------------------------------------------------------\nsigma2 1.016e+05 1044.811 97.276 0.000 9.96e+04 1.04e+05\n===================================================================================\nLjung-Box (Q): 191.92 Jarque-Bera (JB): 31148.15\nProb(Q): 0.00 Prob(JB): 0.00\nHeteroskedasticity (H): 63.63 Skew: -0.49\nProb(H) (two-sided): 0.00 Kurtosis: 24.53\n===================================================================================\n\nWarnings:\n[1] Covariance matrix calculated using the outer product of gradients (complex-step).\n"
]
],
[
[
"# Residual plots using stepwise_fit.",
"_____no_output_____"
],
[
"So how to interpret the plot diagnostics?\n\nTop left: The residual errors seem to fluctuate around a mean of zero and have a uniform variance.\n\nTop Right: The density plot suggest normal distribution with mean zero.\n\nBottom left: All the dots should fall perfectly in line with the red line. Any significant deviations would imply the distribution is skewed.\n\nBottom Right: The Correlogram, aka, ACF plot shows the residual errors are not autocorrelated. Any autocorrelation would imply that there is some pattern in the residual errors which are not explained in the model. So you will need to look for more X’s (predictors) to the model.\n\nOverall, it seems to be a good fit. Let’s forecast. **bold text**",
"_____no_output_____"
]
],
[
[
"model.plot_diagnostics(figsize=(7,5))\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Seasonal Difference",
"_____no_output_____"
]
],
[
[
"df['Seasonal First Difference']=df['PRICE']-df['PRICE'].shift(120)",
"_____no_output_____"
],
[
"df.head(140)",
"_____no_output_____"
],
[
"## Again test dickey fuller test\nadfuller_test(df['Seasonal First Difference'].dropna())",
"ADF Test Statistic : -2.921545644346556\np-value : 0.042899287858174595\n#Lags Used : 24\nNumber of Observations Used : 1466\nstrong evidence against the null hypothesis(Ho), reject the null hypothesis. Data has no unit root and is stationary\n"
],
[
"df['Seasonal First Difference'].plot()",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(df['Seasonal First Difference'].iloc[120:], lags=100, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(df['Seasonal First Difference'].iloc[120:], lags=40, ax=ax2)",
"_____no_output_____"
]
],
[
[
"## SARIMA",
"_____no_output_____"
]
],
[
[
"import pmdarima as pm\n\n# Seasonal - fit stepwise auto-ARIMA\nsmodel = pm.auto_arima(df['Seasonal First Difference'].dropna(), start_p=1, start_q=1,\n test='adf',\n max_p=3, max_q=3, m=12,\n start_P=0, seasonal=True,\n d=None, D=1, trace=True,\n error_action='ignore', \n suppress_warnings=True, \n stepwise=True)\n\nsmodel.summary()",
"Performing stepwise search to minimize aic\nFit ARIMA(1,0,1)x(0,1,1,12) [intercept=True]; AIC=22368.099, BIC=22394.594, Time=16.145 seconds\nFit ARIMA(0,0,0)x(0,1,0,12) [intercept=True]; AIC=26072.541, BIC=26083.139, Time=0.118 seconds\nFit ARIMA(1,0,0)x(1,1,0,12) [intercept=True]; AIC=22948.477, BIC=22969.673, Time=10.391 seconds\nFit ARIMA(0,0,1)x(0,1,1,12) [intercept=True]; AIC=24664.141, BIC=24685.337, Time=11.736 seconds\nFit ARIMA(0,0,0)x(0,1,0,12) [intercept=False]; AIC=26070.600, BIC=26075.899, Time=0.061 seconds\nFit ARIMA(1,0,1)x(0,1,0,12) [intercept=True]; AIC=23288.727, BIC=23309.923, Time=1.007 seconds\nFit ARIMA(1,0,1)x(1,1,1,12) [intercept=True]; AIC=22369.678, BIC=22401.473, Time=21.285 seconds\nNear non-invertible roots for order (1, 0, 1)(1, 1, 1, 12); setting score to inf (at least one inverse root too close to the border of the unit circle: 1.000)\nFit ARIMA(1,0,1)x(0,1,2,12) [intercept=True]; AIC=22369.617, BIC=22401.412, Time=53.304 seconds\nNear non-invertible roots for order (1, 0, 1)(0, 1, 2, 12); setting score to inf (at least one inverse root too close to the border of the unit circle: 1.000)\nFit ARIMA(1,0,1)x(1,1,0,12) [intercept=True]; AIC=22947.878, BIC=22974.374, Time=11.147 seconds\nFit ARIMA(1,0,1)x(1,1,2,12) [intercept=True]; AIC=22328.644, BIC=22365.738, Time=73.145 seconds\nNear non-invertible roots for order (1, 0, 1)(1, 1, 2, 12); setting score to inf (at least one inverse root too close to the border of the unit circle: 1.000)\nFit ARIMA(1,0,0)x(0,1,1,12) [intercept=True]; AIC=22366.229, BIC=22387.425, Time=8.595 seconds\nNear non-invertible roots for order (1, 0, 0)(0, 1, 1, 12); setting score to inf (at least one inverse root too close to the border of the unit circle: 1.000)\nFit ARIMA(2,0,1)x(0,1,1,12) [intercept=True]; AIC=22370.778, BIC=22402.572, Time=29.315 seconds\nNear non-invertible roots for order (2, 0, 1)(0, 1, 1, 12); setting score to inf (at least one inverse root too close to the border of the unit circle: 1.000)\nFit ARIMA(1,0,2)x(0,1,1,12) [intercept=True]; AIC=22367.320, BIC=22399.114, Time=17.982 seconds\nNear non-invertible roots for order (1, 0, 2)(0, 1, 1, 12); setting score to inf (at least one inverse root too close to the border of the unit circle: 1.000)\nFit ARIMA(0,0,0)x(0,1,1,12) [intercept=True]; AIC=26069.202, BIC=26085.099, Time=1.564 seconds\nFit ARIMA(0,0,2)x(0,1,1,12) [intercept=True]; AIC=24081.033, BIC=24107.528, Time=21.649 seconds\nFit ARIMA(2,0,0)x(0,1,1,12) [intercept=True]; AIC=22368.140, BIC=22394.636, Time=18.236 seconds\nNear non-invertible roots for order (2, 0, 0)(0, 1, 1, 12); setting score to inf (at least one inverse root too close to the border of the unit circle: 1.000)\nFit ARIMA(2,0,2)x(0,1,1,12) [intercept=True]; AIC=22363.923, BIC=22401.016, Time=28.784 seconds\nNear non-invertible roots for order (2, 0, 2)(0, 1, 1, 12); setting score to inf (at least one inverse root too close to the border of the unit circle: 1.000)\nTotal fit time: 324.493 seconds\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf938cbabb536fe97c9cb04e4b700197c64af37 | 133 | ipynb | Jupyter Notebook | figures/show_classification_results.ipynb | aiaudit-org/lens2logit | 56f0aa72c82c04bf0cf88ec515515142e7a067d0 | [
"MIT"
] | 5 | 2021-08-30T11:56:30.000Z | 2021-09-20T00:48:06.000Z | figures/show_classification_results.ipynb | aiaudit-org/lens2logit | 56f0aa72c82c04bf0cf88ec515515142e7a067d0 | [
"MIT"
] | 3 | 2021-08-30T11:10:13.000Z | 2021-09-10T12:59:45.000Z | figures/show_classification_results.ipynb | aiaudit-org/lens2logit | 56f0aa72c82c04bf0cf88ec515515142e7a067d0 | [
"MIT"
] | 1 | 2021-08-31T10:40:07.000Z | 2021-08-31T10:40:07.000Z | 33.25 | 75 | 0.887218 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecf940f73374016f064c57f310c350b35354b6b1 | 58,275 | ipynb | Jupyter Notebook | finance/basics/poor-persons-option-backtest-attempt-0.ipynb | pangyuteng/aigonewrong | 98a2c7a172be4664fc372d581cef5f23cf317b51 | [
"MIT"
] | 8 | 2021-01-06T22:04:39.000Z | 2022-02-22T19:38:14.000Z | finance/basics/poor-persons-option-backtest-attempt-0.ipynb | pangyuteng/aigonewrong | 98a2c7a172be4664fc372d581cef5f23cf317b51 | [
"MIT"
] | null | null | null | finance/basics/poor-persons-option-backtest-attempt-0.ipynb | pangyuteng/aigonewrong | 98a2c7a172be4664fc372d581cef5f23cf317b51 | [
"MIT"
] | 5 | 2020-11-21T20:46:19.000Z | 2021-08-08T08:47:19.000Z | 64.179515 | 28,076 | 0.699305 | [
[
[
"# https://twitter.com/aigonewrong/status/1330686480570716160",
"_____no_output_____"
]
],
[
[
"#### Poor persons option backtest\n```\nBelow is an attempt to write a naive backtest that attempts to replicate the performance of a option trading strategy propose by CBOE. The strategy from CBOE is displyed below verbatim.\n```",
"_____no_output_____"
],
[
"\n\n```\n\"\nCboe S&P 500 Iron Condor Index (CNDR)\n\nThe Cboe S&P 500 Condor IndexSM (CNDR) is inspired by the condor option strategy. The objective of a condor option spread is to mine \"out-of-the-money\" option volatility premium with limited risk. A generic condor option spread is short an out-of-the-money straddle and long further out-of-the money call and put that bound the risk of the straddle. \n\nThe CNDR index follows this strategy and sells a butterfly spread of the S&P 500® one-month options (SPX options). More precisely, it tracks the value of a hypothetical portfolio that overlays a butterfly spread of SPX options over one-month Treasury bills . The short SPX straddle is at-the-money and the long SPX call and put are 5% out-of-the-money. to guarantee solvency, the Treasury bills cover ten times the maximum loss of the short butterfly spread. \n\nThe BFLY portfolio is rebalanced monthly, usually at 11 am ET every third Friday after the options in the butterfly spread expire. A new SPX butterfly spread is then sold. \n\nThe CNDR portfolio is rebalanced monthly after the expiration of SPX options, typically 11 am ET every third Friday. New SPX options are then bought and sold.\n\"\nexcerpt from \nhttp://www.cboe.com/index/dashboard/cndr\nhttps://www.cboe.com/publish/micropdf/CBOE-SP500-Iron-Condor-CNDR-Methodology-Paper.pdf\n```\n## *** DISCLAIMER ***\n```\nUntil below backtest pnl performs like CBOE CNDR index, trust it with a grain of salt. Once pnl \"matches\", trust it with one bit.\n```\n",
"_____no_output_____"
]
],
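[
[
"A quick illustrative sketch of the payoff structure described above: the next cell computes the expiration payoff of a short at-the-money straddle combined with long call and put wings 5% out of the money. The index level, strikes, and net credit are made-up numbers used only to show the shape of the payoff and the bounded maximum loss.",
"_____no_output_____"
]
],
[
[
"# Illustrative sketch only: expiration payoff of the position described above\n# (short an at-the-money straddle, long a 5% out-of-the-money call and put).\n# All numbers below are assumptions chosen for illustration, not CBOE parameters.\nimport numpy as np\n\natm_strike = 100.0                # strike of the short straddle (at the money)\notm_call_strike = 105.0           # long call, 5% out of the money\notm_put_strike = 95.0             # long put, 5% out of the money\nnet_credit = 3.0                  # assumed premium collected for the whole spread\n\nsettle = np.linspace(85, 115, 7)  # hypothetical settlement prices at expiration\n\nshort_straddle = -(np.maximum(settle - atm_strike, 0) + np.maximum(atm_strike - settle, 0))\nlong_wings = np.maximum(settle - otm_call_strike, 0) + np.maximum(otm_put_strike - settle, 0)\npayoff = short_straddle + long_wings + net_credit\n\nfor s, p in zip(settle, payoff):\n    print(f'settlement {s:6.1f} -> payoff {p:6.1f}')\n\n# The loss is capped at the wing width minus the credit (5.0 - 3.0 = 2.0 here),\n# which is why the methodology above sizes the Treasury bill position off the maximum loss.",
"_____no_output_____"
]
],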
[
[
"import datetime\nimport numpy as np\nimport pandas as pd\nimport yfinance as yf\nfrom py_vollib.ref_python.black_scholes_merton import black_scholes_merton\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
],
[
"import datetime as dt\nfrom pandas.tseries.holiday import USFederalHolidayCalendar\n\ncal = USFederalHolidayCalendar()\nstart_date = '1980-1-1'\nfuture_date = datetime.datetime.now().date()+datetime.timedelta(days=45)\nend_date = future_date.strftime('%Y-%m-%d')\nprint(start_date,end_date)",
"1980-1-1 2021-01-22\n"
],
[
"def get_business_day(date):\n while date.isoweekday() > 5 or date in cal.holidays():\n date += dt.timedelta(days=1)\n return date\n\nfirst_bday_of_month = [get_business_day(d).date() \n for d in pd.date_range(start_date, end_date, freq='BMS')]\ndef get_business_day(date):\n while date.isoweekday() != 5 or date in cal.holidays():\n date += dt.timedelta(days=1)\n return date.date()\n\nlast_friday_of_month = [get_business_day(d) for d in pd.date_range(start_date, end_date, freq='BM')]\n\nexpiration_date_list = first_bday_of_month",
"_____no_output_____"
],
[
"first_bday_of_month[0],last_friday_of_month[0]",
"_____no_output_____"
],
[
"# download and read CNDR index from CBOE website\ncndr = pd.read_csv('static/CNDR_Data.csv')\ncndr.index=[datetime.datetime.strptime(x,'%m/%d/%Y').date() for x in cndr.time]\ncndr=cndr.drop(columns=['time','volume','open','high','low'])\ncndr=cndr.rename(columns={'close':'cndr'})",
"_____no_output_____"
],
[
"cndr.head()",
"_____no_output_____"
],
[
"# get historical daily price for SPY\nsymbol = '^VIX'\ntick = yf.Ticker(symbol)\nvix_history = tick.history(period=\"max\")\nvix_history = vix_history.drop(\n columns=['Open','High','Low','Volume','Dividends','Stock Splits']\n)\nvix_history = vix_history.rename(columns={'Close':'implied_vol'})\n\nsymbol = 'SPY'\ntick = yf.Ticker(symbol)\nticker_history = tick.history(period=\"max\")\nticker_history = pd.merge(\n ticker_history,vix_history,\n how='left',left_index=True,right_index=True)",
"_____no_output_____"
],
[
"class MyBackTester:\n def __init__(self,initial_deposit=None):\n self.initial_deposit = initial_deposit\n \n def run_backtest(self,ticker_history,days=252*10):\n '''\n ticker_history is the df from `yfinance`.\n '''\n df = pd.DataFrame()\n df['close'] = ticker_history.Close\n df['ret']= np.log(df.close).diff(1)\n df['realized_vol'] = df.ret.rolling(21).std()*np.sqrt(252)*100\n if 'implied_vol' in ticker_history.columns:\n df['implied_vol'] = ticker_history.implied_vol\n else: # use realized volatility as implied (which will be off, should maybe model with a curve)\n df['implied_vol'] = df.realized_vol\n \n df['actual_ret']= df.close.pct_change(45)\n df['ret_sd'] = df.actual_ret.rolling(45).std()\n df['ret_mean'] = df.actual_ret.rolling(45).mean()\n df = df.dropna()\n \n mydf = df.iloc[-1*days:].copy()\n \n start_date = mydf.index[0]\n if self.initial_deposit is None:\n cost, max_loss = IronCondor().compute_price(mydf.iloc[-1:],return_max_loss=True)\n self.initial_deposit = max_loss*10\n \n self.cash = self.initial_deposit\n self.history = [{'date':start_date,'cash':self.initial_deposit}]\n self.positions = {}\n self.closed_positions = []\n \n def myfunc(ser):\n rows = mydf.loc[ser.index]\n row = rows.iloc[-1:]\n # every day manager positions\n self._manage_positions(row)\n return 0\n rol = mydf.close.rolling(window=2)\n _=rol.apply(myfunc, raw=False)\n pdf = pd.DataFrame(self.history)\n pdf.index = pdf.date\n pdf=pdf.drop(columns=['date'])\n return df, pdf\n\n def _manage_positions(self,row):\n todelete = []\n m=0\n if len(self.positions)==0:\n # open position if there are no positions\n strategy = IronCondor()\n if strategy.open_confidence(row) > 0.5:\n strategy.open_position(row,self)\n self.positions[strategy.name]=strategy\n else:\n # manage existing\n for k,v in self.positions.items():\n action = v.manage_position(row,self)\n if action == 'close_position':\n v.close_position(row,self)\n todelete.append(k)\n for k in todelete:\n self.closed_positions.append(self.positions.pop(k))\n\n mytoday = {'date':row.index[-1],'cash':self.cash}\n self.history.append(mytoday)\n\nclass Strategy:\n def __init__(self):\n self.slippage = 0.01 # one way\n self.entry_row = None\n self.exit_row = None\n self.entry_price = None\n self.exit_price = None\n self.multiplier = 1\n def open_confidence(self,row):\n raise NotImplementedError()\n def compute_price(self,row):\n raise NotImplementedError()\n def manage_position(self,row):\n raise NotImplementedError()\n def open_position(self,row,portfolio):\n # positive value is credit\n slippage = (1-self.slippage)\n self.entry_price = slippage*self.compute_price(row)\n justone = False\n if justone:\n self.multiplier = 1\n else:\n # 1 percent of cash value\n m = int((0.01*portfolio.cash)/self.entry_price)\n if m > 1:\n self.multiplier = m\n else:\n self.multiplier = 1\n portfolio.cash+=1*self.multiplier*self.entry_price\n self.entry_row = row\n \n def close_position(self,row,portfolio):\n # negative value is debit.\n slippage = (1+self.slippage)\n self.exit_price = slippage*self.compute_price(row)\n portfolio.cash+=-1*self.multiplier*self.exit_price\n self.exit_row = row\n \nclass IronCondor(Strategy):\n name = 'IronCondor'\n '''\n ref https://www.cboe.com/publish/micropdf/CBOE-SP500-Iron-Condor-CNDR-Methodology-Paper.pdf\n '''\n def open_confidence(self,row):\n return 1\n def _get_dte(self,row):\n if self.entry_row is None:\n '''\n # # option 1. # assuming you always can find a 45 dte contract\n # # easy but far from reality\n # dte = 45\n '''\n # option 2. 
# locate future contract first business day of following month.\n # still not quite the same as first monday.\n today = row.index[-1].date()\n contract_dates = np.array(expiration_date_list)\n index = np.argmin(np.abs([x.days for x in contract_dates-today]))\n next_contrast_date = contract_dates[index+1]\n dte = (next_contrast_date-today).days\n else:\n dte = 45-(row.index[-1]-self.entry_row.index[-1]).days\n return 1 if dte<=1 else dte\n \n def compute_price(self,row,return_max_loss=False):\n dte = self._get_dte(row)\n time_to_expiry_years = dte/365\n \n underlying_price = row.close.values[-1]\n ret_mean = row.ret_mean.values[-1]\n ret_sd = row.ret_sd.values[-1]\n sigma = row.implied_vol.values[-1]/100\n \n # http://www.cboe.com/index/dashboard/cndr\n\n p1sd_strike = underlying_price*(1+ret_mean+0.7*ret_sd)\n m1sd_strike = underlying_price*(1+ret_mean+0.7*ret_sd)\n\n p2sd_strike = underlying_price*(1+ret_mean-2*ret_sd)\n m2sd_strike = underlying_price*(1+ret_mean-2*ret_sd)\n \n S = underlying_price\n K = p1sd_strike\n q = 0\n t = time_to_expiry_years\n r = 0\n sigma = sigma\n call_1sd_price = black_scholes_merton('c', S, K, t, r, sigma, q)\n \n S = underlying_price\n K = m1sd_strike\n q = 0\n t = time_to_expiry_years\n r = 0\n sigma = sigma\n put_1sd_price = black_scholes_merton('p', S, K, t, r, sigma, q)\n\n S = underlying_price\n K = p2sd_strike\n q = 0\n t = time_to_expiry_years\n r = 0\n sigma = sigma\n call_2sd_price = black_scholes_merton('c', S, K, t, r, sigma, q)\n \n S = underlying_price\n K = m2sd_strike\n q = 0\n t = time_to_expiry_years\n r = 0\n sigma = sigma\n put_2sd_price = black_scholes_merton('p', S, K, t, r, sigma, q)\n \n cost = call_1sd_price+put_1sd_price-call_2sd_price-put_2sd_price\n max_loss = np.max([p2sd_strike-p1sd_strike,m1sd_strike-m2sd_strike])\n if return_max_loss:\n return cost, max_loss\n else:\n return cost\n \n def manage_position(self,row,portfolio):\n dte = self._get_dte(row)\n slippage = (1+self.slippage)\n current_price = slippage*self.compute_price(row)\n myreturn = (self.entry_price-current_price)/self.entry_price\n \n #if dte < 3:\n # return 'close_position'\n #if myreturn > .5:\n # return 'close_position'\n #if myreturn < -1:\n # return 'close_position' \n \n # stop loss at 10% of account value\n slippage = (1+self.slippage)\n current_price = slippage*self.compute_price(row)\n cost = current_price*self.multiplier\n if cost/portfolio.cash > 0.1:\n return 'close_position'\n if row.index[-1] in expiration_date_list:\n return 'close_position'\n return 'maintain_position'\n",
"_____no_output_____"
],
[
"ticker_history.head()",
"_____no_output_____"
],
[
"# backtest iron-condor for past 10 years\nmyportfolio = MyBackTester()\nhdf,pdf = myportfolio.run_backtest(ticker_history,days=252*10)",
"_____no_output_____"
],
[
"hdf.head() # historical price and derived parameters",
"_____no_output_____"
],
[
"pdf.head() # backtested account holding.",
"_____no_output_____"
],
[
"mdf = pd.merge(pdf,cndr,how='left',left_index=True,right_index=True)",
"_____no_output_____"
],
[
"(mdf.cash/mdf.cash.iloc[0]).plot(label='my-cndr')\n(mdf.cndr/mdf.cndr.iloc[0]).plot(label='cboe-cndr')\nplt.legend()\nplt.grid(True)",
"_____no_output_____"
]
],
[
[
"```\nabove kinda visually matches... if this works... great, but I highly doubt if it'll stack up with front-test and other back testers. Thus the above \"poor-persons-option-backtest-attempt-0\" likely is just helpful to serve as the first excercise for understanding how to backtest option trading strategies.\n\nI will actually try implemeting the below, but using option prices derived from BSM model.\nhttps://www.cboe.com/publish/micropdf/CBOE-SP500-Iron-Condor-CNDR-Methodology-Paper.pdf\n\n\ncomplimentary to the above see @QuantConnect to actually backtest with real option data.\na good example by Alex Muci is provided below.\nhttps://www.quantconnect.com/forum/discussion/4478/delta-hedged-straddle/p1\n```",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecf94b8affbe32008316ce9ccac384f24944cebc | 364,921 | ipynb | Jupyter Notebook | samples/colab/mnist_training.ipynb | anthonycanino/iree | be167a62f8872597eac1b72e26b4c62e291bfd5c | [
"Apache-2.0"
] | 1 | 2022-02-12T17:56:47.000Z | 2022-02-12T17:56:47.000Z | samples/colab/mnist_training.ipynb | anthonycanino/iree | be167a62f8872597eac1b72e26b4c62e291bfd5c | [
"Apache-2.0"
] | null | null | null | samples/colab/mnist_training.ipynb | anthonycanino/iree | be167a62f8872597eac1b72e26b4c62e291bfd5c | [
"Apache-2.0"
] | null | null | null | 599.213465 | 288,041 | 0.941154 | [
[
[
"```\nCopyright 2020 The IREE Authors\n\nLicensed under the Apache License v2.0 with LLVM Exceptions.\nSee https://llvm.org/LICENSE.txt for license information.\nSPDX-License-Identifier: Apache-2.0 WITH LLVM-exception\n```\n",
"_____no_output_____"
],
[
"# Training and Executing an MNIST Model with IREE",
"_____no_output_____"
],
[
"## Overview\n\nThis notebook covers installing IREE and using it to train a simple neural network on the MNIST dataset.",
"_____no_output_____"
],
[
"## 1. Install and Import IREE",
"_____no_output_____"
]
],
[
[
"%%capture\n!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://github.com/google/iree/releases",
"_____no_output_____"
],
[
"# Import IREE's TensorFlow Compiler and Runtime.\nimport iree.compiler.tf\nimport iree.runtime",
"_____no_output_____"
]
],
[
[
"## 2. Import TensorFlow and Other Dependencies",
"_____no_output_____"
]
],
[
[
"from matplotlib import pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\ntf.random.set_seed(91)\nnp.random.seed(91)\n\nplt.style.use(\"seaborn-whitegrid\")\nplt.rcParams[\"font.family\"] = \"monospace\"\nplt.rcParams[\"figure.figsize\"] = [8, 4.5]\nplt.rcParams[\"figure.dpi\"] = 150\n\n# Print version information for future notebook users to reference.\nprint(\"TensorFlow version: \", tf.__version__)\nprint(\"Numpy version: \", np.__version__)",
"TensorFlow version: 2.8.0\nNumpy version: 1.21.6\n"
]
],
[
[
"## 3. Load the MNIST Dataset",
"_____no_output_____"
]
],
[
[
"# Keras datasets don't provide metadata.\nNUM_CLASSES = 10\nNUM_ROWS, NUM_COLS = 28, 28\n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\n\n# Reshape into grayscale images:\nx_train = np.reshape(x_train, (-1, NUM_ROWS, NUM_COLS, 1))\nx_test = np.reshape(x_test, (-1, NUM_ROWS, NUM_COLS, 1))\n\n# Rescale uint8 pixel values into float32 values between 0 and 1:\nx_train = x_train.astype(np.float32) / 255\nx_test = x_test.astype(np.float32) / 255\n\n# IREE doesn't currently support int8 tensors, so we cast them to int32:\ny_train = y_train.astype(np.int32)\ny_test = y_test.astype(np.int32)",
"_____no_output_____"
],
[
"print(\"Sample image from the dataset:\")\nsample_index = np.random.randint(x_train.shape[0])\nplt.figure(figsize=(5, 5))\nplt.imshow(x_train[sample_index].reshape(NUM_ROWS, NUM_COLS), cmap=\"gray\")\nplt.title(f\"Sample #{sample_index}, label: {y_train[sample_index]}\")\nplt.axis(\"off\")\nplt.tight_layout()",
"Sample image from the dataset:\n"
]
],
[
[
"## 4. Create a Simple DNN",
"_____no_output_____"
],
[
"MLIR-HLO (the MLIR dialect we use to convert TensorFlow models into assembly that IREE can compile) does not currently support training with a dynamic number of examples, so we compile the model with a fixed batch size (by specifying the batch size in the `tf.TensorSpec`s).",
"_____no_output_____"
]
],
[
[
"BATCH_SIZE = 32",
"_____no_output_____"
],
[
"class TrainableDNN(tf.Module):\n\n def __init__(self):\n super().__init__()\n\n # Create a Keras model to train.\n inputs = tf.keras.layers.Input((NUM_COLS, NUM_ROWS, 1))\n x = tf.keras.layers.Flatten()(inputs)\n x = tf.keras.layers.Dense(128)(x)\n x = tf.keras.layers.Activation(\"relu\")(x)\n x = tf.keras.layers.Dense(10)(x)\n outputs = tf.keras.layers.Softmax()(x)\n self.model = tf.keras.Model(inputs, outputs)\n\n # Create a loss function and optimizer to use during training.\n self.loss = tf.keras.losses.SparseCategoricalCrossentropy()\n self.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)\n \n @tf.function(input_signature=[\n tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]) # inputs\n ])\n def predict(self, inputs):\n return self.model(inputs, training=False)\n\n # We compile the entire training step by making it a method on the model.\n @tf.function(input_signature=[\n tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]), # inputs\n tf.TensorSpec([BATCH_SIZE], tf.int32) # labels\n ])\n def learn(self, inputs, labels):\n # Capture the gradients from forward prop...\n with tf.GradientTape() as tape:\n probs = self.model(inputs, training=True)\n loss = self.loss(labels, probs)\n\n # ...and use them to update the model's weights.\n variables = self.model.trainable_variables\n gradients = tape.gradient(loss, variables)\n self.optimizer.apply_gradients(zip(gradients, variables))\n return loss",
"_____no_output_____"
]
],
[
[
"## 5. Compile the Model with IREE",
"_____no_output_____"
],
[
"tf.keras adds a large number of methods to TrainableDNN, and most of them\ncannot be compiled with IREE. To get around this we tell IREE exactly which\nmethods we would like it to compile.",
"_____no_output_____"
]
],
[
[
"exported_names = [\"predict\", \"learn\"]",
"_____no_output_____"
]
],
[
[
"Choose one of IREE's three backends to compile to. (*Note: Using Vulkan requires installing additional drivers.*)",
"_____no_output_____"
]
],
[
[
"backend_choice = \"dylib-llvm-aot (CPU)\" #@param [ \"vmvx (CPU)\", \"dylib-llvm-aot (CPU)\", \"vulkan-spirv (GPU/SwiftShader – requires additional drivers) \" ]\nbackend_choice = backend_choice.split(' ')[0]",
"_____no_output_____"
],
[
"# Compile the TrainableDNN module\n# Note: extra flags are needed to i64 demotion, see https://github.com/google/iree/issues/8644\nvm_flatbuffer = iree.compiler.tf.compile_module(\n TrainableDNN(),\n target_backends=[backend_choice],\n exported_names=exported_names,\n extra_args=[\"--iree-mhlo-demote-i64-to-i32=false\",\n \"--iree-flow-demote-i64-to-i32\"])\ncompiled_model = iree.runtime.load_vm_flatbuffer(\n vm_flatbuffer,\n backend=backend_choice)",
"INFO:tensorflow:Assets written to: /tmp/tmp24zw84yu.sm/assets\n"
]
],
[
[
"## 6. Train the Compiled Model on MNIST",
"_____no_output_____"
],
[
"This compiled model is portable, demonstrating that IREE can be used for training on a mobile device. On mobile, IREE has a ~1000 fold binary size advantage over the current TensorFlow solution (which is to use the now-deprecated TF Mobile, as TFLite does not support training at this time).",
"_____no_output_____"
]
],
[
[
"#@title Benchmark inference and training\nprint(\"Inference latency:\\n \", end=\"\")\n%timeit -n 100 compiled_model.predict(x_train[:BATCH_SIZE])\nprint(\"Training latancy:\\n \", end=\"\")\n%timeit -n 100 compiled_model.learn(x_train[:BATCH_SIZE], y_train[:BATCH_SIZE])",
"Inference latency:\n 100 loops, best of 5: 1.04 ms per loop\nTraining latancy:\n 100 loops, best of 5: 3.22 ms per loop\n"
],
[
"# Run the core training loop.\nlosses = []\n\nstep = 0\nmax_steps = x_train.shape[0] // BATCH_SIZE\n\nfor batch_start in range(0, x_train.shape[0], BATCH_SIZE):\n if batch_start + BATCH_SIZE > x_train.shape[0]:\n continue\n\n inputs = x_train[batch_start:batch_start + BATCH_SIZE]\n labels = y_train[batch_start:batch_start + BATCH_SIZE]\n\n loss = compiled_model.learn(inputs, labels).to_host()\n losses.append(loss)\n\n step += 1\n print(f\"\\rStep {step:4d}/{max_steps}: loss = {loss:.4f}\", end=\"\")",
"Step 1875/1875: loss = 0.2169"
],
[
"#@title Plot the training results\nimport bottleneck as bn\nsmoothed_losses = bn.move_mean(losses, 32)\nx = np.arange(len(losses))\n\nplt.plot(x, smoothed_losses, linewidth=2, label='loss (moving average)')\nplt.scatter(x, losses, s=16, alpha=0.2, label='loss (per training step)')\n\nplt.ylim(0)\nplt.legend(frameon=True)\nplt.xlabel(\"training step\")\nplt.ylabel(\"cross-entropy\")\nplt.title(\"training loss\");",
"_____no_output_____"
]
],
[
[
"## 7. Evaluate on Heldout Test Examples",
"_____no_output_____"
]
],
[
[
"#@title Evaluate the network on the test data.\naccuracies = []\n\nstep = 0\nmax_steps = x_test.shape[0] // BATCH_SIZE\n\nfor batch_start in range(0, x_test.shape[0], BATCH_SIZE):\n if batch_start + BATCH_SIZE > x_test.shape[0]:\n continue\n\n inputs = x_test[batch_start:batch_start + BATCH_SIZE]\n labels = y_test[batch_start:batch_start + BATCH_SIZE]\n\n prediction = compiled_model.predict(inputs).to_host()\n prediction = np.argmax(prediction, -1)\n accuracies.append(np.sum(prediction == labels) / BATCH_SIZE)\n\n step += 1\n print(f\"\\rStep {step:4d}/{max_steps}\", end=\"\")\nprint()\n\naccuracy = np.mean(accuracies)\nprint(f\"Test accuracy: {accuracy:.3f}\")",
"Step 312/312\nTest accuracy: 0.904\n"
],
[
"#@title Display inference predictions on a random selection of heldout data\nrows = 4\ncolumns = 4\nimages_to_display = rows * columns\nassert BATCH_SIZE >= images_to_display\n\nrandom_index = np.arange(x_test.shape[0])\nnp.random.shuffle(random_index)\nx_test = x_test[random_index]\ny_test = y_test[random_index]\n\npredictions = compiled_model.predict(x_test[:BATCH_SIZE]).to_host()\npredictions = np.argmax(predictions, -1)\n\nfig, axs = plt.subplots(rows, columns)\n\nfor i, ax in enumerate(np.ndarray.flatten(axs)):\n ax.imshow(x_test[i, :, :, 0])\n color = \"#000000\" if predictions[i] == y_test[i] else \"#ff7f0e\"\n ax.set_xlabel(f\"prediction={predictions[i]}\", color=color)\n ax.grid(False)\n ax.set_yticks([])\n ax.set_xticks([])\n\nfig.tight_layout()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecf95dc30136be22175b894df9e7f195751e951c | 32,337 | ipynb | Jupyter Notebook | Model backlog/Inference/279-tweet-inference-5fold-roberta-avg-last4-custom.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction | 0a775abe9a92c4bc2db957519c523be7655df8d8 | [
"MIT"
] | 11 | 2020-06-17T07:30:20.000Z | 2022-03-25T16:56:01.000Z | Model backlog/Inference/279-tweet-inference-5fold-roberta-avg-last4-custom.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction | 0a775abe9a92c4bc2db957519c523be7655df8d8 | [
"MIT"
] | null | null | null | Model backlog/Inference/279-tweet-inference-5fold-roberta-avg-last4-custom.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction | 0a775abe9a92c4bc2db957519c523be7655df8d8 | [
"MIT"
] | null | null | null | 34.110759 | 149 | 0.417355 | [
[
[
"## Dependencies",
"_____no_output_____"
]
],
[
[
"import json, glob\nfrom tweet_utility_scripts import *\nfrom tweet_utility_preprocess_roberta_scripts_aux import *\nfrom transformers import TFRobertaModel, RobertaConfig\nfrom tokenizers import ByteLevelBPETokenizer\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.models import Model",
"_____no_output_____"
]
],
[
[
"# Load data",
"_____no_output_____"
]
],
[
[
"test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')\n\nprint('Test samples: %s' % len(test))\ndisplay(test.head())",
"Test samples: 3534\n"
]
],
[
[
"# Model parameters",
"_____no_output_____"
]
],
[
[
"input_base_path = '/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/'\nwith open(input_base_path + 'config.json') as json_file:\n config = json.load(json_file)\n\nconfig",
"_____no_output_____"
],
[
"vocab_path = input_base_path + 'vocab.json'\nmerges_path = input_base_path + 'merges.txt'\nbase_path = '/kaggle/input/qa-transformers/roberta/'\n\n# vocab_path = base_path + 'roberta-base-vocab.json'\n# merges_path = base_path + 'roberta-base-merges.txt'\nconfig['base_model_path'] = base_path + 'roberta-base-tf_model.h5'\nconfig['config_path'] = base_path + 'roberta-base-config.json'\n\nmodel_path_list = glob.glob(input_base_path + '*.h5')\nmodel_path_list.sort()\n\nprint('Models to predict:')\nprint(*model_path_list, sep='\\n')",
"Models to predict:\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_1.h5\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_2.h5\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_3.h5\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_4.h5\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_5.h5\n"
]
],
[
[
"# Tokenizer",
"_____no_output_____"
]
],
[
[
"tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path, \n lowercase=True, add_prefix_space=True)",
"_____no_output_____"
]
],
[
[
"# Pre process",
"_____no_output_____"
]
],
[
[
"test['text'].fillna('', inplace=True)\ntest['text'] = test['text'].apply(lambda x: x.lower())\ntest['text'] = test['text'].apply(lambda x: x.strip())\n\nx_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)",
"_____no_output_____"
]
],
[
[
"# Model",
"_____no_output_____"
]
],
[
[
"module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True)\n\ndef model_fn(MAX_LEN):\n input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')\n attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')\n \n base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model')\n _, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})\n \n h12 = hidden_states[-1]\n h11 = hidden_states[-2]\n h10 = hidden_states[-3]\n h09 = hidden_states[-4]\n \n avg_hidden = layers.Average()([h12, h11, h10, h09])\n\n logits = layers.Dense(2, use_bias=False, name='qa_outputs')(avg_hidden)\n \n start_logits, end_logits = tf.split(logits, 2, axis=-1)\n start_logits = tf.squeeze(start_logits, axis=-1, name='y_start')\n end_logits = tf.squeeze(end_logits, axis=-1, name='y_end')\n \n model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])\n \n return model",
"_____no_output_____"
]
],
[
[
"# Make predictions",
"_____no_output_____"
]
],
[
[
"NUM_TEST_IMAGES = len(test)\ntest_start_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))\ntest_end_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))\n\nfor model_path in model_path_list:\n print(model_path)\n model = model_fn(config['MAX_LEN'])\n model.load_weights(model_path)\n \n test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'])) \n test_start_preds += test_preds[0]\n test_end_preds += test_preds[1]",
"/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_1.h5\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_2.h5\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_3.h5\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_4.h5\n/kaggle/input/279-tweet-train-5fold-roberta-avg-last4-custom-los/model_fold_5.h5\n"
]
],
[
[
"# Post process",
"_____no_output_____"
]
],
[
[
"test['start'] = test_start_preds.argmax(axis=-1)\ntest['end'] = test_end_preds.argmax(axis=-1)\n\ntest['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)\n\n# Post-process\ntest[\"selected_text\"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1)\ntest['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)\ntest['selected_text'].fillna(test['text'], inplace=True)",
"_____no_output_____"
]
],
[
[
"# Visualize predictions",
"_____no_output_____"
]
],
[
[
"test['text_len'] = test['text'].apply(lambda x : len(x))\ntest['label_len'] = test['selected_text'].apply(lambda x : len(x))\ntest['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))\ntest['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' ')))\ntest['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids))\ntest['label_tokenCnt'] = test['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids))\ntest['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1)\n\ndisplay(test.head(10))\ndisplay(test.describe())",
"_____no_output_____"
]
],
[
[
"# Test set predictions",
"_____no_output_____"
]
],
[
[
"submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')\nsubmission['selected_text'] = test['selected_text']\nsubmission.to_csv('submission.csv', index=False)\nsubmission.head(10)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecf97521f0f4b8b9cba266052b46440ddd00a212 | 7,042 | ipynb | Jupyter Notebook | gretel/create_synthetic_data_from_csv_or_df/blueprint.ipynb | robertlombardo/gretel-blueprints | 43191178afb43082b24f2ef846fda4256aafdb11 | [
"Apache-2.0"
] | 30 | 2020-10-27T20:00:24.000Z | 2022-02-07T13:21:25.000Z | gretel/create_synthetic_data_from_csv_or_df/blueprint.ipynb | robertlombardo/gretel-blueprints | 43191178afb43082b24f2ef846fda4256aafdb11 | [
"Apache-2.0"
] | 14 | 2021-04-29T20:49:34.000Z | 2022-01-24T19:06:48.000Z | gretel/create_synthetic_data_from_csv_or_df/blueprint.ipynb | robertlombardo/gretel-blueprints | 43191178afb43082b24f2ef846fda4256aafdb11 | [
"Apache-2.0"
] | 18 | 2021-01-05T09:25:39.000Z | 2022-03-19T14:58:34.000Z | 27.400778 | 328 | 0.596137 | [
[
[
"# Create a synthetic version of your own CSV or DataFrame\n\nThis blueprint utilizes Gretel's premium SDKs to create a synthetic version of your own data. Our SDKs create automatic data validators to help ensure the data generated has the same semantics as the source data. Additionally, the SDKs do automatic header clustering to help maintain statistical relations between columns.",
"_____no_output_____"
]
],
[
[
"!pip install -U \"gretel-client<0.8.0\" gretel-synthetics pandas",
"_____no_output_____"
],
[
"# Load your Gretel API key. You can acquire this from the Gretel Console \n# @ https://console.gretel.cloud\n\nimport pandas as pd\nfrom gretel_client import get_cloud_client\n\npd.set_option('max_colwidth', None)\n\nclient = get_cloud_client(prefix=\"api\", api_key=\"prompt\")\nclient.install_packages()",
"_____no_output_____"
],
[
"# Load and preview dataset\n\nimport pandas as pd\n\ndataset_path = 'https://gretel-public-website.s3-us-west-2.amazonaws.com/datasets/healthcare-analytics-vidhya/train_data.csv'\nnrows = 10000 # We will use this later when generating data\ntraining_df = pd.read_csv(dataset_path, nrows=nrows)\nprint(training_df.head())",
"_____no_output_____"
],
[
"# Create the Gretel Synthetics Training / Model Configuration\n#\n# Gretel now offers Configuration Templates that provide starting points for a variety\n# of training data characteristics.\n#\n# You may browse the options here: https://github.com/gretelai/gretel-blueprints/tree/main/config_templates/gretel/synthetics\n#\n# The helper function below will fetch the configuration based on the filename *WITHOUT the file extension*\n\nfrom pathlib import Path\n\ncheckpoint_dir = str(Path.cwd() / \"checkpoints-synthetics\")\n\ntry:\n from gretel_client import get_synthetics_config\n \n # NOTE: Replace the \"default\" param with any of the configuration filenames (minus extension)\n #\n # https://github.com/gretelai/gretel-blueprints/tree/main/config_templates/gretel/synthetics\n #\n # example: get_synthetics_config(\"low-record-count\")\n\n config_template = get_synthetics_config(\"default\")\n print(f\"Loaded config: {config_template}\")\nexcept ImportError:\n print(\"ERROR: Could not load remote template, using default params. Please ensure you have the latest gretel-client installed.\")\n config_template = {\"epochs\": 100}\n \n\nconfig_template[\"checkpoint_dir\"] = checkpoint_dir\n\n# Set or update any custom parameters here\n \nconfig_template[\"overwrite\"] = True",
"_____no_output_____"
],
[
"# Capture transient import errors in Google Colab\n\ntry:\n from gretel_helpers.synthetics import SyntheticDataBundle\nexcept FileNotFoundError:\n from gretel_helpers.synthetics import SyntheticDataBundle",
"_____no_output_____"
],
[
"# Create a Gretel Synthetic Data Bundle\n\nfrom gretel_helpers.synthetics import create_df, SyntheticDataBundle\n\nmodel = SyntheticDataBundle(\n training_df=training_df,\n delimiter=None, # if ``None``, it will try and automatically be detected, otherwise you can set it\n auto_validate=True, # build record validators that learn per-column, these are used to ensure generated records have the same composition as the original\n synthetic_config=config_template, # the config for Synthetics\n)",
"_____no_output_____"
],
[
"model.build()",
"_____no_output_____"
],
[
"model.train()",
"_____no_output_____"
],
[
"# num_lines: how many rows to generate\n# max_invalid: the number of rows that do not pass semantic validation, if this number is exceeded, training will\n# stop\nmodel.generate(num_lines=nrows, max_invalid=nrows)",
"_____no_output_____"
],
[
"model.get_synthetic_df()",
"_____no_output_____"
],
[
"# Generate report that shows the statistical performance between the training and synthetic data\nimport IPython\n\nreport_path = './report.html'\nmodel.generate_report(report_path=report_path)\nIPython.display.HTML(filename=report_path)",
"_____no_output_____"
],
[
"# Optionally save your model\n\nmodel.save(\"my_model.tar.gz\")",
"_____no_output_____"
],
[
"# Save synthetic dataframe locally and to a private Gretel project \n\ndf = model.get_synthetic_df()\ndf.to_csv('synthetic-data.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecf98ad3a2fb542d78a67a8f1d59f33d1557993f | 95,788 | ipynb | Jupyter Notebook | pim_draw_plot_copy.ipynb | 7bvcxz/PIM-Simulator | 5abe9cd8b4368c29caa62c75be365901c31c4569 | [
"MIT"
] | 9 | 2021-11-29T09:06:28.000Z | 2022-03-30T15:15:36.000Z | pim_draw_plot_copy.ipynb | 7bvcxz/PIM_Func_Sim | a1a23846e6719ffb6f0222432c1c75311d0a5104 | [
"MIT"
] | null | null | null | pim_draw_plot_copy.ipynb | 7bvcxz/PIM_Func_Sim | a1a23846e6719ffb6f0222432c1c75311d0a5104 | [
"MIT"
] | 2 | 2022-03-07T11:50:07.000Z | 2022-03-25T14:53:45.000Z | 360.105263 | 24,314 | 0.928248 | [
[
[
"import matplotlib.pyplot as plt\n\nwith open(\"out_dsim.txt\", \"r\") as f:\n lines = f.readlines()\n\nread_clocks = []\nwrite_clocks = []\nread_addrs = []\nwrite_addrs = []\n\nfor line in lines:\n log = line.split()\n if len(log) == 3 and log[0].isnumeric():\n if log[1] == 'read':\n read_clocks.append(int(log[0], 16))\n read_addrs.append(int(log[2], 16))\n else:\n write_clocks.append(int(log[0], 16))\n write_addrs.append(int(log[2], 16))\n\nf, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))\n\nax1.plot(read_clocks, read_addrs, marker='+', markersize=1, linestyle=\"None\", label=\"read\")\nax1.plot(write_clocks, write_addrs, marker='+', markersize=1, linestyle=\"None\", label=\"write\")\nax2.plot(read_clocks, read_addrs, marker='+', markersize=1, linestyle=\"None\", label=\"read\")\nax2.plot(write_clocks, write_addrs, marker='+', markersize=1, linestyle=\"None\", label=\"write\")\nplt.legend()\n\nax1.set_ylim(0xffe00000, 0x100000000)\nax2.set_ylim(-10000, 500000)\n\nplt.show()\n\n",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nwith open(\"out_g.txt\", \"r\") as f:\n lines = f.readlines()\n\nclocks = []\naddrs = []\n\nfor line in lines:\n log = line.split()\n if len(log) == 2 and log[0].isnumeric():\n clocks.append(int(log[0][:-1]))\n addrs.append(int(log[1], 16))\n\nplt.plot(clocks, addrs, marker='+', markersize=2, linestyle=\"None\", label=\"read\")\nplt.legend()\n\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nwith open(\"out3.txt\", \"r\") as f:\n lines = f.readlines()\n\nread_clocks = []\nwrite_clocks = []\nread_addrs = []\nwrite_addrs = []\n\nfor line in lines:\n log = line.split()\n if len(log) == 14:\n if log[2] == \"Read\":\n read_clocks.append(int(log[0][:-1]))\n read_addrs.append(int(log[10], 16)-0x140000000)\n elif log[2] == \"Write\":\n write_clocks.append(int(log[0][:-1]))\n write_addrs.append(int(log[10], 16)-0x140000000)\n\nf, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))\n\nax1.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"read\")\nax1.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"write\")\nax2.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"read\")\nax2.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"write\")\nplt.legend()\nax1.set_ylim(0xffe00000, 0x100000000)\nax2.set_ylim(-10000, 500000)\n\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nwith open(\"out4.txt\", \"r\") as f:\n lines = f.readlines()\n\nread_clocks = []\nwrite_clocks = []\nread_addrs = []\nwrite_addrs = []\n\nfor line in lines:\n log = line.split()\n if len(log) >= 13:\n if log[2] == \"Read\":\n read_clocks.append(int(log[0][:-1]))\n read_addrs.append(int(log[10], 16)-0x140000000)\n elif log[2] == \"Write\":\n write_clocks.append(int(log[0][:-1]))\n write_addrs.append(int(log[10], 16)-0x140000000)\n\nf, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))\n\nax1.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"read\")\nax1.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"write\")\nax2.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"read\")\nax2.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"write\")\nplt.legend()\nax1.set_ylim(0xffe00000, 0x100000000)\nax2.set_ylim(-10000, 500000)\n\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nwith open(\"out.txt\", \"r\") as f:\n lines = f.readlines()\n\nread_clocks = []\nwrite_clocks = []\nread_addrs = []\nwrite_addrs = []\n\nfor line in lines:\n log = line.split()\n if len(log) >= 13:\n if log[2] == \"Read\":\n read_clocks.append(int(log[0][:-1]))\n read_addrs.append(int(log[10], 16)-0x140000000)\n elif log[2] == \"Write\":\n write_clocks.append(int(log[0][:-1]))\n write_addrs.append(int(log[10], 16)-0x140000000)\n\nf, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))\n\nax1.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"read\")\nax1.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"write\")\nax2.plot(read_clocks, read_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"read\")\nax2.plot(write_clocks, write_addrs, marker='+', markersize=2, linestyle=\"None\", label=\"write\")\nplt.legend()\nax1.set_ylim(0xffe00000, 0x100000000)\nax2.set_ylim(-10000, 500000)\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecf98bd2a49bcf8addae4c75b4cddb8dec001448 | 921,381 | ipynb | Jupyter Notebook | 4 jigsaw/simple-eda-text-preprocessing-jigsaw.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | 2 | 2020-01-25T08:31:14.000Z | 2022-03-23T18:24:03.000Z | 4 jigsaw/simple-eda-text-preprocessing-jigsaw.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | null | null | null | 4 jigsaw/simple-eda-text-preprocessing-jigsaw.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | null | null | null | 339.491894 | 198,140 | 0.908248 | [
[
[
"Many thanks to the following kagglers and their great kernels:\n\n@Andrew Lukyanenko, https://www.kaggle.com/artgor/toxicity-eda-model-interpretation-and-more\n\n@Eike Dehling: https://www.kaggle.com/eikedehling/feature-engineering\n\n@Jagan: https://www.kaggle.com/jagangupta/stop-the-s-toxic-comments-eda\n\n@Theo Viel: https://www.kaggle.com/theoviel/improve-your-score-with-some-text-preprocessing\n\n@Aditya Soni: https://www.kaggle.com/adityaecdrid/public-version-text-cleaning-vocab-65\n\n@Guillaume Martin: https://www.kaggle.com/gemartin/load-data-reduce-memory-usage\n\n@Shujian Liu: https://www.kaggle.com/shujian/test-the-difficulty-of-this-classification-tasks\n\nThanks @kotakota1110 for his suggestion in Time Series part.\n",
"_____no_output_____"
],
[
"**Content**\n\n* Text Features heatmap\n\n* Weighted toxic comments & different identities\n\n* Identities & Comment Labels.\n\n* Time Series Toxicity with Race, Religion, Sexual orientation, Gender and Disability (updated April 18, weighted the data again)\n\n* What happened in Jan 2017? (updated April 14)\n\n* Which Time are People More Toxic? (updated April 16)\n\n* Words Frequented and Toxic_Mask\n\n* Text Processing (updated April 21)\n\n* Memory Reducing (updated April 22)\n\n* Test the Difficulty of the Task (updated April 24)\n\n-----To be added",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom wordcloud import WordCloud ,STOPWORDS\nfrom PIL import Image\ntrain = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/train.csv')\ntest = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/test.csv')\nsub = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/sample_submission.csv')",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"train.isnull().sum(), test.isnull().sum()",
"_____no_output_____"
]
],
[
[
"FE: Some features might have relations with Toxicity, like capitals letters in the text, punctuations in the texts. Add the new features into the training set.",
"_____no_output_____"
]
],
[
[
"train['total_length'] = train['comment_text'].apply(len)\ntrain['capitals'] = train['comment_text'].apply(lambda comment: sum(1 for c in comment if c.isupper()))\ntrain['caps_vs_length'] = train.apply(lambda row: float(row['capitals'])/float(row['total_length']),axis=1)\ntrain['num_exclamation_marks'] = train['comment_text'].apply(lambda comment: comment.count('!'))\ntrain['num_question_marks'] = train['comment_text'].apply(lambda comment: comment.count('?'))\ntrain['num_punctuation'] = train['comment_text'].apply(lambda comment: sum(comment.count(w) for w in '.,;:'))\ntrain['num_symbols'] = train['comment_text'].apply(lambda comment: sum(comment.count(w) for w in '*&$%'))\ntrain['num_words'] = train['comment_text'].apply(lambda comment: len(comment.split()))\ntrain['num_unique_words'] = train['comment_text'].apply(lambda comment: len(set(w for w in comment.split())))\ntrain['words_vs_unique'] = train['num_unique_words'] / train['num_words']\ntrain['num_smilies'] = train['comment_text'].apply(lambda comment: sum(comment.count(w) for w in (':-)', ':)', ';-)', ';)')))",
"_____no_output_____"
],
[
"features = ('total_length', 'capitals', 'caps_vs_length', 'num_exclamation_marks','num_question_marks', 'num_punctuation', 'num_words', 'num_unique_words','words_vs_unique', 'num_smilies', 'num_symbols')\ncolumns = ('target', 'severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat', 'funny', 'wow', 'sad', 'likes', 'disagree', 'sexual_explicit','identity_annotator_count', 'toxicity_annotator_count')\nrows = [{c:train[f].corr(train[c]) for c in columns} for f in features]\ntrain_correlations = pd.DataFrame(rows, index=features)",
"_____no_output_____"
]
],
[
[
"Let's see the correlations between new features and targets.",
"_____no_output_____"
]
],
[
[
"train_correlations",
"_____no_output_____"
]
],
[
[
"Correlations between new features and targets in heatmap:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10, 6))\nsns.set(font_scale=1)\nax = sns.heatmap(train_correlations, vmin=-0.1, vmax=0.1, center=0.0)",
"_____no_output_____"
]
],
[
[
"Percent of toxic comments related to different identities, using target and popolation amount of each identity as weights:",
"_____no_output_____"
]
],
[
[
"demographics = train.loc[:, ['target']+list(train)[slice(8,32)]].dropna()\nweighted_toxic = demographics.iloc[:, 1:].multiply(demographics.iloc[:, 0], axis=\"index\").sum()/demographics.iloc[:, 1:][demographics.iloc[:, 1:]>0].count()\nweighted_toxic = weighted_toxic.sort_values(ascending=False)\nplt.figure(figsize=(30,20))\nsns.set(font_scale=3)\nax = sns.barplot(x = weighted_toxic.values, y = weighted_toxic.index, alpha=0.8)\nplt.ylabel('Demographics')\nplt.xlabel('Weighted Toxic')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Meanwhile, we can check the correlations between identities and the comment labels.",
"_____no_output_____"
]
],
[
[
"identities = tuple(train.iloc[:, 8:32])\nrows = [{c:train[f].corr(train[c]) for c in columns} for f in identities]\npoptoxicity_correlations = pd.DataFrame(rows, index=identities)",
"_____no_output_____"
],
[
"poptoxicity_correlations",
"_____no_output_____"
],
[
"plt.figure(figsize=(12, 8))\nsns.set(font_scale=1)\nax = sns.heatmap(poptoxicity_correlations, vmin=-0.1, vmax=0.1, center=0.0)",
"_____no_output_____"
]
],
[
[
"We can also check the Time Series for Toxicity with different identities:\n\n(Thanks again for @kotakota1110's suggestion. Now we are using \"target\" and \"identity data amount\" to weight the data twice, which make more sense.)",
"_____no_output_____"
]
],
[
[
"withdate = train.loc[:, ['created_date', 'target']+list(train)[slice(8,32)]].dropna()\nraceweighted = withdate.iloc[:, 2:]/withdate.iloc[:, 2:].sum()\nrace_target_weighted = raceweighted.multiply(withdate.iloc[:, 1], axis=\"index\")\nrace_target_weighted['created_date'] = pd.to_datetime(withdate['created_date']).values.astype('datetime64[M]')\nweighted_demo = race_target_weighted.groupby(['created_date']).sum().sort_index()",
"_____no_output_____"
],
[
"import plotly\nimport plotly.plotly as py\nimport cufflinks as cf\nimport plotly.graph_objs as go\nplotly.tools.set_credentials_file(username='13217', api_key='FG6itEaCMouvPJVR7DlI')\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\ninit_notebook_mode(connected=True)",
"_____no_output_____"
],
[
"weighted_demo[['white', 'asian', 'black', 'jewish', 'latino', 'other_race_or_ethnicity']].iplot(title = 'Time Series Toxicity & Race', filename='Time Series Toxicity & Race' )\n\n# Click on the legend to change display. Double click for single identity.",
"_____no_output_____"
],
[
"weighted_demo[['atheist', 'buddhist', 'christian', 'hindu', 'muslim', 'other_religion']].iplot(title = 'Time Series Toxicity & Religion', filename='Time Series Toxicity & Religion')\n\n# Click on the legend to change display. Double click for single identity.",
"_____no_output_____"
],
[
"weighted_demo[['heterosexual', 'homosexual_gay_or_lesbian', 'bisexual', 'other_sexual_orientation']].iplot(title = 'Time Series Toxicity & Sexual Orientation', filename='Time Series Toxicity & Sexual Orientation')\n\n# Click on the legend to change display. Double click for single identity.",
"_____no_output_____"
],
[
"weighted_demo[['male', 'female', 'transgender', 'other_gender']].iplot(title = 'Time Series Toxicity & Gender', filename='Time Series Toxicity & Gender')\n\n# Click on the legend to change display. Double click for single identity.",
"_____no_output_____"
],
[
"weighted_demo[['physical_disability', 'intellectual_or_learning_disability', 'psychiatric_or_mental_illness', 'other_disability']].iplot(title = 'Time Series Toxicity & Disability', filename='Time Series Toxicity & Disability')\n\n# Click on the legend to change display. Double click for single identity.",
"_____no_output_____"
]
],
[
[
"When plotting these charts, I found that most data have a peak around Jan 2017. A bit curious. Let's check what's different between Jan 2017 and other time.",
"_____no_output_____"
]
],
[
[
"alldate_toxicity = train[train['target'] >= 0.5].loc[:, ['created_date', 'target', 'comment_text']].dropna()\nalldate_toxicity['created_date'] = pd.to_datetime(alldate_toxicity['created_date']).values.astype('datetime64[M]')\njan_2017_toxicity = alldate_toxicity[alldate_toxicity['created_date'] == '2017-01-01']\n\nfrom nltk.corpus import stopwords\ndef check_frequency(data = alldate_toxicity['comment_text'], n = 20):\n stop = stopwords.words('english')\n data = data.apply(lambda x: \" \".join(x.lower() for x in x.split()))\n data = data.str.replace('[^\\w\\s]','')\n data = data.apply(lambda x: \" \".join(x for x in x.split() if x not in stop))\n freq = pd.Series(' '.join(data).split()).value_counts()[:n]\n return freq\n\ntop_10_toxicity_othertime = check_frequency(data = alldate_toxicity[alldate_toxicity['created_date'] != '2017-01-01']['comment_text'], n = 10)\ntop_10_toxicity_jan_2017 = check_frequency(data = jan_2017_toxicity['comment_text'], n = 10)",
"_____no_output_____"
]
],
[
[
"Which toxicity related word appears Top 10 in jan_2017, but not in other time Top 10?",
"_____no_output_____"
]
],
[
[
"top_10_toxicity_jan_2017.index.difference(top_10_toxicity_othertime.index)",
"_____no_output_____"
]
],
[
[
"None of them... All the same... Then let's theck their frequency",
"_____no_output_____"
]
],
[
[
"percent_toxicity_othertime = top_10_toxicity_othertime/alldate_toxicity[alldate_toxicity['created_date'] != '2017-01-01']['comment_text'].str.split().str.len().sum()\npercent_toxicity_jan_2017 = top_10_toxicity_jan_2017/jan_2017_toxicity['comment_text'].str.split().str.len().sum()\ntop_toxicity = pd.concat([percent_toxicity_jan_2017, percent_toxicity_othertime], axis=1, sort=False)\ntop_toxicity.columns = ['Jan_2017', 'Other_Time']\ntop_toxicity['Difference'] = top_toxicity['Jan_2017'] - top_toxicity['Other_Time']",
"_____no_output_____"
],
[
"top_toxicity.head(30)",
"_____no_output_____"
],
[
"import plotly.graph_objs as go\ntrace1 = go.Bar(\n x=top_toxicity.index,\n y=top_toxicity['Jan_2017'],\n name='Jan_2017'\n)\ntrace2 = go.Bar(\n x=top_toxicity.index,\n y=top_toxicity['Other_Time'],\n name='Other_Time'\n)\n\ndata = [trace2, trace1]\nlayout = go.Layout(\n barmode='group'\n)\nlayout = go.Layout(yaxis=dict(tickformat=\".2%\"))\nfig = go.Figure(data=data, layout=layout)\npy.iplot(fig, title = 'Top Toxicity Comarision', filename='top_toxicity_comarision')",
"_____no_output_____"
]
],
[
[
"After checking the whole time series, I'm also curious about, Which Time are People More Toxic?",
"_____no_output_____"
]
],
[
[
"train['datetime64'] = pd.to_datetime(train['created_date']).values.astype('datetime64[h]')\ntrain['hour'] = train['datetime64'].dt.hour\nall_comments_by_hour = train['target'].groupby(train['hour']).sum().sort_index()/train['target'].groupby(train['hour']).sum().sum()\ntoxic_comments_by_hour = train[train['target'] >= 0.5]['target'].groupby(train['hour']).sum().sort_index()/train[train['target'] >= 0.5]['target'].groupby(train['hour']).sum().sum()\ncomments_hour_check = pd.concat([all_comments_by_hour, toxic_comments_by_hour], axis=1, sort=False)\ncomments_hour_check.columns = ['all_comments', 'toxic_comments']",
"_____no_output_____"
],
[
"labels = ['Midnight', 'Morning', 'Noon', 'Evening', 'Midnight']\ntickvals = ['0', '6', '12', '18', comments_hour_check.index.max()]\n\ntrace1 = go.Scatter(\n x=comments_hour_check.index,\n y=comments_hour_check['all_comments'],\n name = 'comment percent per H',\n line = dict(\n color = ('rgb(22, 96, 167)'),\n width = 1)\n)\ntrace2 = go.Scatter(\n x=comments_hour_check.index,\n y=comments_hour_check['toxic_comments'],\n name = 'toxic comment percent per H',\n line = dict(\n color = ('rgb(205, 12, 24)'),\n width = 1,)\n)\n\ntrace3 = go.Bar(\n x=comments_hour_check.index,\n y=comments_hour_check['toxic_comments']-comments_hour_check['all_comments'],\n name = 'More Toxic Comment Ratio'\n)\n\ndata = [trace1, trace2, trace3]\n\nlayout = go.Layout(yaxis=dict(tickformat=\".2%\"),\n title = 'Which Time are People More Toxic',\n xaxis=go.layout.XAxis(\n ticktext=labels, \n tickvals=tickvals\n ),\n )\nfig = go.Figure(data=data, layout=layout)\npy.iplot(fig,filename='Which Time are People More Toxic')",
"_____no_output_____"
]
],
[
[
"Moreover, we can do something fun, digging into the text with WordCloud. Let's check the Words frequented in Toxic Comments.",
"_____no_output_____"
]
],
[
[
"def toxicwordcloud(subset=train[train.target>0.7], title = \"Words Frequented\", picture = \"../input/imagesforkernal/anger.png\"):\n stopword=set(STOPWORDS)\n toxic_mask=np.array(Image.open(picture))\n toxic_mask=toxic_mask[:,:,1]\n text=subset.comment_text.values\n wc= WordCloud(background_color=\"black\",max_words=4000,mask=toxic_mask,stopwords=stopword)\n wc.generate(\" \".join(text))\n plt.figure(figsize=(8,8))\n plt.xticks([])\n plt.yticks([])\n plt.axis('off')\n plt.title(title, fontsize=20)\n plt.imshow(wc.recolor(colormap= 'gist_earth' , random_state=244), alpha=0.98)",
"_____no_output_____"
],
[
"toxicwordcloud(picture = \"../input/imagesforkernal/toxic-sign.png\")",
"_____no_output_____"
],
[
"toxicwordcloud(subset = train[(train['female'] >0)&(train['target']>0.8)],title = \"Words Frequented - Female Related\", picture = \"../input/imagesforkernal/anger.png\")",
"_____no_output_____"
],
[
"toxicwordcloud(subset = train[(train['insult'] >0.8)&(train['target']>0.8)],title = \"Words Frequented - Insult Related\", picture = \"../input/imagesforkernal/biohazard-symbol.png\")",
"_____no_output_____"
]
],
[
[
"Some simple clasic text precessing and generating the new dataset",
"_____no_output_____"
]
],
[
[
"import operator \nimport re\nimport gensim",
"/opt/conda/lib/python3.6/site-packages/smart_open/ssh.py:34: UserWarning:\n\nparamiko missing, opening SSH/SCP/SFTP paths will be disabled. `pip install paramiko` to suppress\n\n"
],
[
"train = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/train.csv')\ntest = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/test.csv')",
"_____no_output_____"
],
[
"# Due to the memory limit, here we only are using glove, while if you have a better machine, you can also load crawl and other embeddings\n\ndf = pd.concat([train.iloc[:, [0,2]] ,test.iloc[:, :2]])\nglove = '../input/glove840b300dtxt/glove.840B.300d.txt'\n# crawl = '../input/fasttext-crawl-300d-2m/crawl-300d-2M.vec'\n \ndef load_embed(file):\n def get_coefs(word,*arr): \n return word, np.asarray(arr, dtype='float32')\n if file == '../input/fasttext-crawl-300d-2m/crawl-300d-2M.vec':\n embeddings_index = gensim.models.KeyedVectors.load_word2vec_format(crawl)\n else:\n embeddings_index = dict(get_coefs(*o.split(\" \")) for o in open(file, encoding='latin'))\n return embeddings_index",
"_____no_output_____"
],
[
"print(\"Extracting GloVe embedding\")\nembed_glove = load_embed(glove)\n# print(\"Extracting Crawl embedding\")\n# embed_crawl = load_embed(crawl)",
"Extracting GloVe embedding\n"
],
[
"def build_vocab(texts):\n sentences = texts.apply(lambda x: x.split()).values\n vocab = {}\n for sentence in sentences:\n for word in sentence:\n try:\n vocab[word] += 1\n except KeyError:\n vocab[word] = 1\n return vocab\n\nvocab = build_vocab(df['comment_text'])\n\ndef check_coverage(vocab, embeddings_index):\n known_words = {}\n unknown_words = {}\n nb_known_words = 0\n nb_unknown_words = 0\n for word in vocab.keys():\n try:\n known_words[word] = embeddings_index[word]\n nb_known_words += vocab[word]\n except:\n unknown_words[word] = vocab[word]\n nb_unknown_words += vocab[word]\n pass\n\n print('Found embeddings for {:.2%} of vocab'.format(len(known_words) / len(vocab)))\n print('Found embeddings for {:.2%} of all text'.format(nb_known_words / (nb_known_words + nb_unknown_words)))\n unknown_words = sorted(unknown_words.items(), key=operator.itemgetter(1))[::-1]\n\n return unknown_words",
"_____no_output_____"
],
[
"print(\"Glove : \")\noov_glove = check_coverage(vocab, embed_glove)\n# print(\"Crawl : \")\n# oov_crawl = check_coverage(vocab, embed_crawl)",
"Glove : \nFound embeddings for 15.52% of vocab\nFound embeddings for 89.61% of all text\n"
],
[
"df['lowered_comment'] = df['comment_text'].apply(lambda x: x.lower())\nvocab_low = build_vocab(df['lowered_comment'])\nprint(\"Glove : \")\noov_glove = check_coverage(vocab_low, embed_glove)\n# print(\"Crawl : \")\n# oov_crawl = check_coverage(vocab_low, embed_crawl)",
"Glove : \nFound embeddings for 11.77% of vocab\nFound embeddings for 89.33% of all text\n"
],
[
"def add_lower(embedding, vocab):\n count = 0\n for word in vocab:\n if word in embedding and word.lower() not in embedding: \n embedding[word.lower()] = embedding[word]\n count += 1\n print(f\"Added {count} words to embedding\")\n \nprint(\"Glove : \")\nadd_lower(embed_glove, vocab)\n# oov_glove = check_coverage(vocab_low, embed_glove)\n# print(\"Crawl : \")\n# add_lower(embed_crawl, vocab)\n# oov_crawl = check_coverage(vocab_low, embed_crawl)\n\n# Check Result\noov_glove[:10]",
"Glove : \nAdded 25061 words to embedding\n"
]
],
[
[
"The following contraction_mapping is borrowed from @Aditya Soni. Credit goes to https://www.kaggle.com/adityaecdrid/public-version-text-cleaning-vocab-65",
"_____no_output_____"
]
],
[
[
"contraction_mapping = {\n \"Trump's\" : 'trump is',\"'cause\": 'because',',cause': 'because',';cause': 'because',\"ain't\": 'am not','ain,t': 'am not',\n 'ain;t': 'am not','ain´t': 'am not','ain’t': 'am not',\"aren't\": 'are not',\n 'aren,t': 'are not','aren;t': 'are not','aren´t': 'are not','aren’t': 'are not',\"can't\": 'cannot',\"can't've\": 'cannot have','can,t': 'cannot','can,t,ve': 'cannot have',\n 'can;t': 'cannot','can;t;ve': 'cannot have',\n 'can´t': 'cannot','can´t´ve': 'cannot have','can’t': 'cannot','can’t’ve': 'cannot have',\n \"could've\": 'could have','could,ve': 'could have','could;ve': 'could have',\"couldn't\": 'could not',\"couldn't've\": 'could not have','couldn,t': 'could not','couldn,t,ve': 'could not have','couldn;t': 'could not',\n 'couldn;t;ve': 'could not have','couldn´t': 'could not',\n 'couldn´t´ve': 'could not have','couldn’t': 'could not','couldn’t’ve': 'could not have','could´ve': 'could have',\n 'could’ve': 'could have',\"didn't\": 'did not','didn,t': 'did not','didn;t': 'did not','didn´t': 'did not',\n 'didn’t': 'did not',\"doesn't\": 'does not','doesn,t': 'does not','doesn;t': 'does not','doesn´t': 'does not',\n 'doesn’t': 'does not',\"don't\": 'do not','don,t': 'do not','don;t': 'do not','don´t': 'do not','don’t': 'do not',\n \"hadn't\": 'had not',\"hadn't've\": 'had not have','hadn,t': 'had not','hadn,t,ve': 'had not have','hadn;t': 'had not',\n 'hadn;t;ve': 'had not have','hadn´t': 'had not','hadn´t´ve': 'had not have','hadn’t': 'had not','hadn’t’ve': 'had not have',\"hasn't\": 'has not','hasn,t': 'has not','hasn;t': 'has not','hasn´t': 'has not','hasn’t': 'has not',\n \"haven't\": 'have not','haven,t': 'have not','haven;t': 'have not','haven´t': 'have not','haven’t': 'have not',\"he'd\": 'he would',\n \"he'd've\": 'he would have',\"he'll\": 'he will',\n \"he's\": 'he is','he,d': 'he would','he,d,ve': 'he would have','he,ll': 'he will','he,s': 'he is','he;d': 'he would',\n 'he;d;ve': 'he would have','he;ll': 'he will','he;s': 'he is','he´d': 'he would','he´d´ve': 'he would have','he´ll': 'he will',\n 'he´s': 'he is','he’d': 'he would','he’d’ve': 'he would have','he’ll': 'he will','he’s': 'he is',\"how'd\": 'how did',\"how'll\": 'how will',\n \"how's\": 'how is','how,d': 'how did','how,ll': 'how will','how,s': 'how is','how;d': 'how did','how;ll': 'how will',\n 'how;s': 'how is','how´d': 'how did','how´ll': 'how will','how´s': 'how is','how’d': 'how did','how’ll': 'how will',\n 'how’s': 'how is',\"i'd\": 'i would',\"i'll\": 'i will',\"i'm\": 'i am',\"i've\": 'i have','i,d': 'i would','i,ll': 'i will',\n 'i,m': 'i am','i,ve': 'i have','i;d': 'i would','i;ll': 'i will','i;m': 'i am','i;ve': 'i have',\"isn't\": 'is not',\n 'isn,t': 'is not','isn;t': 'is not','isn´t': 'is not','isn’t': 'is not',\"it'd\": 'it would',\"it'll\": 'it will',\"It's\":'it is',\n \"it's\": 'it is','it,d': 'it would','it,ll': 'it will','it,s': 'it is','it;d': 'it would','it;ll': 'it will','it;s': 'it is','it´d': 'it would','it´ll': 'it will','it´s': 'it is',\n 'it’d': 'it would','it’ll': 'it will','it’s': 'it is',\n 'i´d': 'i would','i´ll': 'i will','i´m': 'i am','i´ve': 'i have','i’d': 'i would','i’ll': 'i will','i’m': 'i am',\n 'i’ve': 'i have',\"let's\": 'let us','let,s': 'let us','let;s': 'let us','let´s': 'let us',\n 'let’s': 'let us',\"ma'am\": 'madam','ma,am': 'madam','ma;am': 'madam',\"mayn't\": 'may not','mayn,t': 'may not','mayn;t': 'may not',\n 'mayn´t': 'may not','mayn’t': 'may not','ma´am': 'madam','ma’am': 'madam',\"might've\": 'might have','might,ve': 'might 
have','might;ve': 'might have',\"mightn't\": 'might not','mightn,t': 'might not','mightn;t': 'might not','mightn´t': 'might not',\n 'mightn’t': 'might not','might´ve': 'might have','might’ve': 'might have',\"must've\": 'must have','must,ve': 'must have','must;ve': 'must have',\n \"mustn't\": 'must not','mustn,t': 'must not','mustn;t': 'must not','mustn´t': 'must not','mustn’t': 'must not','must´ve': 'must have',\n 'must’ve': 'must have',\"needn't\": 'need not','needn,t': 'need not','needn;t': 'need not','needn´t': 'need not','needn’t': 'need not',\"oughtn't\": 'ought not','oughtn,t': 'ought not','oughtn;t': 'ought not',\n 'oughtn´t': 'ought not','oughtn’t': 'ought not',\"sha'n't\": 'shall not','sha,n,t': 'shall not','sha;n;t': 'shall not',\"shan't\": 'shall not',\n 'shan,t': 'shall not','shan;t': 'shall not','shan´t': 'shall not','shan’t': 'shall not','sha´n´t': 'shall not','sha’n’t': 'shall not',\n \"she'd\": 'she would',\"she'll\": 'she will',\"she's\": 'she is','she,d': 'she would','she,ll': 'she will',\n 'she,s': 'she is','she;d': 'she would','she;ll': 'she will','she;s': 'she is','she´d': 'she would','she´ll': 'she will',\n 'she´s': 'she is','she’d': 'she would','she’ll': 'she will','she’s': 'she is',\"should've\": 'should have','should,ve': 'should have','should;ve': 'should have',\n \"shouldn't\": 'should not','shouldn,t': 'should not','shouldn;t': 'should not','shouldn´t': 'should not','shouldn’t': 'should not','should´ve': 'should have',\n 'should’ve': 'should have',\"that'd\": 'that would',\"that's\": 'that is','that,d': 'that would','that,s': 'that is','that;d': 'that would',\n 'that;s': 'that is','that´d': 'that would','that´s': 'that is','that’d': 'that would','that’s': 'that is',\"there'd\": 'there had',\n \"there's\": 'there is','there,d': 'there had','there,s': 'there is','there;d': 'there had','there;s': 'there is',\n 'there´d': 'there had','there´s': 'there is','there’d': 'there had','there’s': 'there is',\n \"they'd\": 'they would',\"they'll\": 'they will',\"they're\": 'they are',\"they've\": 'they have',\n 'they,d': 'they would','they,ll': 'they will','they,re': 'they are','they,ve': 'they have','they;d': 'they would','they;ll': 'they will','they;re': 'they are',\n 'they;ve': 'they have','they´d': 'they would','they´ll': 'they will','they´re': 'they are','they´ve': 'they have','they’d': 'they would','they’ll': 'they will',\n 'they’re': 'they are','they’ve': 'they have',\"wasn't\": 'was not','wasn,t': 'was not','wasn;t': 'was not','wasn´t': 'was not',\n 'wasn’t': 'was not',\"we'd\": 'we would',\"we'll\": 'we will',\"we're\": 'we are',\"we've\": 'we have','we,d': 'we would','we,ll': 'we will',\n 'we,re': 'we are','we,ve': 'we have','we;d': 'we would','we;ll': 'we will','we;re': 'we are','we;ve': 'we have',\n \"weren't\": 'were not','weren,t': 'were not','weren;t': 'were not','weren´t': 'were not','weren’t': 'were not','we´d': 'we would','we´ll': 'we will',\n 'we´re': 'we are','we´ve': 'we have','we’d': 'we would','we’ll': 'we will','we’re': 'we are','we’ve': 'we have',\"what'll\": 'what will',\"what're\": 'what are',\"what's\": 'what is',\n \"what've\": 'what have','what,ll': 'what will','what,re': 'what are','what,s': 'what is','what,ve': 'what have','what;ll': 'what will','what;re': 'what are',\n 'what;s': 'what is','what;ve': 'what have','what´ll': 'what will',\n 'what´re': 'what are','what´s': 'what is','what´ve': 'what have','what’ll': 'what will','what’re': 'what are','what’s': 'what is',\n 'what’ve': 'what have',\"where'd\": 'where did',\"where's\": 'where 
is','where,d': 'where did','where,s': 'where is','where;d': 'where did',\n 'where;s': 'where is','where´d': 'where did','where´s': 'where is','where’d': 'where did','where’s': 'where is',\n \"who'll\": 'who will',\"who's\": 'who is','who,ll': 'who will','who,s': 'who is','who;ll': 'who will','who;s': 'who is',\n 'who´ll': 'who will','who´s': 'who is','who’ll': 'who will','who’s': 'who is',\"won't\": 'will not','won,t': 'will not','won;t': 'will not',\n 'won´t': 'will not','won’t': 'will not',\"wouldn't\": 'would not','wouldn,t': 'would not','wouldn;t': 'would not','wouldn´t': 'would not',\n 'wouldn’t': 'would not',\"you'd\": 'you would',\"you'll\": 'you will',\"you're\": 'you are','you,d': 'you would','you,ll': 'you will',\n 'you,re': 'you are','you;d': 'you would','you;ll': 'you will',\n 'you;re': 'you are','you´d': 'you would','you´ll': 'you will','you´re': 'you are','you’d': 'you would','you’ll': 'you will','you’re': 'you are',\n '´cause': 'because','’cause': 'because',\"you've\": \"you have\",\"could'nt\": 'could not',\n \"havn't\": 'have not',\"here’s\": \"here is\",'i\"\"m': 'i am',\"i'am\": 'i am',\"i'l\": \"i will\",\"i'v\": 'i have',\"wan't\": 'want',\"was'nt\": \"was not\",\"who'd\": \"who would\",\n \"who're\": \"who are\",\"who've\": \"who have\",\"why'd\": \"why would\",\"would've\": \"would have\",\"y'all\": \"you all\",\"y'know\": \"you know\",\"you.i\": \"you i\",\n \"your'e\": \"you are\",\"arn't\": \"are not\",\"agains't\": \"against\",\"c'mon\": \"common\",\"doens't\": \"does not\",'don\"\"t': \"do not\",\"dosen't\": \"does not\",\n \"dosn't\": \"does not\",\"shoudn't\": \"should not\",\"that'll\": \"that will\",\"there'll\": \"there will\",\"there're\": \"there are\",\n \"this'll\": \"this all\",\"u're\": \"you are\", \"ya'll\": \"you all\",\"you'r\": \"you are\",\"you’ve\": \"you have\",\"d'int\": \"did not\",\"did'nt\": \"did not\",\"din't\": \"did not\",\"dont't\": \"do not\",\"gov't\": \"government\",\n \"i'ma\": \"i am\",\"is'nt\": \"is not\",\"‘I\":'I',\n 'ᴀɴᴅ':'and','ᴛʜᴇ':'the','ʜᴏᴍᴇ':'home','ᴜᴘ':'up','ʙʏ':'by','ᴀᴛ':'at','…and':'and','civilbeat':'civil beat',\\\n 'TrumpCare':'Trump care','Trumpcare':'Trump care', 'OBAMAcare':'Obama care','ᴄʜᴇᴄᴋ':'check','ғᴏʀ':'for','ᴛʜɪs':'this','ᴄᴏᴍᴘᴜᴛᴇʀ':'computer',\\\n 'ᴍᴏɴᴛʜ':'month','ᴡᴏʀᴋɪɴɢ':'working','ᴊᴏʙ':'job','ғʀᴏᴍ':'from','Sᴛᴀʀᴛ':'start','gubmit':'submit','CO₂':'carbon dioxide','ғɪʀsᴛ':'first',\\\n 'ᴇɴᴅ':'end','ᴄᴀɴ':'can','ʜᴀᴠᴇ':'have','ᴛᴏ':'to','ʟɪɴᴋ':'link','ᴏғ':'of','ʜᴏᴜʀʟʏ':'hourly','ᴡᴇᴇᴋ':'week','ᴇɴᴅ':'end','ᴇxᴛʀᴀ':'extra',\\\n 'Gʀᴇᴀᴛ':'great','sᴛᴜᴅᴇɴᴛs':'student','sᴛᴀʏ':'stay','ᴍᴏᴍs':'mother','ᴏʀ':'or','ᴀɴʏᴏɴᴇ':'anyone','ɴᴇᴇᴅɪɴɢ':'needing','ᴀɴ':'an','ɪɴᴄᴏᴍᴇ':'income',\\\n 'ʀᴇʟɪᴀʙʟᴇ':'reliable','ғɪʀsᴛ':'first','ʏᴏᴜʀ':'your','sɪɢɴɪɴɢ':'signing','ʙᴏᴛᴛᴏᴍ':'bottom','ғᴏʟʟᴏᴡɪɴɢ':'following','Mᴀᴋᴇ':'make',\\\n 'ᴄᴏɴɴᴇᴄᴛɪᴏɴ':'connection','ɪɴᴛᴇʀɴᴇᴛ':'internet','financialpost':'financial post', 'ʜaᴠᴇ':' have ', 'ᴄaɴ':' can ', 'Maᴋᴇ':' make ', 'ʀᴇʟɪaʙʟᴇ':' reliable ', 'ɴᴇᴇᴅ':' need ',\n 'ᴏɴʟʏ':' only ', 'ᴇxᴛʀa':' extra ', 'aɴ':' an ', 'aɴʏᴏɴᴇ':' anyone ', 'sᴛaʏ':' stay ', 'Sᴛaʀᴛ':' start', 'SHOPO':'shop',\n }",
"_____no_output_____"
],
[
"def known_contractions(embed):\n known = []\n for contract in contraction_mapping:\n if contract in embed:\n known.append(contract)\n return known\n\nprint(\"- Known Contractions -\")\nprint(\" Glove :\")\nprint(known_contractions(embed_glove))\n# print(\" Crawl :\")\n# print(known_contractions(embed_crawl))",
"- Known Contractions -\n Glove :\n[\"'cause\", \"can't\", \"didn't\", \"doesn't\", \"don't\", \"i'd\", \"i'll\", \"i'm\", \"i've\", \"It's\", \"it's\", \"ma'am\", \"that's\", \"you'll\", \"you're\", 'you.i', \"c'mon\", \"d'int\"]\n"
],
[
"def clean_contractions(text, mapping):\n specials = [\"’\", \"‘\", \"´\", \"`\"]\n for s in specials:\n text = text.replace(s, \"'\")\n text = ' '.join([mapping[t] if t in mapping else t for t in text.split(\" \")])\n return text\n\ndf['treated_comment'] = df['lowered_comment'].apply(lambda x: clean_contractions(x, contraction_mapping))\n\nvocab = build_vocab(df['treated_comment'])\n\nprint(\"Glove : \")\noov_glove = check_coverage(vocab, embed_glove)\n# print(\"Crawl : \")\n# oov_paragram = check_coverage(vocab, embed_crawl)",
"Glove : \nFound embeddings for 13.51% of vocab\nFound embeddings for 90.40% of all text\n"
],
[
"punct = \"/-'?!.,#$%\\'()*+-/:;<=>@[\\\\]^_`{|}~\" + '\"\"“”’' + '∞θ÷α•à−β∅³π‘₹´°£€\\×™√²—–&'\n\ndef unknown_punct(embed, punct):\n unknown = ''\n for p in punct:\n if p not in embed:\n unknown += p\n unknown += ' '\n return unknown\n\nprint(\"Glove :\")\nprint(unknown_punct(embed_glove, punct))\n# print(\"Crawl :\")\n# print(unknown_punct(embed_crawl, punct))",
"Glove :\n“ ” ’ ∞ θ ÷ α • à − β ∅ ³ π ‘ ₹ ´ ° £ € × ™ √ ² — – \n"
],
[
"punct_mapping = {\"‘\": \"'\", \"₹\": \"e\", \"´\": \"'\", \"°\": \"\", \"€\": \"e\", \"™\": \"tm\", \"√\": \" sqrt \", \"×\": \"x\", \"²\": \"2\", \"—\": \"-\", \"–\": \"-\", \"’\": \"'\", \"_\": \"-\", \"`\": \"'\", '“': '\"', '”': '\"', '“': '\"', \"£\": \"e\", '∞': 'infinity', 'θ': 'theta', '÷': '/', 'α': 'alpha', '•': '.', 'à': 'a', '−': '-', 'β': 'beta', '∅': '', '³': '3', 'π': 'pi', }\n\ndef clean_special_chars(text, punct, mapping):\n for p in mapping:\n text = text.replace(p, mapping[p])\n for p in punct:\n text = text.replace(p, f' {p} ')\n specials = {'\\u200b': ' ', '…': ' ... ', '\\ufeff': '', 'करना': '', 'है': ''} # Other special characters that I have to deal with in last\n for s in specials:\n text = text.replace(s, specials[s])\n return text\n\ndf['treated_comment'] = df['treated_comment'].apply(lambda x: clean_special_chars(x, punct, punct_mapping))\nvocab = build_vocab(df['treated_comment'])\n\nprint(\"Glove : \")\noov_glove = check_coverage(vocab, embed_glove)\n# print(\"Crawl : \")\n# oov_paragram = check_coverage(vocab, embed_crawl)",
"Glove : \nFound embeddings for 54.21% of vocab\nFound embeddings for 99.73% of all text\n"
],
[
"oov_glove[:10]",
"_____no_output_____"
],
[
"mispell_dict = {'SB91':'senate bill','tRump':'trump','utmterm':'utm term','FakeNews':'fake news','Gʀᴇat':'great','ʙᴏᴛtoᴍ':'bottom','washingtontimes':'washington times','garycrum':'gary crum','htmlutmterm':'html utm term','RangerMC':'car','TFWs':'tuition fee waiver','SJWs':'social justice warrior','Koncerned':'concerned','Vinis':'vinys','Yᴏᴜ':'you','Trumpsters':'trump','Trumpian':'trump','bigly':'big league','Trumpism':'trump','Yoyou':'you','Auwe':'wonder','Drumpf':'trump','utmterm':'utm term','Brexit':'british exit','utilitas':'utilities','ᴀ':'a', '😉':'wink','😂':'joy','😀':'stuck out tongue', 'theguardian':'the guardian','deplorables':'deplorable', 'theglobeandmail':'the globe and mail', 'justiciaries': 'justiciary','creditdation': 'Accreditation','doctrne':'doctrine','fentayal': 'fentanyl','designation-': 'designation','CONartist' : 'con-artist','Mutilitated' : 'Mutilated','Obumblers': 'bumblers','negotiatiations': 'negotiations','dood-': 'dood','irakis' : 'iraki','cooerate': 'cooperate','COx':'cox','racistcomments':'racist comments','envirnmetalists': 'environmentalists',}",
"_____no_output_____"
],
[
"def correct_spelling(x, dic):\n for word in dic.keys():\n x = x.replace(word, dic[word])\n return x\n\ndf['treated_comment'] = df['treated_comment'].apply(lambda x: correct_spelling(x, mispell_dict))\n\nvocab = build_vocab(df['treated_comment'])\n\nprint(\"Glove : \")\noov_glove = check_coverage(vocab, embed_glove)\n# print(\"Crawl : \")\n# oov_paragram = check_coverage(vocab, embed_crawl)",
"Glove : \nFound embeddings for 54.21% of vocab\nFound embeddings for 99.73% of all text\n"
],
[
"train['comment_text'] = df['treated_comment'][:1804874]\ntest['comment_text'] = df['treated_comment'][1804874:]",
"_____no_output_____"
],
[
"def reduce_mem_usage(df):\n \"\"\" iterate through all the columns of a dataframe and modify the data type\n to reduce memory usage. \n \"\"\"\n start_mem = df.memory_usage().sum() / 1024**2\n print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))\n \n for col in df.columns:\n col_type = df[col].dtype\n \n if col_type != object:\n c_min = df[col].min()\n c_max = df[col].max()\n if str(col_type)[:3] == 'int':\n if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:\n df[col] = df[col].astype(np.int8)\n elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:\n df[col] = df[col].astype(np.int16)\n elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:\n df[col] = df[col].astype(np.int32)\n elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:\n df[col] = df[col].astype(np.int64) \n else:\n if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:\n df[col] = df[col].astype(np.float16)\n elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:\n df[col] = df[col].astype(np.float32)\n else:\n df[col] = df[col].astype(np.float64)\n else:\n df[col] = df[col].astype('category')\n\n end_mem = df.memory_usage().sum() / 1024**2\n print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))\n print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))\n \n return df",
"_____no_output_____"
],
[
"print('-' * 80)\nprint('train')\ntrain = reduce_mem_usage(train)\n\nprint('-' * 80)\nprint('test')\ntest = reduce_mem_usage(test)",
"--------------------------------------------------------------------------------\ntrain\nMemory usage of dataframe is 619.65 MB\nMemory usage after optimization is: 350.86 MB\nDecreased by 43.4%\n--------------------------------------------------------------------------------\ntest\nMemory usage of dataframe is 1.49 MB\nMemory usage after optimization is: 3.98 MB\nDecreased by -168.1%\n"
]
],
[
[
"Test the Difficulty of this Classification Tasks.\n(Borrowed from\n\nKernel: https://www.kaggle.com/shujian/test-the-difficulty-of-this-classification-tasks\n\nPaper: https://arxiv.org/abs/1811.01910\n\nCode: https://github.com/Wluper/edm)",
"_____no_output_____"
]
],
[
[
"!pip install edm",
"Collecting edm\r\n Downloading https://files.pythonhosted.org/packages/42/4b/ab24f5d58fa155a9cb9388681ec90fed70bc1ff4efea6b3964827354f601/edm-0.0.4.tar.gz\r\nRequirement already satisfied: numpy in /opt/conda/lib/python3.6/site-packages (from edm) (1.16.3)\r\nBuilding wheels for collected packages: edm\r\n Building wheel for edm (setup.py) ... \u001b[?25l-\b \bdone\r\n\u001b[?25h Stored in directory: /tmp/.cache/pip/wheels/f5/9b/f6/778ef88e921a1c1ef7b9d04d1af501a7014d89e941c3301c56\r\nSuccessfully built edm\r\nInstalling collected packages: edm\r\nSuccessfully installed edm-0.0.4\r\n\u001b[33mYou are using pip version 19.0.3, however version 19.1.1 is available.\r\nYou should consider upgrading via the 'pip install --upgrade pip' command.\u001b[0m\r\n"
],
[
"df = train.sample(frac=0.003)\nsents = df[\"comment_text\"].values\nlabels = df[\"target\"].values\nfrom edm import report\nprint(report.get_difficulty_report(sents, labels))",
"----> Building bag of words representations...\n[-------------------------------] : 5414 of 5415, 100.0% : Est. 0.0 mins Remaining\r\n----> Done.\n----> Getting difficulty metrics...\n----> Done.\n----> Getting generic statistics...\n----> Done.\n\n\nDataset Size 5415 -\nVocab Size 20377 -\nNumber of Classes 210 -\nMean Items Per Class 25.785714285714285 -\nMin. Items in a Class 1 \u001b[91mEXTREMELY LOW\u001b[0m\nAverage Sentence Length 318.3405355493998 -\nDistinct Words : Total Words 0.07180284082300002 \u001b[92mGOOD\u001b[0m\nClass Imbalance 1.8031130457723257 \u001b[91mVERY HIGH\u001b[0m\nClass Diversity 1.467011623272505 \u001b[94mSOMEWHAT HIGH\u001b[0m\nMax. Hellinger Similarity 0.9253487508471283 \u001b[91mVERY HIGH\u001b[0m\nMutual Information 0.038380264515979534 \u001b[92mGOOD\u001b[0m\nDifficulty 4.305656525230939 \u001b[93mHIGH\u001b[0m\n\n\n\n"
],
[
"train.to_pickle(\"train.pkl\")\ntest.to_pickle(\"test.pkl\")\ntrain.to_csv('train_cleaned.csv', index=None)\ntest.to_csv('test_cleaned.csv', index=None)",
"_____no_output_____"
]
],
[
[
"**To be continued...**",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ecf99bcc4d2a428ba9d162cf1da04c42650cb4a0 | 15,390 | ipynb | Jupyter Notebook | Lab2/convolutional_neural_networks.ipynb | mancinimassimiliano/DeepLearningLab | 775ab10f9e894ed542c498aae088701ea6528940 | [
"MIT"
] | 7 | 2019-04-01T09:33:00.000Z | 2022-01-24T15:30:21.000Z | Lab2/convolutional_neural_networks.ipynb | mancinimassimiliano/DeepLearningLab | 775ab10f9e894ed542c498aae088701ea6528940 | [
"MIT"
] | null | null | null | Lab2/convolutional_neural_networks.ipynb | mancinimassimiliano/DeepLearningLab | 775ab10f9e894ed542c498aae088701ea6528940 | [
"MIT"
] | 4 | 2019-04-08T07:54:18.000Z | 2021-04-14T10:19:28.000Z | 36.469194 | 625 | 0.505198 | [
[
[
"<a href=\"https://colab.research.google.com/github/mancinimassimiliano/DeepLearningLab/blob/master/convolutional_neural_networks.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Introduction\n\n## Lab2: Train a Convolutional Neural Network (CNN).\n\nIn this Lab session we will learn how to train a CNN from scratch for classifying MNIST digits.",
"_____no_output_____"
]
],
[
[
"# import necessary libraries\nimport torch\nimport torchvision\nfrom torchvision import transforms as T\nimport torch.nn.functional as F",
"_____no_output_____"
]
],
[
[
"### Define LeNet\n\n\n\nHere we are going to define our first CNN which is **LeNet** in this case. To construct a LeNet we will be using some convolutional layers followed by some fully-connected layers. The convolutional layers can be simply defined using `torch.nn.Conv2d` module of `torch.nn` package. Details can be found [here](https://pytorch.org/docs/stable/nn.html#conv2d). Moreover, we will use pooling operation to reduce the size of convolutional feature maps. For this case we are going to use `torch.nn.functional.max_pool2d`. Details about maxpooling can be found [here](https://pytorch.org/docs/stable/nn.html#max-pool2d)\n\nDifferently from our previous Lab, we will use a Rectified Linear Units (ReLU) as activation function with the help of `torch.nn.functional.relu`, replacing `torch.nn.Sigmoid`. Details about ReLU can be found [here](https://pytorch.org/docs/stable/nn.html#id26).",
"_____no_output_____"
]
],
[
[
"class LeNet(torch.nn.Module):\n def __init__(self):\n super(LeNet, self).__init__()\n \n # input channel = ?, output channels = ?, kernel size = ?\n # input image size = (?, ?), image output size = (?, ?)\n # TODO\n \n # input channel = ?, output channels = ?, kernel size = ?\n # input image size = (?, ?), output image size = (?, ?)\n # TODO\n \n # input dim = ? ( H x W x C), output dim = ?\n # TODO\n \n # input dim = ?, output dim = ?\n # TODO\n \n # input dim = ?, output dim = ?\n # TODO\n \n def forward(self, x):\n \n # TODO\n # Max Pooling with kernel size = ?\n # output size = (?, ?)\n # TODO\n \n # TODO\n # Max Pooling with kernel size = ?\n # output size = (?, ?)\n # TODO\n \n # flatten the feature maps into a long vector\n x = x.view(x.shape[0], -1)\n \n # TODO\n \n # TODO\n \n # TODO\n \n return x",
"_____no_output_____"
]
],
[
[
"### Define cost function",
"_____no_output_____"
]
],
[
[
"def get_cost_function():\n cost_function = torch.nn.CrossEntropyLoss()\n return cost_function",
"_____no_output_____"
]
],
[
[
"### Define the optimizer",
"_____no_output_____"
]
],
[
[
"def get_optimizer(net, lr, wd, momentum):\n optimizer = torch.optim.SGD(net.parameters(), lr=lr, weight_decay=wd, momentum=momentum)\n return optimizer",
"_____no_output_____"
]
],
[
[
"### Train and test functions",
"_____no_output_____"
]
],
[
[
"def test(net, data_loader, cost_function, device='cuda:0'):\n samples = 0.\n cumulative_loss = 0.\n cumulative_accuracy = 0.\n\n net.eval() # Strictly needed if network contains layers which has different behaviours between train and test\n with torch.no_grad():\n for batch_idx, (inputs, targets) in enumerate(data_loader):\n # Load data into GPU\n inputs = inputs.to(device)\n targets = targets.to(device)\n \n # Forward pass\n outputs = net(inputs)\n\n # Apply the loss\n loss = cost_function(outputs, targets)\n\n # Better print something\n samples+=inputs.shape[0]\n cumulative_loss += loss.item() # Note: the .item() is needed to extract scalars from tensors\n _, predicted = outputs.max(1)\n cumulative_accuracy += predicted.eq(targets).sum().item()\n\n return cumulative_loss/samples, cumulative_accuracy/samples*100\n\n\ndef train(net,data_loader,optimizer,cost_function, device='cuda:0'):\n samples = 0.\n cumulative_loss = 0.\n cumulative_accuracy = 0.\n\n \n net.train() # Strictly needed if network contains layers which has different behaviours between train and test\n for batch_idx, (inputs, targets) in enumerate(data_loader):\n # Load data into GPU\n inputs = inputs.to(device)\n targets = targets.to(device)\n \n # Forward pass\n outputs = net(inputs)\n\n # Apply the loss\n loss = cost_function(outputs,targets)\n\n # Reset the optimizer\n \n # Backward pass\n loss.backward()\n \n # Update parameters\n optimizer.step()\n \n optimizer.zero_grad()\n\n # Better print something, no?\n samples+=inputs.shape[0]\n cumulative_loss += loss.item()\n _, predicted = outputs.max(1)\n cumulative_accuracy += predicted.eq(targets).sum().item()\n\n return cumulative_loss/samples, cumulative_accuracy/samples*100",
"_____no_output_____"
]
],
[
[
"### Define the function that fetches a data loader that is then used during iterative training.\n\nWe will learn a new thing in this function as how to Normalize the inputs given to the network.\n\n***Why Normalization is needed***? \n\nTo have nice and stable training of the network it is recommended to normalize the network inputs between \\[-1, 1\\]. \n\n***How it can be done***? \n\nThis can be simply done using `torchvision.transforms.Normalize()` transform. Details can be found [here](https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Normalize).",
"_____no_output_____"
]
],
[
[
"def get_data(batch_size, test_batch_size=256):\n \n # Prepare data transformations and then combine them sequentially\n transform = list()\n transform.append(T.ToTensor()) # converts Numpy to Pytorch Tensor\n transform.append(T.Normalize(mean=[0.5], std=[0.5])) # Normalizes the Tensors between [-1, 1]\n transform = T.Compose(transform) # Composes the above transformations into one.\n\n # Load data\n full_training_data = torchvision.datasets.MNIST('./data', train=True, transform=transform, download=True) \n test_data = torchvision.datasets.MNIST('./data', train=False, transform=transform, download=True) \n \n\n # Create train and validation splits\n num_samples = len(full_training_data)\n training_samples = int(num_samples*0.5+1)\n validation_samples = num_samples - training_samples\n\n training_data, validation_data = torch.utils.data.random_split(full_training_data, [training_samples, validation_samples])\n\n # Initialize dataloaders\n train_loader = torch.utils.data.DataLoader(training_data, batch_size, shuffle=True)\n val_loader = torch.utils.data.DataLoader(validation_data, test_batch_size, shuffle=False)\n test_loader = torch.utils.data.DataLoader(test_data, test_batch_size, shuffle=False)\n \n return train_loader, val_loader, test_loader",
"_____no_output_____"
]
],
[
[
"### Wrapping everything up\n\nFinally, we need a main function which initializes everything + the needed hyperparameters and loops over multiple epochs (printing the results).",
"_____no_output_____"
]
],
[
[
"'''\nInput arguments\n batch_size: Size of a mini-batch\n device: GPU where you want to train your network\n weight_decay: Weight decay co-efficient for regularization of weights\n momentum: Momentum for SGD optimizer\n epochs: Number of epochs for training the network\n'''\n\ndef main(batch_size=128, \n device='cuda:0', \n learning_rate=0.01, \n weight_decay=0.000001, \n momentum=0.9, \n epochs=50):\n \n train_loader, val_loader, test_loader = get_data(batch_size)\n \n # TODO for defining LeNet-5 \n \n optimizer = get_optimizer(net, learning_rate, weight_decay, momentum)\n \n cost_function = get_cost_function()\n\n print('Before training:')\n train_loss, train_accuracy = test(net, train_loader, cost_function)\n val_loss, val_accuracy = test(net, val_loader, cost_function)\n test_loss, test_accuracy = test(net, test_loader, cost_function)\n\n print('\\t Training loss {:.5f}, Training accuracy {:.2f}'.format(train_loss, train_accuracy))\n print('\\t Validation loss {:.5f}, Validation accuracy {:.2f}'.format(val_loss, val_accuracy))\n print('\\t Test loss {:.5f}, Test accuracy {:.2f}'.format(test_loss, test_accuracy))\n print('-----------------------------------------------------')\n\n for e in range(epochs):\n train_loss, train_accuracy = train(net, train_loader, optimizer, cost_function)\n val_loss, val_accuracy = test(net, val_loader, cost_function)\n print('Epoch: {:d}'.format(e+1))\n print('\\t Training loss {:.5f}, Training accuracy {:.2f}'.format(train_loss, train_accuracy))\n print('\\t Validation loss {:.5f}, Validation accuracy {:.2f}'.format(val_loss, val_accuracy))\n print('-----------------------------------------------------')\n\n print('After training:')\n train_loss, train_accuracy = test(net, train_loader, cost_function)\n val_loss, val_accuracy = test(net, val_loader, cost_function)\n test_loss, test_accuracy = test(net, test_loader, cost_function)\n\n print('\\t Training loss {:.5f}, Training accuracy {:.2f}'.format(train_loss, train_accuracy))\n print('\\t Validation loss {:.5f}, Validation accuracy {:.2f}'.format(val_loss, val_accuracy))\n print('\\t Test loss {:.5f}, Test accuracy {:.2f}'.format(test_loss, test_accuracy))\n print('-----------------------------------------------------')",
"_____no_output_____"
]
],
[
[
"Lets train!",
"_____no_output_____"
]
],
[
[
"main()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |