Dataset schema (column : dtype : observed range, or class/sequence info):

hexsha : stringlengths : 40 to 40
size : int64 : 6 to 14.9M
ext : stringclasses : 1 value
lang : stringclasses : 1 value
max_stars_repo_path : stringlengths : 6 to 260
max_stars_repo_name : stringlengths : 6 to 119
max_stars_repo_head_hexsha : stringlengths : 40 to 41
max_stars_repo_licenses : sequence
max_stars_count : int64 : 1 to 191k
max_stars_repo_stars_event_min_datetime : stringlengths : 24 to 24
max_stars_repo_stars_event_max_datetime : stringlengths : 24 to 24
max_issues_repo_path : stringlengths : 6 to 260
max_issues_repo_name : stringlengths : 6 to 119
max_issues_repo_head_hexsha : stringlengths : 40 to 41
max_issues_repo_licenses : sequence
max_issues_count : int64 : 1 to 67k
max_issues_repo_issues_event_min_datetime : stringlengths : 24 to 24
max_issues_repo_issues_event_max_datetime : stringlengths : 24 to 24
max_forks_repo_path : stringlengths : 6 to 260
max_forks_repo_name : stringlengths : 6 to 119
max_forks_repo_head_hexsha : stringlengths : 40 to 41
max_forks_repo_licenses : sequence
max_forks_count : int64 : 1 to 105k
max_forks_repo_forks_event_min_datetime : stringlengths : 24 to 24
max_forks_repo_forks_event_max_datetime : stringlengths : 24 to 24
avg_line_length : float64 : 2 to 1.04M
max_line_length : int64 : 2 to 11.2M
alphanum_fraction : float64 : 0 to 1
cells : sequence
cell_types : sequence
cell_type_groups : sequence
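Each record below is one Jupyter Notebook file carrying the metadata above plus its serialized cells. The three derived content columns (avg_line_length, max_line_length, alphanum_fraction) are simple functions of the file text; a minimal sketch of how such metrics could be computed, assuming definitions that match the column names (the dataset's actual pipeline code is not shown here, so these formulas are plausible reconstructions, not its verified code):

```python
# Plausible reconstructions of the per-file metric columns above.
# Assumption: the dataset's exact definitions are not documented here.

def content_metrics(text: str) -> dict:
    lines = text.splitlines() or [""]
    return {
        "size": len(text.encode("utf-8")),                           # bytes -> int64
        "avg_line_length": sum(len(l) for l in lines) / len(lines),  # float64
        "max_line_length": max(len(l) for l in lines),               # int64
        "alphanum_fraction": (                                       # float64 in [0, 1]
            sum(ch.isalnum() for ch in text) / len(text) if text else 0.0
        ),
    }

print(content_metrics('print("hello world")\n'))
```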
hexsha: e7244cfa493a4ce706a0aec9399a3e74dbd9725c
size: 9,618
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: guide/08-using-geoprocessing-tools/advanced-concepts.ipynb
max_stars_repo_name: randybondsjr/arcgis-python-api
max_stars_repo_head_hexsha: 916c1d3e9a3a67497f69ca86421a16e320a3a807
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: 1,299
max_stars_repo_stars_event_min_datetime: 2016-06-12T15:58:47.000Z
max_stars_repo_stars_event_max_datetime: 2022-03-29T08:09:49.000Z
max_issues_repo_path: guide/08-using-geoprocessing-tools/advanced-concepts.ipynb
max_issues_repo_name: zhutf509/arcgis-python-api
max_issues_repo_head_hexsha: 0e16082cc2eb89ad45f306a96a186ee695f59041
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: 999
max_issues_repo_issues_event_min_datetime: 2016-08-10T13:46:15.000Z
max_issues_repo_issues_event_max_datetime: 2022-03-31T23:29:53.000Z
max_forks_repo_path: guide/08-using-geoprocessing-tools/advanced-concepts.ipynb
max_forks_repo_name: zhutf509/arcgis-python-api
max_forks_repo_head_hexsha: 0e16082cc2eb89ad45f306a96a186ee695f59041
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: 967
max_forks_repo_forks_event_min_datetime: 2016-06-12T06:03:41.000Z
max_forks_repo_forks_event_max_datetime: 2022-03-29T12:50:48.000Z
avg_line_length: 42.9375
max_line_length: 1,629
alphanum_fraction: 0.649199
cells:
[ [ [ "# Advanced concepts\n\nIn this section of the guide, we will observe advanced concepts such as configuring environment settings, using logging and exception handling while executing geoprocessing tools.\n\n## Geoprocessing environment\n\nGeoprocessing environment settings are additional settings that affect geoprocessing tools. These settings provide a powerful way to ensure geoprocessing is performed in a controlled environment where you decide things such as the processing extent that limits processing to a specific geographic area, a coordinate system for all output geodatasets, or whether Z or M values should be included with the geoprocessing results.\n\nOnly certain environment settings will be used by a given geoprocessing tool. To determine the environments that a tool will use, consult the tool's documentation.\n\nThe `arcgis.env` module has globals to store such environment settings that are common among all geoprocessing tools. The code snippet below sets the spatial reference of the output geometries to 4326 (WGS84) :", "_____no_output_____" ] ], [ [ "arcgis.env.out_spatial_reference = 4326", "_____no_output_____" ] ], [ [ "## Logging and error handling\n\nGeoprocessing tools log informative, warning and error messages using [logging facility](https://docs.python.org/2/library/logging.html) for Python. These messages include such information as the following:\n\n* When the operation started and ended\n* The parameter values used\n* General information about the operation's progress (information message)\n* Warnings of potential problems (warning message)\n* Errors that cause the tool to stop execution (error message)\n\nAll communication between tools and users is done with these log messages. Messages are categorized by severity and are logged at these different levels:\n\n<table><colgroup width=\"*\"></colgroup><colgroup width=\"*\"></colgroup><thead><tr><th colspan=\"1\">Log Level</th><th colspan=\"1\">Description</th></tr></thead><tbody class=\"align-middle\"><tr class=\"align-middle\"><td rowspan=\"1\" colspan=\"1\"><p id=\"GUID-ACBE6F11-DB3C-4303-8B8A-380A07DE16B3\">logging.INFO</p></td><td rowspan=\"1\" colspan=\"1\"><p id=\"GUID-FA78BBF7-744D-4F11-89F9-76811E225ADD\">An informative message is information about a tool execution. It is never used to indicate problems. Only general information, such as a tool's progress, what time a tool started or completed, output data characteristics, or tool results, is found in informative messages.</p></td></tr><tr class=\"align-middle\"><td rowspan=\"1\" colspan=\"1\"><p id=\"GUID-E96CF12A-F3FD-4EB8-B47F-F6B4360FD062\">logging.WARN</p></td><td rowspan=\"1\" colspan=\"1\"><p id=\"GUID-D776E225-477F-462C-8112-77F7E1A7967C\">Warning messages are generated when a tool experiences a situation that may cause a problem during its execution or when the result may not be what you expect. For example, defining a coordinate system for a dataset that already has a coordinate system defined generates a warning. 
You can take action when a warning returns, such as canceling the tool's execution or making another parameter choice.</p></td></tr><tr class=\"align-middle\"><td rowspan=\"1\" colspan=\"1\"><p id=\"GUID-EABB68CA-9063-4AD8-849A-4F3CCD02E7BB\">logging.ERROR</p></td><td rowspan=\"1\" colspan=\"1\"><p id=\"GUID-B44B04A0-42DB-4B3E-8B3A-95A4248F8451\">Error messages indicate a critical event that prevented a tool from executing.\nErrors are generated when one or more parameters have invalid\nvalues, or when a critical execution process or routine has failed.</p></td></tr></tbody></table>\n\nThe code snippet below sets the log level to logging.INFO, which will cause all messages of severity INFO or higher to be logged and displayed:", "_____no_output_____" ] ], [ [ "import logging\nimport sys\n\n# Create logger\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\n\n# Create STDERR handler\nhandler = logging.StreamHandler(sys.stderr)\n\n# Set STDERR handler as the only handler\nlogger.handlers = [handler]", "_____no_output_____" ] ], [ [ "### Verbose logging to standard output\n\nTo ease development, messages from geoprocessing tools are printed to stdout in addition to stderr. This behavior is controlled by the `arcgis.env.verbose` global and can be turned off by setting it to `False`. In any case, all geoprocessing messages remain available through the Python logging module.", "_____no_output_____" ] ], [ [ "arcgis.env.verbose = False", "_____no_output_____" ] ], [ [ "## Error handling\n\nGeoprocessing tools may encounter errors and fail for various reasons. The function for the tool throws a Python Exception to report such errors and failures. To handle the possibility of a tool failing to execute, wrap its invocation in a try/except block. (A consolidated sketch combining the environment, logging, and error-handling pieces follows this record.)\n\nThe code snippet below wraps the execution of the tool within a try/except block and reports an error if the tool fails:", "_____no_output_____" ] ], [ [ "geosurtools = import_toolbox('http://tps.geosur.info/arcgis/rest/services/Models/GeoSUR_ElevationDerivatives/GPServer')", "_____no_output_____" ], [ "try:\n geosurtools.slope_classificaiton()\nexcept Exception as e:\n print('The tool encountered an error')", "Submitted.\nExecuting...\nExecuting (Slope Classificaiton): SlopeClss2 \"Feature Set\" \"GMTED 30 arc-second (Median)\" Degree 3.0E-5 5,10,20,30,40,50\nStart Time: Sat Nov 26 03:26:53 2016\nExecuting (Slope Classificaiton): SlopeClss2 \"Feature Set\" \"GeoSUR_ElevationDerivatives\\GMTED 30 arc-second (Median)\" Degree 3.0E-5 5,10,20,30,40,50\nStart Time: Sat Nov 26 03:26:53 2016\nExecuting (TPS Slope Classificaiton): TPSSlopeClass2 \"Feature Set\" \"GeoSUR_ElevationDerivatives\\GMTED 30 arc-second (Median)\" Degree 3.0E-5 5,10,20,30,40,50\nStart Time: Sat Nov 26 03:26:54 2016\nRunning script TPSSlopeClass2...\n\nD:\\geosur_ags\\open\\gp\\sharedtools\\scripts\n\n\nD:\\geosur_ags\\open\\gp\\sharedtools\\scripts\\tps_slopeclss.xml\n\nmaxPixels: 120000000\n\nSpatial Reference Type: Geographic\nRaster Input: GeoSUR_ElevationDerivatives\\GMTED 30 arc-second (Median)\nCell Size x:0.008333 y:0.008333\nPixel Width: -1\nPixel Height: -1\nPixels to be processed: 1\n\n(1.00 - 120000000.00) / 120000000 \nFailed to process input extent. Verify that client has a defined coordinate system.\nFailed to process input extent.
Verify that client has a defined coordinate system.\nCompleted script TPSSlopeClass2...\nFailed to execute (TPS Slope Classificaiton).\nFailed to execute (TPS Slope Classificaiton).\nFailed at Sat Nov 26 03:26:54 2016 (Elapsed Time: 0.23 seconds)\nFailed to execute (Slope Classificaiton).\nFailed to execute (Slope Classificaiton).\nFailed at Sat Nov 26 03:26:54 2016 (Elapsed Time: 0.33 seconds)\nFailed to execute (Slope Classificaiton).\nFailed to execute (Slope Classificaiton).\nFailed at Sat Nov 26 03:26:54 2016 (Elapsed Time: 0.36 seconds)\nFailed.\n" ] ], [ [ "## Creating geoprocessing tools\n\nRefer to [this documentation](http://pro.arcgis.com/en/pro-app/arcpy/geoprocessing_and_python/a-quick-tour-of-creating-tools-in-python.htm) for a quick tour of creating geoprocessing tools using Python.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
hexsha: e7245b91b9895839dcadd9c701c0d619d735e1a4
size: 43,490
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: WebScraping_Steam.ipynb
max_stars_repo_name: aatirkirmani/WebscrapingSteamReviews
max_stars_repo_head_hexsha: 7f0084bedbe093afb9e223be9eb13567a00ccf9d
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: WebScraping_Steam.ipynb
max_issues_repo_name: aatirkirmani/WebscrapingSteamReviews
max_issues_repo_head_hexsha: 7f0084bedbe093afb9e223be9eb13567a00ccf9d
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: WebScraping_Steam.ipynb
max_forks_repo_name: aatirkirmani/WebscrapingSteamReviews
max_forks_repo_head_hexsha: 7f0084bedbe093afb9e223be9eb13567a00ccf9d
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 44.287169
max_line_length: 303
alphanum_fraction: 0.465969
cells:
[ [ [ "\"\"\"This notebook will run on cloud notebooks only. For running \nthis locally, watch the comments below in the cell after the \nselenium installation cell.\"\"\"\nprint(\"Final Notebook\")", "Final Notebook\n" ], [ "#Importing the required libraries\n\n#Importing the requests library to make the HTTP get() requests.\nimport requests\n\n#To create and work with DataFrame\nimport pandas as pd\n\n#Library to scrape the HTML content\nfrom bs4 import BeautifulSoup\n\n#Importing time library, (to be used much later)\nimport time\n\n#Importing the regular expression library\nimport re\nfrom re import search", "_____no_output_____" ], [ "#Setting the main url\nmain_url = \"https://store.steampowered.com/games/\"\n\n#To get a response object from the URL\nresponse = requests.get(main_url)\n\n#Return the response object(html) as text\nwebsite_html = response.text\n\n#Creating a BeautifulSoup object, and using 'html.parser' to work with html text.\nsoup = BeautifulSoup(website_html, 'html.parser')", "_____no_output_____" ], [ "\"\"\"Getting all the names of the games and their respective links present on \nthe main_url, However it contains all the html tags present in between\"\"\"\n\nnames_get = soup.find_all(name = \"div\", class_ = \"tab_item_name\")\nlinks_get = soup.find_all(name = \"a\", class_ = \"tab_item\")", "_____no_output_____" ], [ "#Creating empty list of names and lists\nnames = []\nlinks = []\n\n\"\"\"Several links on the main page are for bundles and do not contain\nreviews and other data, so filtering them out.\nThese bundles are not individual games, rather a bundle of games'new \nand previous versions along with MODs and packs.\nFor individual games \"\".com/app\" is present in the link\"\"\"\n\nsubstr = str(\".com/app\")\n\n\"\"\"Using search() from re library to include links \nand names of individual games only.\"\"\"\n\nfor name, link in zip(names_get,links_get):\n if search(substr, str(link)):\n names.append(name.getText())\n links.append(link.get(\"href\"))", "_____no_output_____" ], [ "#Saving data for top 5 only as per project requirements.\n\ntop_5_links = links[:5]\ntop_5_names = names[:5]", "_____no_output_____" ], [ "#Creating empty list for number_od_positive_reviews developers and publishers\nnum_pos_rev = []\ndevelopers = []\npublishers = []\n\n\"\"\"Creating game_req dictionary which will contain requirement for every game\n requirements dictionary will be the value for each game in the game_req and\n will be intitalized to emplty dictionary in every iteration of loop\"\"\"\nrequirements = {}\ngame_req = {}\n\n\nfor game_name, link in zip(top_5_names, top_5_links):\n\n \"\"\"Opening each link in the top_5_links and creating a \n BS object for each of the links\"\"\"\n\n game_response = requests.get(link)\n game_page_html = game_response.text\n game_soup = BeautifulSoup(game_page_html, 'html.parser')\n\n\n \n #Locating postive reviews\n rev_loc = game_soup.find(name = \"span\", class_ = \"responsive_hidden\")\n\n \"\"\"Creating an empty string to contain the number_of_postive_reviews\n Since the postive reviews are enclosed in paratheses, the loop will iterate\n the text and concatenate the digits only.\n The try-except has been frequently used here to account for errors\n arising out of absense of data for some games.\"\"\"\n rev_number = \"\"\n try:\n for num in rev_loc.getText():\n if num.isdigit():\n rev_number+=num\n except:\n rev_number = \"\"\n num_pos_rev.append(rev_number)\n\n\n\n \"\"\"Finding the developer name for each game\"\"\"\n try:\n 
dev_loc = game_soup.find(name = \"div\", class_ = \"summary column\", id = \"developers_list\")\n dev_find = dev_loc.find(\"a\")\n dev = dev_find.getText()\n developers.append(dev)\n except:\n developers.append(\"\")\n\n\n\n \"\"\"Finding the publisher name for each game\"\"\"\n try:\n pub_loc = game_soup.find(name = \"div\", class_ = \"dev_row\")\n pub_find = pub_loc.find(\"a\")\n pub = pub_find.getText()\n publishers.append(pub)\n except:\n publishers.append(\"\")\n\n\n\n \"\"\"Finding the requirements for each game\"\"\"\n try:\n req_loc = game_soup.find(name = \"div\", class_ = \"sysreq_contents\")\n req_find = req_loc.find(\"ul\")\n except:\n pass\n\n \"\"\"req_cat will contain the categories of requirements, \n for example: OS, DirectX. Every game has different\n categories of requirements listed; some have soundcard,\n additional notes and much more\"\"\"\n req_cat = []\n\n \"\"\"The req_val will contain the corresponding value for each category of\n the requirements\"\"\"\n req_val = []\n\n #Locating the requirements\n for req_text in req_find.find_all(\"li\"):\n \n #Emptying the requirements dictionary as it is updated for each game\n requirements = {}\n\n\n req_cat.append(req_text.text.split(\":\")[0])\n \"\"\"For some games, the value for a particular requirement category is absent;\n for example, for some games the OS is not listed. Therefore using the try-except\n to account for errors arising out of absence of a value.\"\"\"\n try:\n req_val.append(req_text.text.split(\":\")[1])\n except:\n req_val.append(\"\")\n \n\n for key, val in zip(req_cat, req_val):\n requirements[key] = val\n \n #Updating requirements for each game into the game_req dictionary\n game_req[game_name] = requirements", "_____no_output_____" ], [ "\"\"\"Creating an empty dataframe and populating it with the data\nscraped till now\"\"\"\ntemp_data_1 = pd.DataFrame()\ntemp_data_1[\"Name\"] = top_5_names\ntemp_data_1[\"Link\"] = top_5_links\ntemp_data_1[\"Developer\"] = developers\ntemp_data_1[\"Publisher\"] = publishers\ntemp_data_1[\"Number of Positive Reviews\"] = num_pos_rev", "_____no_output_____" ], [ "\"\"\"Created a dataframe for game_req and saved its transpose in game_req_df.\nTransposed the dataframe to be able to merge with the previously created \ndataframe (temp_data_1)\"\"\"\ngame_req_df = pd.DataFrame(game_req).T\ngame_req_df.reset_index(inplace = True)\ngame_req_df.rename(columns={\"index\":\"Name\"}, inplace = True)\n\n#Merged dataframe saved as temp_data_2\ntemp_data_2 = temp_data_1.merge(game_req_df, how = \"left\", on = \"Name\")", "_____no_output_____" ], [ "#data without reviews\ntemp_data_2", "_____no_output_____" ], [ "#Installing selenium, chromium and its driver\n\"\"\"The reason for using selenium is that scraping the reviews is not\npossible using BeautifulSoup only, as BS can only get static content from the \nwebsites. On the other hand, Selenium provides a way to scrape the dynamic \ncontent.
Here reviews are pulled from the profiles of each user and displayed \non the game page\"\"\"\n!apt update\n!apt install chromium-chromedriver\n!pip install selenium\n\nfrom selenium import webdriver\n\n# Setting option to headless to be able to use from Colab\noptions = webdriver.ChromeOptions()\noptions.add_argument('--headless')\noptions.add_argument('--no-sandbox')\noptions.add_argument('--disable-dev-shm-usage')", "\u001b[33m\r0% [Working]\u001b[0m\r \rIgn:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease\n\u001b[33m\r0% [Connecting to archive.ubuntu.com] [Connecting to security.ubuntu.com (91.18\u001b[0m\r \rIgn:2 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease\n\u001b[33m\r0% [Connecting to archive.ubuntu.com] [Connecting to security.ubuntu.com (91.18\u001b[0m\r \rGet:3 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release [697 B]\nHit:4 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release\nGet:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release.gpg [836 B]\nGet:6 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B]\nGet:7 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]\nHit:9 http://archive.ubuntu.com/ubuntu bionic InRelease\nGet:10 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB]\nGet:11 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]\nIgn:12 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages\nGet:12 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages [695 kB]\nGet:13 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ Packages [62.9 kB]\nHit:14 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease\nGet:15 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]\nHit:16 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease\nGet:17 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [2,258 kB]\nGet:18 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB]\nGet:19 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [26.7 kB]\nGet:20 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1,420 kB]\nGet:21 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,786 kB]\nGet:22 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [505 kB]\nGet:23 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [39.5 kB]\nGet:24 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [537 kB]\nGet:25 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,694 kB]\nGet:26 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2,194 kB]\nGet:27 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [914 kB]\nGet:28 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [44.1 kB]\nFetched 13.5 MB in 4s (3,090 kB/s)\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\n94 packages can be upgraded. Run 'apt list --upgradable' to see them.\nReading package lists... Done\nBuilding dependency tree \nReading state information... 
Done\nThe following additional packages will be installed:\n chromium-browser chromium-browser-l10n chromium-codecs-ffmpeg-extra\nSuggested packages:\n webaccounts-chromium-extension unity-chromium-extension\nThe following NEW packages will be installed:\n chromium-browser chromium-browser-l10n chromium-chromedriver\n chromium-codecs-ffmpeg-extra\n0 upgraded, 4 newly installed, 0 to remove and 94 not upgraded.\nNeed to get 86.0 MB of archives.\nAfter this operation, 298 MB of additional disk space will be used.\nGet:1 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 chromium-codecs-ffmpeg-extra amd64 91.0.4472.101-0ubuntu0.18.04.1 [1,124 kB]\nGet:2 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 chromium-browser amd64 91.0.4472.101-0ubuntu0.18.04.1 [76.1 MB]\nGet:3 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 chromium-browser-l10n all 91.0.4472.101-0ubuntu0.18.04.1 [3,937 kB]\nGet:4 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 chromium-chromedriver amd64 91.0.4472.101-0ubuntu0.18.04.1 [4,837 kB]\nFetched 86.0 MB in 5s (16.0 MB/s)\nSelecting previously unselected package chromium-codecs-ffmpeg-extra.\n(Reading database ... 160837 files and directories currently installed.)\nPreparing to unpack .../chromium-codecs-ffmpeg-extra_91.0.4472.101-0ubuntu0.18.04.1_amd64.deb ...\nUnpacking chromium-codecs-ffmpeg-extra (91.0.4472.101-0ubuntu0.18.04.1) ...\nSelecting previously unselected package chromium-browser.\nPreparing to unpack .../chromium-browser_91.0.4472.101-0ubuntu0.18.04.1_amd64.deb ...\nUnpacking chromium-browser (91.0.4472.101-0ubuntu0.18.04.1) ...\nSelecting previously unselected package chromium-browser-l10n.\nPreparing to unpack .../chromium-browser-l10n_91.0.4472.101-0ubuntu0.18.04.1_all.deb ...\nUnpacking chromium-browser-l10n (91.0.4472.101-0ubuntu0.18.04.1) ...\nSelecting previously unselected package chromium-chromedriver.\nPreparing to unpack .../chromium-chromedriver_91.0.4472.101-0ubuntu0.18.04.1_amd64.deb ...\nUnpacking chromium-chromedriver (91.0.4472.101-0ubuntu0.18.04.1) ...\nSetting up chromium-codecs-ffmpeg-extra (91.0.4472.101-0ubuntu0.18.04.1) ...\nSetting up chromium-browser (91.0.4472.101-0ubuntu0.18.04.1) ...\nupdate-alternatives: using /usr/bin/chromium-browser to provide /usr/bin/x-www-browser (x-www-browser) in auto mode\nupdate-alternatives: using /usr/bin/chromium-browser to provide /usr/bin/gnome-www-browser (gnome-www-browser) in auto mode\nSetting up chromium-chromedriver (91.0.4472.101-0ubuntu0.18.04.1) ...\nSetting up chromium-browser-l10n (91.0.4472.101-0ubuntu0.18.04.1) ...\nProcessing triggers for man-db (2.8.3-2ubuntu0.1) ...\nProcessing triggers for hicolor-icon-theme (0.17-2) ...\nProcessing triggers for mime-support (3.60ubuntu1) ...\nProcessing triggers for libc-bin (2.27-3ubuntu1.2) ...\n/sbin/ldconfig.real: /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link\n\nCollecting selenium\n Downloading selenium-3.141.0-py2.py3-none-any.whl (904 kB)\n\u001b[K |████████████████████████████████| 904 kB 8.6 MB/s \n\u001b[?25hRequirement already satisfied: urllib3 in /usr/local/lib/python3.7/dist-packages (from selenium) (1.24.3)\nInstalling collected packages: selenium\nSuccessfully installed selenium-3.141.0\n" ], [ "#To run locally, set executable_path to the path of chrome driver on local machine\ndriver = webdriver.Chrome(options=options)", "_____no_output_____" ], [ "#Creating a dictionary to contain reviews for each game\ngame_reviews = {}\nfor name, 
link in zip(top_5_names,top_5_links):\n reviews = []\n\n #Using selenium driver to obtain the webpage content\n driver.get(link)\n \"\"\"Using selenium to scroll down slowly to the document end.\n The loading of reviews takes a bit of time and to make sure they are\n loaded, the page is scrolled down slowly, using time.sleep().\n (A standalone sketch of this scroll-and-parse pattern follows this record.)\"\"\"\n driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n time.sleep(5)\n\n #Obtaining the page source using the webdriver.\n html = driver.page_source\n\n #Creating a BS object\n soup = BeautifulSoup(html)\n\n #Finding the reviews that are located in review_box\n rb = soup.find_all(name = \"div\", class_ = \"review_box\")\n\n #Using a counter 'c' to obtain only 10 reviews\n c = 0\n for i in rb:\n rev = i.find(name = \"div\", class_ = \"content\")\n\n #Using try-except to account for errors where reviews are not present\n try:\n reviews.append(str(rev.text).replace(\"\\t\",\"\").replace(\"\\n\",\"\"))\n except:\n reviews.append(str(\"Null\"))\n c = c + 1\n if c == 10:\n break\n #Updating reviews for each game in the game_reviews dictionary\n game_reviews[name] = reviews", "_____no_output_____" ], [ "\"\"\"Since some games contain fewer than 10 reviews, \n Null is imputed for the rest, to make dataframe creation easy.\nFor example, if a game has 4 reviews, 6 Nulls are imputed, and so on.\"\"\"\n\nfor key, val in game_reviews.items():\n missing_num = 10 - len(val)\n if missing_num > 0:\n while missing_num > 0:\n val.append(\"Null\")\n missing_num = missing_num - 1", "_____no_output_____" ], [ "\"\"\"Creating a dataframe temp_data_3 which contains reviews for each \ngame. Each review constitutes one column.\"\"\"\ntemp_data_3 = pd.DataFrame(game_reviews).T\ntemp_data_3.reset_index(inplace = True)\ntemp_data_3.rename(columns={\"index\":\"Name\", 0:\"Review 1\", \n 1:\"Review 2\", 2:\"Review 3\", 3: \"Review 4\", \n 4:\"Review 5\", 5:\"Review 6\",6:\"Review 7\",\n 7:\"Review 8\",8:\"Review 9\",\n 9:\"Review 10\"}, inplace = True)", "_____no_output_____" ], [ "\"\"\"The dataframe created in the previous step is merged with temp_data_2,\nwhich was created earlier and contains the rest of the required data.\nThe 'data' dataframe is the final dataframe required\"\"\"\ndata = temp_data_2.merge(temp_data_3, how = \"left\", on = \"Name\")", "_____no_output_____" ], [ "\"\"\"If \"Requires a 64-bit processor and operating system\" is present in the columns,\nit is an additional requirement for some games. If a game has no such requirement,\nNaN is present in the rows for such games. For games which have this requirement,\nthere is an empty cell because this is only text written in the requirement\nbox with no value. In the data cleaning steps, we can replace empty cells with 1s\nand NaNs with 0s\"\"\"\ndata", "_____no_output_____" ], [ "data.to_csv(\"/content/game_data.csv\")", "_____no_output_____" ], [ "\"\"\"Additional Notes:\nThe code is scalable and I've successfully scraped the data for all the \ngames (around 60) on the page; we can use a loop to iterate through more \npages.\"\"\"\n", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
hexsha: e72466ab6770cc94ccfd22faf122ce94b3912f5a
size: 97,653
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: notebooks/MNIST.ipynb
max_stars_repo_name: Acedev003/digit_recognizer
max_stars_repo_head_hexsha: bcf00cb2c7ed3bda57195f6c83fb4e32746ced2a
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: notebooks/MNIST.ipynb
max_issues_repo_name: Acedev003/digit_recognizer
max_issues_repo_head_hexsha: bcf00cb2c7ed3bda57195f6c83fb4e32746ced2a
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: notebooks/MNIST.ipynb
max_forks_repo_name: Acedev003/digit_recognizer
max_forks_repo_head_hexsha: bcf00cb2c7ed3bda57195f6c83fb4e32746ced2a
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 264.642276
max_line_length: 43,657
alphanum_fraction: 0.903925
cells:
[ [ [ "###Import tensorflow and download MNIST data", "_____no_output_____" ] ], [ [ "import tensorflow as tf", "_____no_output_____" ], [ "data = tf.keras.datasets.mnist\n\n(train,train_labels),(test,test_labels) = data.load_data()", "_____no_output_____" ] ], [ [ "####Apply data augmentation using ImageDataGenerator and plot few samples ", "_____no_output_____" ] ], [ [ "data_gen = tf.keras.preprocessing.image.ImageDataGenerator(\n rotation_range=10,\n width_shift_range=0.2,\n height_shift_range=0.2,\n shear_range=0.2,\n zoom_range=0.2,\n rescale=1/255\n )\n\ntrain_gen = data_gen.flow(train.reshape(train.shape[0],28,28,1),train_labels,)\ntest_gen = data_gen.flow(test.reshape(test.shape[0],28,28,1),test_labels)", "_____no_output_____" ], [ "import math\nimport matplotlib.pyplot as plt\n\n\nt = next(train_gen)\n\nimg_count = len(t[0])\nprint(f\"{img_count} images in batch 1 of training data \\n\\n\" )\n\n\nfig = plt.figure(figsize=(10, 10))\ncolumns = int(math.sqrt(img_count))\nrows = int(math.sqrt(img_count))\nfor i in range(1, columns*rows +1):\n img = t[0][i-1].reshape(28,28)\n fig.add_subplot(rows, columns, i)\n plt.imshow(img)\n plt.title(str(t[1][i-1]))\n plt.axis(\"off\")\nplt.show()", "32 images in batch 1 of training data \n\n\n" ] ], [ [ "#### Define layers, compile and then fit model", "_____no_output_____" ] ], [ [ "layers = [\n tf.keras.layers.Conv2D(64,(3,3),input_shape=(28,28,1),activation = tf.nn.relu),\n tf.keras.layers.MaxPool2D(2,2),\n tf.keras.layers.Conv2D(64,(3,3),activation = tf.nn.relu),\n tf.keras.layers.MaxPool2D(2,2),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(units = 512,activation = tf.nn.relu),\n tf.keras.layers.Dense(units = 10, activation = tf.nn.softmax)\n ]", "_____no_output_____" ], [ "model = tf.keras.models.Sequential(layers)", "_____no_output_____" ], [ "model.summary()", "Model: \"sequential_1\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n conv2d_2 (Conv2D) (None, 26, 26, 64) 640 \n \n max_pooling2d_2 (MaxPooling (None, 13, 13, 64) 0 \n 2D) \n \n conv2d_3 (Conv2D) (None, 11, 11, 64) 36928 \n \n max_pooling2d_3 (MaxPooling (None, 5, 5, 64) 0 \n 2D) \n \n flatten_1 (Flatten) (None, 1600) 0 \n \n dense_2 (Dense) (None, 512) 819712 \n \n dense_3 (Dense) (None, 10) 5130 \n \n=================================================================\nTotal params: 862,410\nTrainable params: 862,410\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001), loss = 'sparse_categorical_crossentropy' , metrics=['accuracy'])", "_____no_output_____" ], [ "early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',patience=5)", "_____no_output_____" ], [ "history = model.fit(train_gen,epochs=25,validation_data=test_gen,verbose = 1,callbacks=[early_stop])", "_____no_output_____" ] ], [ [ "#### Plot train and validation accuracies", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\nloss = history.history['loss']\nval_loss = history.history['val_loss'] \naccuracy = history.history['accuracy']\nval_accuracy = history.history['val_accuracy'] \n\nplt.figure(figsize=(14, 4))\n\nplt.subplot(1, 2, 1)\nplt.title('Loss')\nplt.xlabel('Epoch')\nplt.ylabel('Loss')\nplt.plot(loss, label='Training set')\nplt.plot(val_loss, label='Test set', linestyle='--')\n\nplt.legend()\nplt.grid(linestyle='--', linewidth=1, 
alpha=0.5)\n\nplt.subplot(1, 2, 2)\nplt.title('Accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.plot(accuracy, label='Training set')\nplt.plot(val_accuracy, label='Test set', linestyle='--')\n\nplt.legend()\nplt.grid(linestyle='--', linewidth=1, alpha=0.5)\nplt.show()", "_____no_output_____" ] ], [ [ "#### Save model in Keras format, convert to a tfjs model and zip the files", "_____no_output_____" ] ], [ [ "# Save in HDF5 format; a reload/inference sketch follows this record\nmodel.save(\"model.h5\")", "_____no_output_____" ], [ "!pip install tensorflowjs\n!tensorflowjs_converter --input_format keras model.h5 model", "_____no_output_____" ], [ "!zip model.zip model/*", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
hexsha: e7248817416d8c367e7b357e096cf4b53cce752a
size: 205,631
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Resources/csv-to-html.ipynb
max_stars_repo_name: mattbjensen/Web-Design-Challenge
max_stars_repo_head_hexsha: e9b92297c1ceb6731bd0237f5951af4b5ea0782b
max_stars_repo_licenses: [ "ADSL" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: Resources/csv-to-html.ipynb
max_issues_repo_name: mattbjensen/Web-Design-Challenge
max_issues_repo_head_hexsha: e9b92297c1ceb6731bd0237f5951af4b5ea0782b
max_issues_repo_licenses: [ "ADSL" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Resources/csv-to-html.ipynb
max_forks_repo_name: mattbjensen/Web-Design-Challenge
max_forks_repo_head_hexsha: e9b92297c1ceb6731bd0237f5951af4b5ea0782b
max_forks_repo_licenses: [ "ADSL" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 30.226518
max_line_length: 54
alphanum_fraction: 0.277215
cells:
[ [ [ "import pandas as pd", "_____no_output_____" ], [ "# Read the csv file in\ndf = pd.read_csv( 'cities.csv')", "_____no_output_____" ], [ "# Save to file\ndf.to_html( 'data.html', index=False)", "_____no_output_____" ], [ "# Assign to string\ntable = df.to_html()\nprint( table)", "<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>City</th>\n <th>Lat</th>\n <th>Lng</th>\n <th>Max Temp</th>\n <th>Humidity</th>\n <th>Cloudiness</th>\n <th>Wind Speed</th>\n <th>Country</th>\n <th>Date Time</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>Äänekoski</td>\n <td>62.6946</td>\n <td>25.8180</td>\n <td>49.8</td>\n <td>90</td>\n <td>99</td>\n <td>3.2</td>\n <td>FI</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>1</th>\n <td>Āmli</td>\n <td>20.2833</td>\n <td>73.0167</td>\n <td>83.4</td>\n <td>81</td>\n <td>7</td>\n <td>5.1</td>\n <td>IN</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>2</th>\n <td>Abu Dhabi</td>\n <td>24.4667</td>\n <td>54.3667</td>\n <td>87.8</td>\n <td>84</td>\n <td>0</td>\n <td>8.3</td>\n <td>AE</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>3</th>\n <td>Acarí</td>\n <td>-15.4311</td>\n <td>-74.6158</td>\n <td>71.5</td>\n <td>61</td>\n <td>98</td>\n <td>4.5</td>\n <td>PE</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>4</th>\n <td>Acaraú</td>\n <td>-2.8856</td>\n <td>-40.1200</td>\n <td>79.7</td>\n <td>74</td>\n <td>11</td>\n <td>14.9</td>\n <td>BR</td>\n <td>6/3/2021 22:24</td>\n </tr>\n <tr>\n <th>5</th>\n <td>Afytos</td>\n <td>40.1000</td>\n <td>23.4333</td>\n <td>64.0</td>\n <td>70</td>\n <td>81</td>\n <td>6.9</td>\n <td>GR</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>6</th>\n <td>Aginskoye</td>\n <td>51.1000</td>\n <td>114.5300</td>\n <td>52.7</td>\n <td>64</td>\n <td>49</td>\n <td>5.8</td>\n <td>RU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>7</th>\n <td>Ahipara</td>\n <td>-35.1667</td>\n <td>173.1667</td>\n <td>58.6</td>\n <td>64</td>\n <td>17</td>\n <td>1.8</td>\n <td>NZ</td>\n <td>6/3/2021 22:24</td>\n </tr>\n <tr>\n <th>8</th>\n <td>Airai</td>\n <td>-8.9266</td>\n <td>125.4092</td>\n <td>57.6</td>\n <td>90</td>\n <td>44</td>\n <td>3.2</td>\n <td>TL</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>9</th>\n <td>Aklavik</td>\n <td>68.2191</td>\n <td>-135.0107</td>\n <td>46.4</td>\n <td>71</td>\n <td>75</td>\n <td>9.2</td>\n <td>CA</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>10</th>\n <td>Albany</td>\n <td>42.6001</td>\n <td>-73.9662</td>\n <td>69.3</td>\n <td>80</td>\n <td>96</td>\n <td>2.0</td>\n <td>US</td>\n <td>6/3/2021 22:22</td>\n </tr>\n <tr>\n <th>11</th>\n <td>Albertville</td>\n <td>34.2676</td>\n <td>-86.2089</td>\n <td>77.7</td>\n <td>61</td>\n <td>40</td>\n <td>4.6</td>\n <td>US</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>12</th>\n <td>Alghero</td>\n <td>40.5589</td>\n <td>8.3181</td>\n <td>66.3</td>\n <td>94</td>\n <td>15</td>\n <td>2.3</td>\n <td>IT</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>13</th>\n <td>Alice Springs</td>\n <td>-23.7000</td>\n <td>133.8833</td>\n <td>53.2</td>\n <td>82</td>\n <td>90</td>\n <td>8.4</td>\n <td>AU</td>\n <td>6/3/2021 22:23</td>\n </tr>\n <tr>\n <th>14</th>\n <td>Alofi</td>\n <td>-19.0595</td>\n <td>-169.9187</td>\n <td>76.9</td>\n <td>94</td>\n <td>90</td>\n <td>5.8</td>\n <td>NU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>15</th>\n <td>Aloleng</td>\n <td>16.1307</td>\n <td>119.7824</td>\n <td>81.3</td>\n <td>79</td>\n <td>97</td>\n <td>11.7</td>\n <td>PH</td>\n <td>6/3/2021 22:27</td>\n 
</tr>\n <tr>\n <th>16</th>\n <td>Alta</td>\n <td>69.9689</td>\n <td>23.2717</td>\n <td>51.1</td>\n <td>66</td>\n <td>7</td>\n <td>3.4</td>\n <td>NO</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>17</th>\n <td>Alta Floresta</td>\n <td>-9.8756</td>\n <td>-56.0861</td>\n <td>76.1</td>\n <td>65</td>\n <td>6</td>\n <td>4.3</td>\n <td>BR</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>18</th>\n <td>Altay</td>\n <td>47.8667</td>\n <td>88.1167</td>\n <td>67.9</td>\n <td>43</td>\n <td>48</td>\n <td>3.5</td>\n <td>CN</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>19</th>\n <td>Alyangula</td>\n <td>-13.8483</td>\n <td>136.4192</td>\n <td>67.9</td>\n <td>100</td>\n <td>51</td>\n <td>9.6</td>\n <td>AU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>20</th>\n <td>Amapá</td>\n <td>1.0000</td>\n <td>-52.0000</td>\n <td>73.9</td>\n <td>98</td>\n <td>13</td>\n <td>2.2</td>\n <td>BR</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>21</th>\n <td>Amga</td>\n <td>60.8953</td>\n <td>131.9608</td>\n <td>64.8</td>\n <td>49</td>\n <td>14</td>\n <td>1.1</td>\n <td>RU</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>22</th>\n <td>Anadyr</td>\n <td>64.7500</td>\n <td>177.4833</td>\n <td>37.0</td>\n <td>60</td>\n <td>20</td>\n <td>13.0</td>\n <td>RU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>23</th>\n <td>Anchorage</td>\n <td>61.2181</td>\n <td>-149.9003</td>\n <td>55.5</td>\n <td>64</td>\n <td>90</td>\n <td>5.8</td>\n <td>US</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>24</th>\n <td>Ancud</td>\n <td>-41.8697</td>\n <td>-73.8203</td>\n <td>50.9</td>\n <td>90</td>\n <td>100</td>\n <td>10.6</td>\n <td>CL</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>25</th>\n <td>Antofagasta</td>\n <td>-23.6500</td>\n <td>-70.4000</td>\n <td>60.1</td>\n <td>80</td>\n <td>52</td>\n <td>2.4</td>\n <td>CL</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>26</th>\n <td>Ōdachō-ōda</td>\n <td>35.1833</td>\n <td>132.5000</td>\n <td>65.2</td>\n <td>97</td>\n <td>100</td>\n <td>9.0</td>\n <td>JP</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>27</th>\n <td>Ürümqi</td>\n <td>43.8010</td>\n <td>87.6005</td>\n <td>78.0</td>\n <td>24</td>\n <td>0</td>\n <td>3.8</td>\n <td>CN</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>28</th>\n <td>Aquiraz</td>\n <td>-3.9014</td>\n <td>-38.3911</td>\n <td>80.7</td>\n <td>75</td>\n <td>0</td>\n <td>7.9</td>\n <td>BR</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>29</th>\n <td>Arraial do Cabo</td>\n <td>-22.9661</td>\n <td>-42.0278</td>\n <td>74.3</td>\n <td>83</td>\n <td>0</td>\n <td>6.7</td>\n <td>BR</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>30</th>\n <td>Arrondissement de Montbrison</td>\n <td>45.6667</td>\n <td>4.0833</td>\n <td>63.7</td>\n <td>88</td>\n <td>22</td>\n <td>5.1</td>\n <td>FR</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>31</th>\n <td>Atambua</td>\n <td>-9.1061</td>\n <td>124.8925</td>\n <td>71.3</td>\n <td>90</td>\n <td>16</td>\n <td>4.4</td>\n <td>ID</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>32</th>\n <td>Atar</td>\n <td>20.5169</td>\n <td>-13.0499</td>\n <td>94.7</td>\n <td>23</td>\n <td>92</td>\n <td>15.1</td>\n <td>MR</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>33</th>\n <td>Atuona</td>\n <td>-9.8000</td>\n <td>-139.0333</td>\n <td>79.3</td>\n <td>82</td>\n <td>48</td>\n <td>18.4</td>\n <td>PF</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>34</th>\n <td>Avarua</td>\n <td>-21.2078</td>\n <td>-159.7750</td>\n <td>80.7</td>\n <td>74</td>\n <td>20</td>\n <td>12.7</td>\n <td>CK</td>\n <td>6/3/2021 
22:21</td>\n </tr>\n <tr>\n <th>35</th>\n <td>Aykhal</td>\n <td>66.0000</td>\n <td>111.5000</td>\n <td>52.8</td>\n <td>58</td>\n <td>100</td>\n <td>5.7</td>\n <td>RU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>36</th>\n <td>Bāfq</td>\n <td>31.6128</td>\n <td>55.4107</td>\n <td>90.9</td>\n <td>10</td>\n <td>0</td>\n <td>3.0</td>\n <td>IR</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>37</th>\n <td>Bāneh</td>\n <td>35.9975</td>\n <td>45.8853</td>\n <td>56.9</td>\n <td>31</td>\n <td>1</td>\n <td>4.4</td>\n <td>IR</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>38</th>\n <td>Bārmer</td>\n <td>25.7500</td>\n <td>71.3833</td>\n <td>84.3</td>\n <td>60</td>\n <td>0</td>\n <td>8.2</td>\n <td>IN</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>39</th>\n <td>Babahoyo</td>\n <td>-1.8167</td>\n <td>-79.5167</td>\n <td>82.1</td>\n <td>71</td>\n <td>69</td>\n <td>4.7</td>\n <td>EC</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>40</th>\n <td>Bairiki Village</td>\n <td>1.3292</td>\n <td>172.9752</td>\n <td>86.0</td>\n <td>74</td>\n <td>40</td>\n <td>9.9</td>\n <td>KI</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>41</th>\n <td>Balaipungut</td>\n <td>1.0500</td>\n <td>101.2833</td>\n <td>79.2</td>\n <td>97</td>\n <td>97</td>\n <td>2.5</td>\n <td>ID</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>42</th>\n <td>Bambous Virieux</td>\n <td>-20.3428</td>\n <td>57.7575</td>\n <td>71.9</td>\n <td>88</td>\n <td>40</td>\n <td>17.1</td>\n <td>MU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>43</th>\n <td>Banda Aceh</td>\n <td>5.5577</td>\n <td>95.3222</td>\n <td>80.0</td>\n <td>84</td>\n <td>96</td>\n <td>8.1</td>\n <td>ID</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>44</th>\n <td>Baraki Barak</td>\n <td>33.9675</td>\n <td>68.9486</td>\n <td>68.8</td>\n <td>23</td>\n <td>0</td>\n <td>2.5</td>\n <td>AF</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>45</th>\n <td>Barrow</td>\n <td>71.2906</td>\n <td>-156.7887</td>\n <td>33.8</td>\n <td>75</td>\n <td>90</td>\n <td>17.3</td>\n <td>US</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>46</th>\n <td>Bata</td>\n <td>1.8639</td>\n <td>9.7658</td>\n <td>78.7</td>\n <td>100</td>\n <td>20</td>\n <td>4.6</td>\n <td>GQ</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>47</th>\n <td>Bathsheba</td>\n <td>13.2167</td>\n <td>-59.5167</td>\n <td>83.0</td>\n <td>61</td>\n <td>20</td>\n <td>17.0</td>\n <td>BB</td>\n <td>6/3/2021 22:22</td>\n </tr>\n <tr>\n <th>48</th>\n <td>Bay Roberts</td>\n <td>47.5999</td>\n <td>-53.2648</td>\n <td>67.0</td>\n <td>54</td>\n <td>77</td>\n <td>5.0</td>\n <td>CA</td>\n <td>6/3/2021 22:22</td>\n </tr>\n <tr>\n <th>49</th>\n <td>Belaya Gora</td>\n <td>68.5333</td>\n <td>146.4167</td>\n <td>68.1</td>\n <td>25</td>\n <td>80</td>\n <td>14.2</td>\n <td>RU</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>50</th>\n <td>Belleville</td>\n <td>44.1788</td>\n <td>-77.3705</td>\n <td>69.9</td>\n <td>77</td>\n <td>75</td>\n <td>3.0</td>\n <td>CA</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>51</th>\n <td>Beloha</td>\n <td>-25.1667</td>\n <td>45.0500</td>\n <td>67.1</td>\n <td>90</td>\n <td>100</td>\n <td>5.6</td>\n <td>MG</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>52</th>\n <td>Belomorsk</td>\n <td>64.5232</td>\n <td>34.7668</td>\n <td>57.9</td>\n <td>66</td>\n <td>18</td>\n <td>6.7</td>\n <td>RU</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>53</th>\n <td>Belyy Yar</td>\n <td>53.6039</td>\n <td>91.3903</td>\n <td>62.5</td>\n <td>77</td>\n <td>0</td>\n <td>4.0</td>\n <td>RU</td>\n 
<td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>54</th>\n <td>Bengkulu</td>\n <td>-3.8004</td>\n <td>102.2655</td>\n <td>76.6</td>\n <td>90</td>\n <td>91</td>\n <td>2.8</td>\n <td>ID</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>55</th>\n <td>Bensonville</td>\n <td>6.4461</td>\n <td>-10.6125</td>\n <td>77.7</td>\n <td>90</td>\n <td>100</td>\n <td>4.2</td>\n <td>LR</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>56</th>\n <td>Berbérati</td>\n <td>4.2612</td>\n <td>15.7922</td>\n <td>69.9</td>\n <td>95</td>\n <td>94</td>\n <td>2.3</td>\n <td>CF</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>57</th>\n <td>Berbera</td>\n <td>10.4396</td>\n <td>45.0143</td>\n <td>87.4</td>\n <td>71</td>\n <td>64</td>\n <td>11.5</td>\n <td>SO</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>58</th>\n <td>Beringovskiy</td>\n <td>63.0500</td>\n <td>179.3167</td>\n <td>35.4</td>\n <td>82</td>\n <td>80</td>\n <td>4.2</td>\n <td>RU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>59</th>\n <td>Berlevåg</td>\n <td>70.8578</td>\n <td>29.0864</td>\n <td>51.3</td>\n <td>65</td>\n <td>6</td>\n <td>13.1</td>\n <td>NO</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>60</th>\n <td>Bethanien</td>\n <td>-26.5020</td>\n <td>17.1583</td>\n <td>52.5</td>\n <td>52</td>\n <td>0</td>\n <td>7.6</td>\n <td>NaN</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>61</th>\n <td>Bethel</td>\n <td>41.3712</td>\n <td>-73.4140</td>\n <td>68.2</td>\n <td>86</td>\n <td>100</td>\n <td>7.5</td>\n <td>US</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>62</th>\n <td>Bilibino</td>\n <td>68.0546</td>\n <td>166.4372</td>\n <td>57.8</td>\n <td>34</td>\n <td>12</td>\n <td>1.7</td>\n <td>RU</td>\n <td>6/3/2021 22:22</td>\n </tr>\n <tr>\n <th>63</th>\n <td>Bilma</td>\n <td>18.6853</td>\n <td>12.9164</td>\n <td>94.2</td>\n <td>11</td>\n <td>88</td>\n <td>3.6</td>\n <td>NE</td>\n <td>6/3/2021 22:21</td>\n </tr>\n <tr>\n <th>64</th>\n <td>Bima</td>\n <td>-8.4667</td>\n <td>118.7167</td>\n <td>73.7</td>\n <td>94</td>\n <td>67</td>\n <td>3.9</td>\n <td>ID</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>65</th>\n <td>Bintulu</td>\n <td>3.1667</td>\n <td>113.0333</td>\n <td>81.0</td>\n <td>94</td>\n <td>40</td>\n <td>2.3</td>\n <td>MY</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>66</th>\n <td>Black River</td>\n <td>18.0264</td>\n <td>-77.8487</td>\n <td>84.0</td>\n <td>72</td>\n <td>67</td>\n <td>4.5</td>\n <td>JM</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>67</th>\n <td>Blatnica</td>\n <td>44.4864</td>\n <td>17.8233</td>\n <td>52.2</td>\n <td>84</td>\n <td>45</td>\n <td>4.3</td>\n <td>BA</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>68</th>\n <td>Bluff</td>\n <td>-46.6000</td>\n <td>168.3333</td>\n <td>49.8</td>\n <td>84</td>\n <td>97</td>\n <td>14.2</td>\n <td>NZ</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>69</th>\n <td>Boende</td>\n <td>-0.2167</td>\n <td>20.8667</td>\n <td>72.5</td>\n <td>88</td>\n <td>26</td>\n <td>2.1</td>\n <td>CD</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>70</th>\n <td>Boma</td>\n <td>7.0805</td>\n <td>-2.1697</td>\n <td>75.2</td>\n <td>94</td>\n <td>32</td>\n <td>3.6</td>\n <td>GH</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>71</th>\n <td>Bontang</td>\n <td>0.1333</td>\n <td>117.5000</td>\n <td>79.9</td>\n <td>84</td>\n <td>99</td>\n <td>3.0</td>\n <td>ID</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>72</th>\n <td>Boueni</td>\n <td>-12.9025</td>\n <td>45.0761</td>\n <td>77.0</td>\n <td>72</td>\n <td>32</td>\n <td>5.6</td>\n <td>YT</td>\n <td>6/3/2021 
22:26</td>\n </tr>\n <tr>\n <th>73</th>\n <td>Brae</td>\n <td>60.3964</td>\n <td>-1.3530</td>\n <td>51.7</td>\n <td>92</td>\n <td>41</td>\n <td>13.7</td>\n <td>GB</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>74</th>\n <td>Bredasdorp</td>\n <td>-34.5322</td>\n <td>20.0403</td>\n <td>47.5</td>\n <td>87</td>\n <td>100</td>\n <td>4.7</td>\n <td>ZA</td>\n <td>6/3/2021 22:20</td>\n </tr>\n <tr>\n <th>75</th>\n <td>Brigantine</td>\n <td>39.4101</td>\n <td>-74.3646</td>\n <td>71.2</td>\n <td>86</td>\n <td>1</td>\n <td>16.3</td>\n <td>US</td>\n <td>6/3/2021 22:22</td>\n </tr>\n <tr>\n <th>76</th>\n <td>Broome</td>\n <td>42.2506</td>\n <td>-75.8330</td>\n <td>69.4</td>\n <td>87</td>\n <td>90</td>\n <td>1.0</td>\n <td>US</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>77</th>\n <td>Brownfield</td>\n <td>33.1812</td>\n <td>-102.2744</td>\n <td>77.0</td>\n <td>33</td>\n <td>99</td>\n <td>3.8</td>\n <td>US</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>78</th>\n <td>Bubaque</td>\n <td>11.2833</td>\n <td>-15.8333</td>\n <td>78.5</td>\n <td>76</td>\n <td>57</td>\n <td>14.7</td>\n <td>GW</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>79</th>\n <td>Buchanan</td>\n <td>5.8808</td>\n <td>-10.0467</td>\n <td>80.5</td>\n <td>83</td>\n <td>96</td>\n <td>6.4</td>\n <td>LR</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>80</th>\n <td>Busselton</td>\n <td>-33.6500</td>\n <td>115.3333</td>\n <td>47.1</td>\n <td>83</td>\n <td>11</td>\n <td>6.1</td>\n <td>AU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>81</th>\n <td>Butaritari</td>\n <td>3.0707</td>\n <td>172.7902</td>\n <td>83.1</td>\n <td>76</td>\n <td>92</td>\n <td>12.2</td>\n <td>KI</td>\n <td>6/3/2021 22:22</td>\n </tr>\n <tr>\n <th>82</th>\n <td>Byron Bay</td>\n <td>-28.6500</td>\n <td>153.6167</td>\n <td>60.4</td>\n <td>71</td>\n <td>0</td>\n <td>10.8</td>\n <td>AU</td>\n <td>6/3/2021 22:24</td>\n </tr>\n <tr>\n <th>83</th>\n <td>Córdoba</td>\n <td>-31.4135</td>\n <td>-64.1811</td>\n <td>63.5</td>\n <td>63</td>\n <td>0</td>\n <td>8.9</td>\n <td>AR</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>84</th>\n <td>Cabedelo</td>\n <td>-6.9811</td>\n <td>-34.8339</td>\n <td>77.7</td>\n <td>77</td>\n <td>23</td>\n <td>11.3</td>\n <td>BR</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>85</th>\n <td>Cabo San Lucas</td>\n <td>22.8909</td>\n <td>-109.9124</td>\n <td>88.4</td>\n <td>40</td>\n <td>1</td>\n <td>13.8</td>\n <td>MX</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>86</th>\n <td>Călăraşi</td>\n <td>47.2544</td>\n <td>28.3081</td>\n <td>59.0</td>\n <td>88</td>\n <td>100</td>\n <td>7.9</td>\n <td>MD</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>87</th>\n <td>Camalú</td>\n <td>30.8500</td>\n <td>-116.0667</td>\n <td>78.4</td>\n <td>63</td>\n <td>0</td>\n <td>8.3</td>\n <td>MX</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>88</th>\n <td>Cap Malheureux</td>\n <td>-19.9842</td>\n <td>57.6142</td>\n <td>74.5</td>\n <td>75</td>\n <td>25</td>\n <td>6.0</td>\n <td>MU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>89</th>\n <td>Cape Town</td>\n <td>-33.9258</td>\n <td>18.4232</td>\n <td>50.4</td>\n <td>100</td>\n <td>0</td>\n <td>2.3</td>\n <td>ZA</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>90</th>\n <td>Caravelas</td>\n <td>-17.7125</td>\n <td>-39.2481</td>\n <td>73.1</td>\n <td>86</td>\n <td>98</td>\n <td>5.5</td>\n <td>BR</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>91</th>\n <td>Carnarvon</td>\n <td>-24.8667</td>\n <td>113.6333</td>\n <td>51.9</td>\n <td>93</td>\n <td>1</td>\n <td>10.4</td>\n 
<td>AU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>92</th>\n <td>Carutapera</td>\n <td>-1.1950</td>\n <td>-46.0200</td>\n <td>77.5</td>\n <td>89</td>\n <td>18</td>\n <td>7.3</td>\n <td>BR</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>93</th>\n <td>Casper</td>\n <td>42.8666</td>\n <td>-106.3131</td>\n <td>86.7</td>\n <td>15</td>\n <td>1</td>\n <td>5.0</td>\n <td>US</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>94</th>\n <td>Castro</td>\n <td>-24.7911</td>\n <td>-50.0119</td>\n <td>61.1</td>\n <td>87</td>\n <td>32</td>\n <td>1.2</td>\n <td>BR</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>95</th>\n <td>Cayenne</td>\n <td>4.9333</td>\n <td>-52.3333</td>\n <td>81.1</td>\n <td>82</td>\n <td>90</td>\n <td>1.0</td>\n <td>GF</td>\n <td>6/3/2021 22:22</td>\n </tr>\n <tr>\n <th>96</th>\n <td>Cedar City</td>\n <td>37.6775</td>\n <td>-113.0619</td>\n <td>91.6</td>\n <td>6</td>\n <td>1</td>\n <td>11.5</td>\n <td>US</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>97</th>\n <td>Celestún</td>\n <td>20.8667</td>\n <td>-90.4000</td>\n <td>83.6</td>\n <td>70</td>\n <td>46</td>\n <td>15.5</td>\n <td>MX</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>98</th>\n <td>Chalmette</td>\n <td>29.9427</td>\n <td>-89.9634</td>\n <td>81.1</td>\n <td>65</td>\n <td>1</td>\n <td>2.0</td>\n <td>US</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>99</th>\n <td>Champerico</td>\n <td>14.3000</td>\n <td>-91.9167</td>\n <td>90.0</td>\n <td>65</td>\n <td>8</td>\n <td>6.4</td>\n <td>GT</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>100</th>\n <td>Cherskiy</td>\n <td>68.7500</td>\n <td>161.3000</td>\n <td>66.2</td>\n <td>28</td>\n <td>53</td>\n <td>7.0</td>\n <td>RU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>101</th>\n <td>Chesma</td>\n <td>53.8111</td>\n <td>60.6533</td>\n <td>49.7</td>\n <td>94</td>\n <td>100</td>\n <td>7.8</td>\n <td>RU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>102</th>\n <td>Chicama</td>\n <td>-7.8447</td>\n <td>-79.1469</td>\n <td>64.9</td>\n <td>70</td>\n <td>51</td>\n <td>8.1</td>\n <td>PE</td>\n <td>6/3/2021 22:23</td>\n </tr>\n <tr>\n <th>103</th>\n <td>Chifeng</td>\n <td>42.2683</td>\n <td>118.9636</td>\n <td>56.3</td>\n <td>60</td>\n <td>76</td>\n <td>11.5</td>\n <td>CN</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>104</th>\n <td>Chokurdakh</td>\n <td>70.6333</td>\n <td>147.9167</td>\n <td>57.3</td>\n <td>58</td>\n <td>100</td>\n <td>2.5</td>\n <td>RU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>105</th>\n <td>Chortomlyk</td>\n <td>47.6232</td>\n <td>34.1420</td>\n <td>56.0</td>\n <td>91</td>\n <td>100</td>\n <td>15.9</td>\n <td>UA</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>106</th>\n <td>Chui</td>\n <td>-33.6971</td>\n <td>-53.4616</td>\n <td>52.2</td>\n <td>64</td>\n <td>40</td>\n <td>7.1</td>\n <td>UY</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>107</th>\n <td>Cidreira</td>\n <td>-30.1811</td>\n <td>-50.2056</td>\n <td>59.9</td>\n <td>80</td>\n <td>0</td>\n <td>6.9</td>\n <td>BR</td>\n <td>6/3/2021 22:20</td>\n </tr>\n <tr>\n <th>108</th>\n <td>Clarksdale</td>\n <td>34.2001</td>\n <td>-90.5709</td>\n <td>82.3</td>\n <td>51</td>\n <td>75</td>\n <td>6.7</td>\n <td>US</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>109</th>\n <td>Clyde River</td>\n <td>70.4692</td>\n <td>-68.5914</td>\n <td>28.6</td>\n <td>93</td>\n <td>90</td>\n <td>13.8</td>\n <td>CA</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>110</th>\n <td>Cockburn Town</td>\n <td>21.4612</td>\n <td>-71.1419</td>\n <td>79.8</td>\n <td>80</td>\n 
[DataFrame output condensed: rows 110–505 of the city weather table, each row giving city, latitude, longitude, temperature, humidity (%), cloudiness (%), wind speed, country code, and an observation timestamp from 6/3/2021. Representative rows:]

|     | City       | Lat      | Lng       | Temp | Humidity | Cloudiness | Wind Speed | Country | Date           |
|-----|------------|----------|-----------|------|----------|------------|------------|---------|----------------|
| 111 | Codrington | -38.2667 | 141.9667  | 51.2 | 70       | 64         | 17.6       | AU      | 6/3/2021 22:22 |
| …   | …          | …        | …         | …    | …        | …          | …          | …       | …              |
| 427 | Rikitea    | -23.1203 | -134.9692 | 72.6 | 65       | 89         | 15.9       | PF      | 6/3/2021 22:20 |
| …   | …          | …        | …         | …    | …        | …          | …          | …       | …              |
| 504 | Touros     | -5.1989  | -35.4608  | 77.9 | 74       | 20         | 12.5       | BR      | 6/3/2021 22:22 |
<td>18.9570</td>\n <td>55.0</td>\n <td>62</td>\n <td>0</td>\n <td>2.3</td>\n <td>NO</td>\n <td>6/3/2021 22:21</td>\n </tr>\n <tr>\n <th>506</th>\n <td>Tual</td>\n <td>-5.6667</td>\n <td>132.7500</td>\n <td>81.8</td>\n <td>75</td>\n <td>64</td>\n <td>18.9</td>\n <td>ID</td>\n <td>6/3/2021 22:24</td>\n </tr>\n <tr>\n <th>507</th>\n <td>Tuktoyaktuk</td>\n <td>69.4541</td>\n <td>-133.0374</td>\n <td>44.6</td>\n <td>70</td>\n <td>75</td>\n <td>15.0</td>\n <td>CA</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>508</th>\n <td>Tumpat</td>\n <td>6.1978</td>\n <td>102.1710</td>\n <td>78.8</td>\n <td>94</td>\n <td>20</td>\n <td>5.1</td>\n <td>MY</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>509</th>\n <td>Two Hills</td>\n <td>53.7169</td>\n <td>-111.7518</td>\n <td>81.3</td>\n <td>39</td>\n <td>100</td>\n <td>13.0</td>\n <td>CA</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>510</th>\n <td>Tyup</td>\n <td>42.7276</td>\n <td>78.3648</td>\n <td>54.2</td>\n <td>78</td>\n <td>86</td>\n <td>3.8</td>\n <td>KG</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>511</th>\n <td>Ubari</td>\n <td>26.5921</td>\n <td>12.7805</td>\n <td>92.6</td>\n <td>14</td>\n <td>33</td>\n <td>6.6</td>\n <td>LY</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>512</th>\n <td>Ucluelet</td>\n <td>48.9329</td>\n <td>-125.5528</td>\n <td>55.9</td>\n <td>84</td>\n <td>42</td>\n <td>5.0</td>\n <td>CA</td>\n <td>6/3/2021 22:24</td>\n </tr>\n <tr>\n <th>513</th>\n <td>Udachny</td>\n <td>66.4167</td>\n <td>112.4000</td>\n <td>53.3</td>\n <td>60</td>\n <td>100</td>\n <td>2.4</td>\n <td>RU</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>514</th>\n <td>Ulaangom</td>\n <td>49.9811</td>\n <td>92.0667</td>\n <td>61.7</td>\n <td>48</td>\n <td>33</td>\n <td>0.5</td>\n <td>MN</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>515</th>\n <td>Upernavik</td>\n <td>72.7868</td>\n <td>-56.1549</td>\n <td>35.0</td>\n <td>69</td>\n <td>10</td>\n <td>2.8</td>\n <td>GL</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>516</th>\n <td>Ushuaia</td>\n <td>-54.8000</td>\n <td>-68.3000</td>\n <td>26.3</td>\n <td>93</td>\n <td>0</td>\n <td>5.8</td>\n <td>AR</td>\n <td>6/3/2021 22:20</td>\n </tr>\n <tr>\n <th>517</th>\n <td>Usinsk</td>\n <td>65.9939</td>\n <td>57.5281</td>\n <td>41.1</td>\n <td>84</td>\n <td>5</td>\n <td>10.2</td>\n <td>RU</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>518</th>\n <td>Ust-Kuyga</td>\n <td>70.0167</td>\n <td>135.6000</td>\n <td>56.7</td>\n <td>74</td>\n <td>100</td>\n <td>3.1</td>\n <td>RU</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>519</th>\n <td>Ust-Nera</td>\n <td>64.5667</td>\n <td>143.2000</td>\n <td>57.2</td>\n <td>38</td>\n <td>21</td>\n <td>4.9</td>\n <td>RU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>520</th>\n <td>Vaini</td>\n <td>-21.2000</td>\n <td>-175.2000</td>\n <td>77.2</td>\n <td>78</td>\n <td>20</td>\n <td>4.6</td>\n <td>TO</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>521</th>\n <td>Vaitape</td>\n <td>-16.5167</td>\n <td>-151.7500</td>\n <td>80.0</td>\n <td>78</td>\n <td>53</td>\n <td>9.2</td>\n <td>PF</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>522</th>\n <td>Valença</td>\n <td>-13.3703</td>\n <td>-39.0731</td>\n <td>70.6</td>\n <td>92</td>\n <td>20</td>\n <td>3.2</td>\n <td>BR</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>523</th>\n <td>Vanimo</td>\n <td>-2.6741</td>\n <td>141.3028</td>\n <td>78.7</td>\n <td>87</td>\n <td>86</td>\n <td>2.5</td>\n <td>PG</td>\n <td>6/3/2021 22:22</td>\n </tr>\n <tr>\n <th>524</th>\n <td>Varberg</td>\n 
<td>57.1056</td>\n <td>12.2508</td>\n <td>58.9</td>\n <td>61</td>\n <td>99</td>\n <td>11.2</td>\n <td>SE</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>525</th>\n <td>Vardø</td>\n <td>70.3705</td>\n <td>31.1107</td>\n <td>51.0</td>\n <td>63</td>\n <td>20</td>\n <td>10.0</td>\n <td>NO</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>526</th>\n <td>Vershino-Shakhtaminskiy</td>\n <td>51.3012</td>\n <td>117.8868</td>\n <td>45.7</td>\n <td>93</td>\n <td>68</td>\n <td>5.2</td>\n <td>RU</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>527</th>\n <td>Vestmannaeyjar</td>\n <td>63.4427</td>\n <td>-20.2734</td>\n <td>48.7</td>\n <td>82</td>\n <td>100</td>\n <td>25.4</td>\n <td>IS</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>528</th>\n <td>Vila</td>\n <td>42.0304</td>\n <td>-8.1588</td>\n <td>45.9</td>\n <td>90</td>\n <td>7</td>\n <td>4.3</td>\n <td>PT</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>529</th>\n <td>Vila Franca do Campo</td>\n <td>37.7167</td>\n <td>-25.4333</td>\n <td>61.2</td>\n <td>77</td>\n <td>60</td>\n <td>4.1</td>\n <td>PT</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>530</th>\n <td>Vila Velha</td>\n <td>-20.3297</td>\n <td>-40.2925</td>\n <td>75.0</td>\n <td>89</td>\n <td>0</td>\n <td>5.0</td>\n <td>BR</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>531</th>\n <td>Vostok</td>\n <td>46.4856</td>\n <td>135.8833</td>\n <td>47.5</td>\n <td>99</td>\n <td>100</td>\n <td>3.7</td>\n <td>RU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>532</th>\n <td>Vozhega</td>\n <td>60.4725</td>\n <td>40.2213</td>\n <td>47.8</td>\n <td>78</td>\n <td>5</td>\n <td>4.6</td>\n <td>RU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>533</th>\n <td>Waingapu</td>\n <td>-9.6567</td>\n <td>120.2641</td>\n <td>75.7</td>\n <td>89</td>\n <td>52</td>\n <td>6.5</td>\n <td>ID</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>534</th>\n <td>Walvis Bay</td>\n <td>-22.9575</td>\n <td>14.5053</td>\n <td>74.4</td>\n <td>11</td>\n <td>0</td>\n <td>10.4</td>\n <td>NaN</td>\n <td>6/3/2021 22:28</td>\n </tr>\n <tr>\n <th>535</th>\n <td>Weatherford</td>\n <td>32.7593</td>\n <td>-97.7972</td>\n <td>79.3</td>\n <td>69</td>\n <td>87</td>\n <td>3.0</td>\n <td>US</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>536</th>\n <td>West Linton</td>\n <td>55.7497</td>\n <td>-3.3561</td>\n <td>52.5</td>\n <td>68</td>\n <td>38</td>\n <td>5.2</td>\n <td>GB</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>537</th>\n <td>Willimantic</td>\n <td>41.7107</td>\n <td>-72.2081</td>\n <td>67.4</td>\n <td>84</td>\n <td>90</td>\n <td>5.8</td>\n <td>US</td>\n <td>6/3/2021 22:24</td>\n </tr>\n <tr>\n <th>538</th>\n <td>Winslow</td>\n <td>35.0242</td>\n <td>-110.6974</td>\n <td>92.0</td>\n <td>13</td>\n <td>1</td>\n <td>3.0</td>\n <td>US</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>539</th>\n <td>Wollongong</td>\n <td>-34.4333</td>\n <td>150.8833</td>\n <td>55.6</td>\n <td>71</td>\n <td>29</td>\n <td>10.9</td>\n <td>AU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>540</th>\n <td>Xai-Xai</td>\n <td>-25.0519</td>\n <td>33.6442</td>\n <td>64.1</td>\n <td>87</td>\n <td>94</td>\n <td>23.0</td>\n <td>MZ</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>541</th>\n <td>Yārāda</td>\n <td>17.6500</td>\n <td>83.2667</td>\n <td>80.9</td>\n <td>94</td>\n <td>20</td>\n <td>10.5</td>\n <td>IN</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>542</th>\n <td>Yazman</td>\n <td>29.1212</td>\n <td>71.7446</td>\n <td>86.8</td>\n <td>47</td>\n <td>31</td>\n <td>3.0</td>\n <td>PK</td>\n <td>6/3/2021 22:27</td>\n 
</tr>\n <tr>\n <th>543</th>\n <td>Yellowknife</td>\n <td>62.4560</td>\n <td>-114.3525</td>\n <td>63.1</td>\n <td>63</td>\n <td>75</td>\n <td>15.0</td>\n <td>CA</td>\n <td>6/3/2021 22:20</td>\n </tr>\n <tr>\n <th>544</th>\n <td>Yeppoon</td>\n <td>-23.1333</td>\n <td>150.7333</td>\n <td>66.0</td>\n <td>82</td>\n <td>92</td>\n <td>6.3</td>\n <td>AU</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>545</th>\n <td>Yerbogachën</td>\n <td>61.2767</td>\n <td>108.0108</td>\n <td>59.8</td>\n <td>65</td>\n <td>62</td>\n <td>3.3</td>\n <td>RU</td>\n <td>6/3/2021 22:24</td>\n </tr>\n <tr>\n <th>546</th>\n <td>Yermakovskoye</td>\n <td>53.2831</td>\n <td>92.4003</td>\n <td>60.9</td>\n <td>77</td>\n <td>100</td>\n <td>2.6</td>\n <td>RU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>547</th>\n <td>Yoichi</td>\n <td>43.2039</td>\n <td>140.7703</td>\n <td>62.5</td>\n <td>84</td>\n <td>100</td>\n <td>13.4</td>\n <td>JP</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>548</th>\n <td>Yugorsk</td>\n <td>61.3133</td>\n <td>63.3319</td>\n <td>34.6</td>\n <td>63</td>\n <td>0</td>\n <td>4.9</td>\n <td>RU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>549</th>\n <td>Yulara</td>\n <td>-25.2406</td>\n <td>130.9889</td>\n <td>46.2</td>\n <td>81</td>\n <td>81</td>\n <td>5.8</td>\n <td>AU</td>\n <td>6/3/2021 22:23</td>\n </tr>\n <tr>\n <th>550</th>\n <td>Yumen</td>\n <td>40.2833</td>\n <td>97.2000</td>\n <td>63.0</td>\n <td>9</td>\n <td>0</td>\n <td>4.8</td>\n <td>CN</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>551</th>\n <td>Zafra</td>\n <td>38.4167</td>\n <td>-6.4167</td>\n <td>61.4</td>\n <td>44</td>\n <td>7</td>\n <td>8.2</td>\n <td>ES</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>552</th>\n <td>Zavetnoye</td>\n <td>47.1194</td>\n <td>43.8903</td>\n <td>56.7</td>\n <td>88</td>\n <td>12</td>\n <td>14.8</td>\n <td>RU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>553</th>\n <td>Zenzeli</td>\n <td>45.9241</td>\n <td>47.0474</td>\n <td>64.0</td>\n <td>63</td>\n <td>0</td>\n <td>10.3</td>\n <td>RU</td>\n <td>6/3/2021 22:26</td>\n </tr>\n <tr>\n <th>554</th>\n <td>Zhezkazgan</td>\n <td>47.8043</td>\n <td>67.7144</td>\n <td>67.1</td>\n <td>65</td>\n <td>1</td>\n <td>5.8</td>\n <td>KZ</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>555</th>\n <td>Zhigansk</td>\n <td>66.7697</td>\n <td>123.3711</td>\n <td>62.0</td>\n <td>54</td>\n <td>100</td>\n <td>6.0</td>\n <td>RU</td>\n <td>6/3/2021 22:23</td>\n </tr>\n <tr>\n <th>556</th>\n <td>Zhuhai</td>\n <td>22.2769</td>\n <td>113.5678</td>\n <td>84.3</td>\n <td>89</td>\n <td>40</td>\n <td>6.1</td>\n <td>CN</td>\n <td>6/3/2021 22:25</td>\n </tr>\n <tr>\n <th>557</th>\n <td>Zonguldak</td>\n <td>41.2500</td>\n <td>31.8333</td>\n <td>49.4</td>\n <td>95</td>\n <td>100</td>\n <td>5.3</td>\n <td>TR</td>\n <td>6/3/2021 22:27</td>\n </tr>\n <tr>\n <th>558</th>\n <td>Zunyi</td>\n <td>27.6867</td>\n <td>106.9072</td>\n <td>59.4</td>\n <td>91</td>\n <td>100</td>\n <td>1.3</td>\n <td>CN</td>\n <td>6/3/2021 22:28</td>\n </tr>\n </tbody>\n</table>\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e7249467ce9f8a941a727e033005a2dfb98eb983
45,016
ipynb
Jupyter Notebook
Demo/Standalone_VQA_demo.ipynb
zeryabmoussaoui/LightVQA
57f5e38e0a7db76e06f65022cd09283cd35d2cd9
[ "MIT", "Unlicense" ]
3
2019-10-08T01:41:47.000Z
2020-05-29T02:49:02.000Z
Demo/Standalone_VQA_demo.ipynb
zeryabmoussaoui/LightVQA
57f5e38e0a7db76e06f65022cd09283cd35d2cd9
[ "MIT", "Unlicense" ]
2
2021-02-01T07:31:51.000Z
2021-03-21T15:15:33.000Z
Demo/Standalone_VQA_demo.ipynb
zeryabmoussaoui/LightVQA
57f5e38e0a7db76e06f65022cd09283cd35d2cd9
[ "MIT", "Unlicense" ]
2
2019-08-31T22:41:02.000Z
2020-09-25T10:58:06.000Z
40.960874
483
0.463502
[ [ [ "# VQA Demo\n", "_____no_output_____" ], [ "# Imports and variables", "_____no_output_____" ] ], [ [ "import os\nimport io\nimport base64\nfrom IPython.display import HTML, Image\nimport IPython\nfrom google.colab.output import eval_js\nfrom base64 import b64decode\nfrom IPython.display import Image, display, clear_output\nimport json\nimport re\nfrom collections import Counter\nimport numpy as np\nimport os\nimport importlib.util\nfrom google.colab import drive\nimport tensorflow as tf\nfrom tensorflow import keras\nimport re\nimport numpy as np\nimport json\nfrom pathlib import Path\nimport pickle\nfrom collections import Counter\ntf.enable_eager_execution()\nimport cv2\nimport time\nfrom keras import regularizers\nfrom keras.models import load_model\nimport time\n\ndemo_root = ''\nvideoPath = os.path.join(demo_root, 'video.mp4')\nphotoPath = os.path.join(demo_root, 'photo.jpg')\ntrain_questions_file = os.path.join(demo_root,'v2_OpenEnded_mscoco_train2014_questions.json')\ntrain_answers_file = os.path.join(demo_root,'v2_mscoco_train2014_annotations.json')\nglove_file_path = os.path.join(demo_root,'glove.6B.300d.txt')\npretrained_model_path = os.path.join(demo_root, 'pretrainedModel.h5')\nSENTENCE_SPLIT_REGEX = re.compile(r'(\\W+)')\n\nMAX_LEN = 14\nFILL_TOKEN = 'FILL_TOKEN'\nANSWER_OCCURENCE_MIN = 15\nANSWER_DIM = 0\nIMAGE_NON_LINEAR_OUTPUT_DIM = TEXT_NON_LINEAR_OUTPUT_DIM = 512\nWORD_EMBEDDING_DIM = 300\nnum_occurence = 5\n", "Using TensorFlow backend.\n" ] ], [ [ "# Mount Google Drive (if you use it)", "_____no_output_____" ] ], [ [ "from google.colab import drive\nfrom google.colab import files\n\ndrive_root = '/content/gdrive'\ndrive.mount(drive_root)\n", "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code\n\nEnter your authorization code:\n··········\nMounted at /content/gdrive\n" ], [ "from IPython.display import display, HTML, Javascript\nfrom base64 import b64decode\nfrom IPython.display import Image, display, clear_output\n\nVIDEO_HTML = \"\"\"\n<video autoplay\n width=800 height=600></video>\n<script>\nvar video = document.querySelector('video')\nnavigator.mediaDevices.getUserMedia({ video: true })\n .then(stream=> video.srcObject = stream)\nvar button = document.querySelector('button')\n\nvar data = new Promise(resolve=>{\n video.onclick = ()=>{\n var canvas = document.createElement('canvas')\n var [w,h] = [video.offsetWidth, video.offsetHeight]\n canvas.width = w\n canvas.height = h\n canvas.getContext('2d')\n .drawImage(video, 0, 0, w, h)\n video.srcObject.getVideoTracks()[0].stop()\n video.replaceWith(canvas)\n resolve(canvas.toDataURL('image/jpeg', %f))\n }\n})\n</script>\n\"\"\"\ndef testphoto(videoPath = '/content/video.mp4', filename='/content/photo.jpg') :\n video = io.open(videoPath, 'r+b').read()\n encoded = base64.b64encode(video)\n x = '''<video controls autoplay width=800 height=600 >\n <source src=\"data:video/mp4;base64,'''+str(encoded.decode('ascii'))+''' \" type=\"video/mp4\" />\n </video>\n <button id =\"but\" type=\"button\">take screen</button>\n <script>\n \n var video = document.querySelector('video')\n var button = 
document.getElementById(\"but\");\n \n var data = new Promise(resolve=>{\n button.onclick = ()=>{\n\tconsole.log('hhhhhhhhheeeeeeloooooo')\n var canvas = document.createElement('canvas')\n\tcanvas.crossOrigin=\"anonymous\"\n var [w,h] = [video.offsetWidth, video.offsetHeight]\n canvas.width = w\n canvas.height = h\n canvas.getContext('2d')\n .drawImage(video, 0, 0, w, h)\n //video.srcObject.getVideoTracks()[0].stop()\n\tvideo.pause()\n video.replaceWith(canvas)\n resolve(canvas.toDataURL('image/jpeg', '%f'))\n }\n})\n </script>'''\n \n display(HTML(x % 0.8) )\n current = os.getcwd()\n os.chdir('/content')\n #display(HTML(VIDEO_HTML2 % quality))\n data = eval_js(\"data\")\n binary = b64decode(data.split(',')[1])\n with open(filename, 'wb') as f:\n f.write(binary)\n os.chdir(current)\n return binary\n \ndef take_photo(source, filename=photoPath, quality = 0.8):\n VIDEO_HTML2 = \"\"\"\n <video controls alt=\"test\" width=800 height=600 autoplay >\n <source src=\"/content/video.mp4\" type=\"video/mp4\">\n Sorry, not displaying\n </video>\n <script>\n var video = document.querySelector('video')\n //console.log(video)\n video.play(); \n var data = new Promise(resolve=>{\n video.onclick = ()=>{\n var canvas = document.createElement('canvas')\n var [w,h] = [video.offsetWidth, video.offsetHeight]\n canvas.width = w\n canvas.height = h\n canvas.getContext('2d')\n .drawImage(video, 0, 0, w, h)\n video.srcObject.getVideoTracks()[0].stop()\n video.replaceWith(canvas)\n resolve(canvas.toDataURL('image/jpeg', %f))\n }\n })\n </script>\n \n \"\"\"\n \n ss = \"\"\"\n \"\"\"\n current = os.getcwd()\n os.chdir(root_root)\n display(HTML(VIDEO_HTML2 % quality))\n data = eval_js(\"data\")\n binary = b64decode(data.split(',')[1])\n with open(filename, 'wb') as f:\n f.write(binary)\n os.chdir(current)\n return binary\n\ndef take_photo_live(filename=photoPath, quality=0.8):\n clear_output()\n display(HTML(VIDEO_HTML % quality))\n data = eval_js(\"data\")\n binary = b64decode(data.split(',')[1])\n with open(filename, 'wb') as f:\n f.write(binary)\n return binary", "_____no_output_____" ], [ "\n\ndef take_photo_live(filename=photoPath, quality=0.8):\n clear_output()\n display(HTML(VIDEO_HTML % quality))\n #data = display(IPython.display.Javascript(data))\n #data = eval_js(\"data\")\n binary = b64decode(data.split(',')[1])\n with open(filename, 'wb') as f:\n f.write(binary)\n return binary\n\ndef get_uploaded_image() :\n current = os.getcwd()\n os.chdir(demo_root)\n t = False \n uploaded = files.upload() \n if not uploaded :\n return None\n im_path = os.path.join(demo_root,list(uploaded.keys())[0])\n clear_output()\n display(Image(filename= im_path, width=700, height=500))\n os.chdir(current)\n return im_path\n\ndef get_uploaded_video() :\n current = os.getcwd()\n os.chdir(demo_root)\n t = False\n uploaded = files.upload()\n\n if not uploaded :\n return None\n im_path = os.path.join(demo_root,list(uploaded.keys())[0])\n clear_output()\n os.chdir(current)\n return im_path\n", "_____no_output_____" ], [ "\nclass QuestionsPreprocessing : \n \n def __init__(self, glove_file, questions_files, num_occurence) :\n self.gloveFile = glove_file\n self.questionsFiles = questions_files\n self.word2Glove = self.readGloveFile()\n questions = self.readQuestionsFiles()\n self.questions_files = questions_files\n #filter words that have occurence less than occurence\n words = self.filter_words(questions,num_occurence)\n #partition words from weather they are in glove or not\n self.word2Index, self.index2Word = 
self.matchWordIndex(words)\n #consider the 0 padding\n self.vocab_length = len(self.word2Index) + 1 \n \n \n def readGloveFile(self):\n with open(self.gloveFile, 'r') as f:\n wordToGlove = {} # map from a token (word) to a Glove embedding vector\n #wordToIndex = {} # map from a token to an index\n #indexToWord = {} # map from an index to a token \n\n for line in f:\n record = line.strip().split()\n token = record[0] # take the token (word) from the text line\n wordToGlove[token] = np.array(record[1:], dtype=np.float64) # associate the Glove embedding vector to a that token (word)\n\n return wordToGlove\n \n #return all words in questions\n #WARNING IMPORTANT : THIS FUNCTION IS ONLY FOR RETRIEVING QUESTIONS FROM VQA DATASET\n def readQuestionsFiles(self) : \n questions = []\n for file in self.questionsFiles :\n with open(file, 'r') as f : \n data = json.load(f)\n for x in data['questions']:\n questions.append(x['question'])\n return questions \n \n \n def filter_words(self,questions, num_occurence):\n words = {}\n for question in questions : \n for word in tokenize(question):\n if(word in words):\n words[word] +=1\n else :\n words[word] = 1\n return [k for k,v in words.items() if v > num_occurence]\n \n \n def matchWordIndex(self, words) : \n word2Index = {}\n index2Word = {}\n for i, word in enumerate(words):\n word2Index[word] = i+1\n index2Word[i+1] =word\n return word2Index, index2Word\n \n \n # create embedding matrix\n def createPretrainedEmbeddingLayer(self):\n wordToIndex = self.word2Index\n wordToGlove = self.word2Glove\n vocabLen = len(wordToIndex) + 1 # adding 1 to account for masking\n glove_words = wordToGlove.keys()\n embDim = next(iter(wordToGlove.values())).shape[0] \n embeddingMatrix = np.zeros((vocabLen, embDim), 'float64') # initialize with zeros\n for word, index in wordToIndex.items():\n if word in glove_words:\n embeddingMatrix[index, :] = wordToGlove[word] # create embedding: word index to Glove word embedding\n else : \n embeddingMatrix[index, :] = np.random.rand(1, embDim)\n return embeddingMatrix\n\n \n def get_embedding_matrix(self) : \n \n embedding_matrix = self.createPretrainedEmbeddingLayer()\n return embedding_matrix\n \n# TODO : check if there is another method like removing most common words \n def preprocessBatch(self,questions, max_len):\n batch = []\n for question in questions : \n words =self.preprocessElem(question)\n batch.append(num_words)\n return self.postTruncate(batch, max_len)\n\n def preprocessElem(self,question):\n words = self.preTruncate(question)\n words = tokenize(words)\n num_words = []\n for word in words :\n w = self.word2Index.get(word, 'not_found_word')\n if not w is 'not_found_word' :\n num_words.append(w)\n return num_words\n #private\n #TODO : implement\n def preTruncate(self, words): \n return words\n \n def postTruncate(self,batch, max_len) : \n return keras.preprocessing.sequence.pad_sequences(batch, max_len,padding ='post', truncating = 'post')\n \n \nclass AnswerPreprocessing : \n def __init__(self, questions_files, answers_files):\n global ANSWER_DIM\n assert len(questions_files) == len(answers_files), 'not the same length'\n self.answers, _ = self.get_answers(questions_files[0], answers_files[0])\n #for i, f in enumerate(questions_files):\n # s,m = self.get_answers(questions_files[i], answers_files[i])\n # self.answers = self.answers.union(s)\n self.word2Index = {}\n self.index2Word = {}\n self.word2Index, self.index2Word = self.matchWords2Indexes()\n ANSWER_DIM = len(self.word2Index)\n self.num_words = 
len(self.word2Index)\n \n def get_dim():\n return len(self.word2Index)\n #private\n def matchWords2Indexes(self): \n wordToIndex = {}\n indexToWord = {}\n words = self.answers\n words = self.filterWords(words)\n wordToIndex['no idea'] = 0\n indexToWord[0] = 'no idea'\n i = 1\n for w in words:\n if not w == 'no idea' :\n wordToIndex[w] = i\n indexToWord[i] = w\n i += 1\n return (wordToIndex, indexToWord)\n #private\n def readWords(self) :\n words = []\n for file in self.answer_files : \n with open(file, 'r') as f : \n data = json.load(f)\n for annotation in data['annotations'] : \n for answer in annotation['answers']:\n words.append(answer['answer'])\n return words\n \n #private\n def filterWords (self, words) : \n return words\n \n def preprocessBatch(self, answers) : \n batch = []\n ans = np.array(answers)\n if ans.ndim == 2 : \n for answers2 in answers : \n batch.append([self.word2Index[x] for x in answers2])\n else : \n for answer in answers: \n batch.append(self.word2Index[answer])\n return batch\n \n \n def _preprocessElem(self,ans):\n # if the dataset contains multiple answers : Exemple VQA dataset\n answer = np.array([x for x in ans if not x == FILL_TOKEN])\n if answer.ndim == 1 :\n arr = np.zeros((ANSWER_DIM,))\n if len(answer) == 0 :\n arr[0] = 1.0\n return arr\n else :\n #arr = np.zeros(self.num_words,dtype=int)\n \n value = 1/len(answer)\n found = False\n for i, a in enumerate(answer):\n if a in self.word2Index:\n found = True\n arr[self.word2Index[a]] += value\n if not found :\n arr[0] = 1.0\n return arr\n #if the answer is unique in the dataset\n else :\n return self.word2Index[answer]\n \n def preprocessElem(self,ans):\n answer = np.array([x for x in ans if not x == FILL_TOKEN], 'str')\n arr = np.zeros((ANSWER_DIM,))\n if len(answer) == 0 :\n arr[0] =1\n else :\n c = Counter(answer)\n t = [x for (x,y) in c.items() if x in self.word2Index]\n if t == [] :\n arr[0] = 1\n else :\n for elem, occur in c.items() :\n if elem in self.word2Index :\n index = self.word2Index[elem]\n score = 1 if occur >=3 else 1/3* occur\n arr[index] = score\n return arr \n \n def get_ques2(self,questions_file):\n with open(questions_file,'r') as f : \n data = json.load(f)\n questions = data['questions']\n qsid_iq = { x['question_id']: x['question'] for x in questions }\n return qsid_iq\n\n \n def get_answers( self, questions_file,answers_file) :\n global ANSWER_DIM\n numbers = [\"zero\",\"one\",\"two\",\"three\",\"four\",\n \"five\",\"six\",\"seven\",\"eight\",\"nine\",\"ten\",\n \"eleven\",\"twelve\",\"thirteen\",\"fourteen\",\"fifteen\",\n \"sixteen\",\"seventeen\",\"eighteen\",\"nineteen\"];\n tens = [\"Twenty\",\"Thirty\",\"Forty\",\"Fifty\",\n \"Sixty\",\"Seventy\",\"Eighty\",\"Ninety\"]\n id_qs = self.get_ques2(questions_file)\n\n with open(answers_file,'r') as f : \n data = json.load(f)\n resps = data['annotations']\n res = [] \n ids = []\n conf =[]\n for x in resps:\n question = id_qs[x['question_id']].strip().lower()\n questionID = x['question_id']\n ans = [y['answer'].replace(',', ' ').replace('?', '').replace('\\'s', ' \\'s').strip().lower() for y in x['answers'] ]\n res1 =[]\n for word in ans :\n if word == 'no 1' or 'no one' in word :\n res1.append('no one')\n elif word in ['no clue', \"i dont know\", \"i don't know\", \"cannot know\", \"can't know\", \"can't tell\", \"not sure\", \"don't know\", \"cannot tell\", \"unknown\"]:\n res1.append('no idea')\n elif word == 'my best guess is no' or \"it isn't\" in word or 'it is not' in word:\n res1.append('no')\n elif 'many' in word or 'several' 
in word or 'lot' in word or 'numerous' in word:\n res1.append('many')\n elif word in numbers :\n res1.append(str(numbers.index(word)))\n elif word in tens:\n res1.append(str((ten.index(word) + 2) * 10))\n else :\n res1.append(word)\n\n\n\n if question.startswith('how many') or question.startswith('what is the number'):\n for word in res1 :\n if re.search('(\\s|^)no ', word) or re.search(' no(\\s|$)',word):\n if word == 'no idea':\n res.append('no idea') \n else :\n res.append('0')\n elif word == 'o' :\n res.append(0)\n elif not len(re.findall('\\d+', word)) == 0:\n res.append(re.findall('\\d+', word)[0]) \n elif word == 'no' :\n res.append('0')\n elif word =='yes' :\n res.append('1')\n else :\n res.append(word)\n\n elif question.startswith('is') or question.startswith('are'):\n\n for word in res1 :\n if re.search('(\\s|^)no ', word) or re.search(' no(\\s|$)',word):\n res.append('no')\n elif word == 'it is' or 'yes' in word:\n res.append('yes')\n elif 'it is' in word :\n s = word.replace('it is', '').strip()\n res.append(s)\n continue\n else :\n res.append(word) \n\n else :\n for word in res1 :\n if word == 'it is' or 'yes' in word:\n res.append('yes')\n elif 'it is' in word :\n s = word.replace('it is', '').strip()\n res.append(s)\n elif ('there is no' in word) or (\"there's no\" in word) or ('there are no' in word):\n res.append('not found')\n elif word.strip().startswith('no ') :\n ans_tokens = tokenize(word[2:])\n ques_tokens = tokenize(question)\n boo = True\n for t in ans_tokens:\n if not (t in ques_tokens or t+'s' in ques_tokens):\n boo = False\n break\n if boo :\n res.append('not found')\n else :\n res.append(word)\n else :\n res.append(word) \n\n for s in ans:\n ids.append(questionID)\n\n #TODO remove this: \n conf1 = [y['answer_confidence'] for y in x['answers'] ]\n conf.extend(conf1)\n\n newres = []\n newids = []\n for index in range(len(res)) :\n if conf[index] == 'yes' :\n newres.append(res[index])\n newids.append(ids[index])\n c = Counter(newres)\n resset = set([k for k,v in c.items() if v >= ANSWER_OCCURENCE_MIN])\n m = {}\n for index in range(len(newres)) :\n qid = newids[index]\n response = newres[index]\n if response in resset : \n if qid in m :\n m[qid].append(response)\n else :\n m[qid] = [response]\n # queskeys = set(ques.keys())\n\n # m = {k : v for k,v in m.items() if k in queskeys} \n return resset, m\n \n\n\n\ndef tokenize(sentence):\n sentence = sentence.strip().lower()\n sentence = (\n sentence.replace(',', '').replace('?', '').replace('\\'s', ' \\'s'))\n tokens = SENTENCE_SPLIT_REGEX.split(sentence)\n tokens = [t.strip() for t in tokens if len(t.strip()) > 0]\n return tokens\n \nanswer_preprocessor = AnswerPreprocessing([train_questions_file],[train_answers_file])\n\nquestion_preprocessor = QuestionsPreprocessing(glove_file_path, [train_questions_file], num_occurence)\n\n\nquestion_args = {\n \n 'embedding_size': WORD_EMBEDDING_DIM,\n 'embedding_weights' : question_preprocessor.get_embedding_matrix(),\n 'hidden_size' : 512,\n 'vocab_len' : question_preprocessor.vocab_length,\n 'num_layers' : 1,\n #'dropout' : 0.3\n 'dropout' : None\n}\ntext_args = {\n 'input_shape' : (512,),\n 'dims' : [1024 ,512],\n 'activation' : 'relu',\n 'dropout' : False,\n 'normalization' : False,\n 'regularisation' : 0.01\n}\nimage_args = {\n 'input_shape' : (2048,),\n 'dims' : [2048 ,512],\n #'dims' : [512],\n 'activation' : 'relu',\n 'dropout' : False,\n 'normalization' : False,\n 'regularisation' : False\n}\n\natt_args = {\n 'input_shape' : (512,),\n 'dims' : [512, 1],\n 'activation' : 
'relu',\n 'dropout' : False,\n 'normalization' : False, \n 'probability_function' : 'softmax',\n 'regularisation' : 0.01\n\n}\n\n\nclass_args = {\n #'input_shape' : (2560,),\n 'input_shape' : (512,),\n 'dims' : [ 2048, ANSWER_DIM],\n 'activation' : 'relu',\n 'dropout' : False,\n 'normalization' : True,\n 'regularisation' : 0.01\n}\n\ndef get_question_module(model_type, kwargs): \n if model_type == 'GRU':\n return MyGRU(kwargs)\n else :\n raise NotImplementedError('Unknown question module')\n\n\n\nclass MyGRU(keras.Model) : \n def __init__(self, kwargs) :\n super(MyGRU, self).__init__()\n self.embedding_weights = kwargs['embedding_weights']\n self.embedding_size =kwargs['embedding_size']\n self.hidden_size = kwargs['hidden_size']\n self.num_layers = kwargs['num_layers']\n self.vocab_size = kwargs['vocab_len']\n tmp_dropout = kwargs['dropout']\n self.dropout = tmp_dropout if tmp_dropout else 0.\n \n #TODO Check trainable\n self.embedding = keras.layers.Embedding(input_dim = self.vocab_size,\n output_dim = self.embedding_size, weights = [self.embedding_weights], trainable = True)\n #TODO : see if we use args like : use_bias, activation, initilizers .... see also reset_after\n self.seq = keras.models.Sequential()\n input_size = self.hidden_size\n for i in range(self.num_layers):\n self.seq.add(keras.layers.GRU(units = self.hidden_size, dropout = self.dropout, recurrent_dropout= self.dropout))\n \n \n def call(self, x):\n x = self.embedding(x) \n x = self.seq(x)\n return x\n \n\n\ndef get_non_linear_function(func_type, kwargs) : \n \n if func_type == 'activation':\n return Activation(kwargs)\n else :\n raise NotImplementedError('Unknown non linear function')\n\n\nclass Activation(keras.Model):\n #WARNING must be positive (num_layers), dims size > 1\n def __init__(self, kwargs):\n super(Activation, self).__init__()\n input_shape = kwargs['input_shape']\n dims = kwargs['dims']\n activation = kwargs['activation']\n dropout = kwargs['dropout']\n normalization = kwargs['normalization']\n regularisation = kwargs['regularisation']\n regularisation = regularizers.l2(regularisation) if regularisation else None\n \n num_layers = len(dims)\n self.model = keras.models.Sequential()\n for i in range(num_layers- 1):\n if input_shape and i == 0 :\n self.model.add(keras.layers.Dense(units=dims[i],activation=activation, input_shape = input_shape, kernel_regularizer=regularisation))\n else :\n self.model.add(keras.layers.Dense(units=dims[i],activation=activation, kernel_regularizer=regularisation))\n if normalization :\n self.model.add(keras.layers.BatchNormalization())\n if dropout:\n self.model.add(keras.layers.Dropout(dropout))\n self.model.add(keras.layers.Dense(dims[i+1]))\n \n #TODO : May be we need first dimension if we use this. 
Done ?\n \n def call(self,x) :\n return self.model(x)\n\nclass NewModel2(keras.Model) : \n def __init__(self,question_args, text_args, image_args, att_args, class_args):\n super(NewModel2, self).__init__()\n self.text_model = get_question_module('GRU', question_args)\n self.resnet = keras.applications.ResNet50(include_top=False, weights='imagenet', input_tensor=None, input_shape=(224, 224,3), pooling=None)\n self.text_net = get_non_linear_function('activation', text_args )\n self.image_net = get_non_linear_function('activation', image_args ) \n self.DImage = keras.layers.Dense(512)\n self.merger1 = keras.layers.Multiply()\n self.reshape1 = keras.layers.Reshape((7,7,2048))\n self.attention_net = get_non_linear_function('activation', att_args) \n self.merger2 = keras.layers.Multiply()\n self.classifier = get_non_linear_function('activation', class_args) \n self.merger3 = keras.layers.Multiply()\n self.optimizer = tf.train.AdamOptimizer(learning_rate = 0.1)\n \n def call(self,images, questions):\n timeF = time.time()\n images = self.reshape1(images)\n images = tf.math.l2_normalize(images, axis = -1)\n questions = self.text_net(questions)\n questions_att = tf.keras.backend.expand_dims(\n questions,\n axis=1)\n questions_att = tf.keras.backend.expand_dims(\n questions_att,\n axis=1)\n questions_att = tf.cast(questions_att, tf.float32)\n questions_att = tf.keras.backend.tile( questions_att,(1,7,7,1))\n att_imgs = self.DImage(images)\n common1 = self.merger1([questions, att_imgs])\n att_vec = self.attention_net(common1)\n att_vec = tf.keras.backend.tile( att_vec, (1,1,1,2048))\n new_imgs = self.merger2([att_vec, images])\n new_imgs = tf.reduce_sum(new_imgs, 1) \n new_imgs = tf.reduce_sum(new_imgs, 1)\n new_imgs = self.image_net(new_imgs)\n common2 = self.merger3([new_imgs, questions])\n res = self.classifier(common2)\n print(\"model call time {}\".format(time.time() - timeF))\n return res\n \n @tf.contrib.eager.defun \n def __func(self,tensor,fn):\n return tf.map_fn(fn, tensor, parallel_iterations=49)\n \n def loss(self, result, labels):\n loss1 = tf.nn.sigmoid_cross_entropy_with_logits(logits = tf.to_double(result), labels = tf.to_double(labels))\n s = loss1.shape[-1]\n loss11 = tf.reduce_mean(loss1)\n return loss11\n \n \n \n \nmodel = NewModel2(question_args,text_args, image_args, att_args, class_args )\n\n", "WARNING: Logging before flag parsing goes to stderr.\nW0831 14:49:31.795946 140593693730688 lazy_loader.py:50] \nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\n * https://github.com/tensorflow/io (for I/O related ops)\nIf you depend on functionality not listed there, please file an issue.\n\n/usr/local/lib/python3.6/dist-packages/keras_applications/resnet50.py:265: UserWarning: The output shape of `ResNet50(include_top=False)` has been changed since Keras 2.2.0.\n warnings.warn('The output shape of `ResNet50(include_top=False)` '\n" ], [ "class Evaluator : \n def __init__(self, model, question_preprocessor, answer_preprocessor, weights_path) :\n self.resnet = keras.applications.ResNet50(include_top=False, weights='imagenet', input_tensor=None, input_shape=(224, 224,3), pooling=None)\n self.model = model\n self.question_preprocessor = question_preprocessor\n self.answer_preprocessor = answer_preprocessor\n if os.path.exists(weights_path) and os.path.isfile(weights_path) : \n 
self.model.load_weights(weights_path)\n \n def evaluate(self, image_path, question) : \n\n preQues = np.array(self.question_preprocessor.preprocessElem(question))\n preQues = tf.convert_to_tensor(preQues)\n preQues = tf.expand_dims(preQues, 0)\n img_raw = tf.io.read_file(image_path)\n img_tensor = tf.image.decode_image(img_raw)\n img_tensor = tf.expand_dims(img_tensor, 0)\n img_tensor = tf.image.resize_images(img_tensor, [224,224])\n ten = resnet(tf.to_float(img_tensor))\n res1 = model4(ten,preQues)\n res2 = tf.nn.softmax(res1)\n argmax = tf.math.argmax(res2, 1).numpy()[0]\n answer = self.answer_preprocessor.index2Word[argmax]\n \n return answer\n \nevaluator = Evaluator(model, question_preprocessor, answer_preprocessor, pretrained_model_path)\n", "/usr/local/lib/python3.6/dist-packages/keras_applications/resnet50.py:265: UserWarning: The output shape of `ResNet50(include_top=False)` has been changed since Keras 2.2.0.\n warnings.warn('The output shape of `ResNet50(include_top=False)` '\n" ], [ "def demo():\n\n \n print('VQA Demo')\n print('Say next to go to next image')\n print('Say stop to stop demo')\n im_files = []\n \n i = 0\n while(True):\n if i == 0 :\n print('give an instruction')\n else :\n print(\"What question would you like to ask?\")\n question_str = input()\n \n if question_str.lower() == 'stop':\n clear_output()\n print('Bye')\n break\n elif question_str.lower() == 'live' :\n clear_output()\n take_photo_live()\n im_file = photoPath\n i=1\n continue\n elif question_str.lower() == 'up_im' :\n clear_output()\n im_file = get_uploaded_image()\n print(im_file)\n if not im_file or not im_file[-3:] in ['jpg'] : \n i == 0\n continue\n else :\n i=1\n continue\n elif question_str.lower() == 'up_vid' :\n i=1\n !rm videoPath\n im_file = get_uploaded_video()\n os.rename(im_file, videoPath)\n if not im_file or not im_file[-3:] == 'mp4' :\n i = 0\n continue\n else :\n #take_photo(im_file)\n clear_output()\n testphoto()\n im_file = photoPath\n continue\n elif question_str.lower() == 'last_vid' : \n clear_output()\n exists = os.path.isfile(videoPath)\n if exists :\n testphoto()\n im_file = photoPath\n continue\n else :\n i = 0\n continue\n elif question_str.lower() =='img' : \n clear_output()\n print('get image url')\n im_file = input()\n exists = os.path.isfile(url)\n \n if not exist or not im_file or not im_file[-3:] in ['jpg'] : \n i == 0\n continue\n else :\n i=1\n\n else :\n if i == 0 :\n clear_output()\n continue\n \n print(evaluator.evaluate(im_file, question_str) )\n \n \n \n \n", "Bye\n" ], [ "demo()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e72495d7606d08dcfc7eefe1e53214ea38614f11
57,734
ipynb
Jupyter Notebook
sandbox/GS/notebooks/OCCA_pot_dens_surf.ipynb
geoffstanley/neutralocean
5e93c9732d3a64bf4c5dcb81a6d2f47839b0c6f7
[ "MIT" ]
10
2022-03-03T16:00:01.000Z
2022-03-14T18:51:08.000Z
sandbox/GS/notebooks/OCCA_pot_dens_surf.ipynb
geoffstanley/neutralocean
5e93c9732d3a64bf4c5dcb81a6d2f47839b0c6f7
[ "MIT" ]
null
null
null
sandbox/GS/notebooks/OCCA_pot_dens_surf.ipynb
geoffstanley/neutralocean
5e93c9732d3a64bf4c5dcb81a6d2f47839b0c6f7
[ "MIT" ]
null
null
null
303.863158
18,168
0.932968
[ [ [ "#%matplotlib notebook\n%matplotlib inline\n\nimport numpy as np\nimport xarray as xr\nimport matplotlib.pyplot as plt\n\nfrom neutral_surfaces import pot_dens_surf, delta_surf\n#from neutral_surfaces._densjmd95 import rho_ufunc", "_____no_output_____" ], [ "OCCA_dir = \"~/work/data/OCCA/\"\nxrs = xr.open_dataset(OCCA_dir + \"DDsalt.0406annclim.nc\")\nxrt = xr.open_dataset(OCCA_dir + \"DDtheta.0406annclim.nc\")\nlon = xrs.Longitude_t.values\nlat = xrs.Latitude_t.values\nS = np.require(xrs.salt.values[0] , dtype=np.float64, requirements=\"C\") # pick first time\nT = np.require(xrt.theta.values[0], dtype=np.float64, requirements=\"C\") # pick first time\nZ = np.require(xrs.Depth_c, dtype=np.float64, requirements=\"C\") # DEV: currently ignoring distinction between Z and P, until Boussinesq equation of state is ready. \nS = np.moveaxis(S, 0, -1) # Move vertical axis to end\nT = np.moveaxis(T, 0, -1) # Move vertical axis to end\n\n# Select pinning cast\ni0 = int(len(lon) / 2)\nj0 = int(len(lat) / 2)\nz0 = 1500.", "_____no_output_____" ], [ "# Here we discard the unneeded time axis.\n%time s, t, z_sigma = pot_dens_surf(S, T, Z, 0., (j0, i0, z0), axis=-1, tol=1e-4)", "CPU times: user 1.91 s, sys: 20.1 ms, total: 1.93 s\nWall time: 1.93 s\n" ], [ "fig, ax = plt.subplots()\ncs = ax.contourf(lon, lat, z_sigma)\ncbar = fig.colorbar(cs, ax=ax)\ncbar.set_label(\"Depth [m]\")\nax.set_title(r\"Depth of surface in OCCA\");", "_____no_output_____" ], [ "%time s, t, z_delta = delta_surf(S, T, Z, (), (), (j0, i0, z0), axis=-1, tol=1e-4)", "CPU times: user 1.63 s, sys: 28.1 ms, total: 1.66 s\nWall time: 1.66 s\n" ], [ "fig, ax = plt.subplots()\ncs = ax.contourf(lon, lat, z_delta)\ncbar = fig.colorbar(cs, ax=ax)\ncbar.set_label(\"Depth [m]\")\nax.set_title(r\"Depth of surface in OCCA\");", "_____no_output_____" ], [ "fig, ax = plt.subplots()\ncs = ax.contourf(lon, lat, z_delta - z_sigma)\ncbar = fig.colorbar(cs, ax=ax)\ncbar.set_label(\"Depth [m]\")\nax.set_title(r\"Depth difference in OCCA\");", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7249940c2cb52f71f69856869ffc440f83e4000
124,309
ipynb
Jupyter Notebook
Exp1_Fourier_Series_09112020.ipynb
abdislam/ee472TeleLab
dc5da78fab9b8ae8fa24bc40a0245e98852a26aa
[ "BSD-3-Clause" ]
null
null
null
Exp1_Fourier_Series_09112020.ipynb
abdislam/ee472TeleLab
dc5da78fab9b8ae8fa24bc40a0245e98852a26aa
[ "BSD-3-Clause" ]
null
null
null
Exp1_Fourier_Series_09112020.ipynb
abdislam/ee472TeleLab
dc5da78fab9b8ae8fa24bc40a0245e98852a26aa
[ "BSD-3-Clause" ]
null
null
null
535.814655
26,562
0.939208
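The record that follows (Exp1_Fourier_Series) synthesizes waveforms from truncated Fourier series and examines the Gibbs phenomenon; its manual supplies Matlab code. As a quick, language-consistent reference, the sketch below computes the odd-harmonic partial sum for a unit square wave, x(t) = (4/pi) * sum over odd n of sin(2*pi*n*f0*t)/n. The fundamental f0, the harmonic cutoff N, and the time grid are arbitrary illustrative choices, not values from the lab.

```python
# Partial-sum synthesis of a +-1 square wave from its odd sine harmonics
# (a generic NumPy sketch; the lab manual itself provides Matlab code).
import numpy as np

f0 = 1.0                      # fundamental frequency [Hz] (arbitrary choice)
t = np.linspace(0, 2, 2000)   # two periods of the waveform
N = 9                         # highest odd harmonic retained

# x(t) = (4/pi) * sum over odd n <= N of sin(2*pi*n*f0*t) / n
x = sum(4 / (np.pi * n) * np.sin(2 * np.pi * n * f0 * t)
        for n in range(1, N + 1, 2))

# Gibbs phenomenon: the overshoot near each jump settles at about 9% of the
# jump height (peak near 1.18 for a +-1 wave) no matter how large N grows.
print(f"peak value with N={N}: {x.max():.3f} (ideal square wave: 1.0)")
```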
[ [ [ "# FOURIER SERIES\n\n## Objective\nThis experiment aims:\n\n1-To verify Fourier series through synthesizing certain wave shapes using orthogonal sinusoidal signals.\n\n2-To investigate Gibbs phenomena.\n\n## Introduction\nFourier series is an important technique that is widely used in mathematics, physics and engineering. It is a mathematical operation that represent periodic functions in an infinitely sums of sine and cosine with different frequencies. Fourier series has many applications like harmonic analysis, data compression, and solving partial differential equations...\nIn this experiment, we will construct certain functions using sine and cosine signals of different frequencies; this operation is called Fourier synthesis, which is the opposite of Fourier analysis. Also we will observe Gibbs phenomena which arise from Fourier series representation of discontinuous signals, such as square wave.\n\n## Procedure\nUsing the table below, try to synthesize the three signal waves from their Fourier series.\n\nThe ```Matlab``` code is provided below. Nevertheless, you may write your own code.\n\n### SYNTHESIS OF A SQUARE WAVE\n![Sq.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAnMAAADOCAYAAABVe4SMAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsIAAA7CARUoSoAAAEq/SURBVHhe7Z0LvE1l/v+fan7VdDGlkoqkVBppRkahC7+MIVSoUEqh9EOhyzRNKV10R/cLGkKEaKII3QYZkcvIpZLSdJHSxaXL/GfK/u/311pm2+1zzj63vdezzuf9eu3XOWftdfZez1rPWs/n+d6enf71r38lvvrqK3fwwQe7VFatWuW6d+/u3njjDXf00Ue7evXqud12281t3bo12EMIISouO+20k/v3v//t5s2b5z755BPXsGFDV6tWLdueSCSCvYQQomh23nln9//+3/9zCxcudD/99JN79NFHXatWrYJ3swAx9+mnnyafPTuycuXKRKNGjXgi6aWXXnrppZdeeumVg1fNmjUT06ZNC9RYdmRlmTv11FNd27Zt3e67764ZpxBCJMECl3x+utGjR7tly5a5zp07u+QEWJY5IUSx4bnxww8/uAkTJrj169e7Rx55xLVp0yZ4t2iyEnNXXXWVGzx4cPCOEEKIkPPOO88ewOPHj3cdOnQItgohRPHp3bu3mzp1qnvssceKJeZ2Dn4WCn7cb7/9NvhLCCEE8Fwkbg6+++47+ymEECVhy5YtZu0vCVmJOSGEEEIIEU0k5oQQQogIkin28scff7SMx7lz5xZpxQn3nT59uizHMUdiTgghhIgQBMAPHTrU3XbbbW727NnBVuc2b97sRo0aZbHs1atXd7vuumvwTmZ22WUXV7NmTffZZ5+5Bx54wBEfL+KJxJwQQgiRA95++2338ssvu//85z/Blp/z1ltvuYcfftgdc8wxrnbt2pbhCNQeI9GGzOlOnTq5ww47zGqTFQYZkgcccIAl5uy1115uyJAh22M8RbyQmBNCCCHKGVymb775pgkykgozQVH+v/zlL+6QQw5xp5xyijv77LNdkyZN7L3ly5e7OXPmuI4dO7oqVarYtmzZe++9XcuWLc3iN3ny5GCriBMSc0IIIUQOwCKHpS1TLBwWsyVLlriXXnrJ7bPPPm7Tpk3mJv3lL3+5XQjiZmWlkXRwn65bt267xY9YuY8//niHKhRHHXWUuVxnzJhhVj4RLyTmhBBCiByAWxSBlgnE2Lhx49z333/v5s+f71588UUrVQEbN240F+1xxx33s////PPP3cSJE60eLPF1CLlp06a5G264wb377rvBXtsgzo74OSx0Il5IzAkhhBB5pkaNGhbXhlv1iiuucO3bt3d77rmnvYeVDktb5cqV7e9UEGeNGzd2v/rVr9ysWbPczJkzLe6uWbNm7tBDDw322gaLA/Adn376abBFxAWJOSGEECLPEC+3YcMGd8QRR5gFbbfddtue4MB7vDIlPPz2t791v/nNb1yLFi3c+++/7xYsWOAuu+wyd9FFF1nyQypY9XC9YukT8UJiTgghhMgzX3zxhfvwww9NzLEOeirher8IuoIgKeKDDz5wDRo0KDBBAhfsHnvs4fbdd99gi4gLEnNCCCFEnkHIUdj3oIMOCrb8FwQYLlbcrZlApBEHR925TMkVIQhGLHPpFjvhPxJzQgghRJ6h9Mj+++/vqlWrFmz5Lwi5OnXquFWrVgVbtkGyBEkSZMAi6Ii7W7lypQk2YunSs1aJuyMOr2rVqsEWERck5oQQQog8s2LFCnN/7rfffsGW//I///M/rl69eu7rr7820Rcyb94817dvX7do0SKrS9e6dWtLghg+fLjF36VCtuxHH33kWrVq9TM3rvAfiTkhhBAij+A+xZJGvFylSpWCrTtSv35917x5cytfEhYdxl1KuZJ27dpZoeEzzzzTdenSxerJ1apVa4cyJlOmTLG1XMmSFfFDYk4IIYTII2ShEi9Xt25ds8JlglUcLr74YnO5sm7r6tWrLZO1X79+7thjj7V9sOx17drVtW3b1uLsgDp0o0ePtp89e/ZUvFxMkZgTQggh8sQ333zj/vrXv7ojjzzSMlEL48ADDzRBR/04rHlFreRAMgT7Ye3DYnfCCScE74i4ITEnhBBC5AGW33rnnXcscYHacNkkJmBZO+ussywhgpIlRYHwI5bu8MMPD7aIOCIxJ4QQQuQBYtp+/etfu/PPP99+ZgsiDjdqpiLCqbAfyQ4FuW5FfJCYE0IIIfIAYoxluML4NiFKisScEEIIIYTHSMwJIYQQQniMxJwQQgghhMdIzAkhhBBCeIzEnPCWrVu3FrqodC6hIjvV1YUQQohcIzEnvAPR9Pe//90Wl863gOL7OY777rvP3XPPPW7atGlu8+bNwbtCCCFE+SMxJ/IGBTN5ZcuPP/7o1
q5d6x5//HHXu3dvq5penP8va/juZ5991j344IPuiy++sIWs77zzTveXv/zFjrU0/Pvf/3ZfffXV9jUY8wHf//333wd/CSGEiCoScyLnIFRWrVrl5s6d6z755JOsXaVUSp8zZ45bv369+/rrr60gZjYV0MsL1lN86qmn3Mknn+wGDRrkHnjgAVsuZ+LEida+krBlyxa3YMECN3z4cDdz5kz7u6SUxAWNQF2xYoUbM2aMmzBhgonUsoLjiYpbXAgh4oTEnMgpuCWff/5598orr1jBzOIUy6SKOWLp8ssvtwWmIV/igO996623TFi2bNnS2rLbbru50047zQTY4sWLgz2zA9E0a9Ys9/DDD5uIYj1FluHZa6+9gj1+Dt/zzDPPuMcee8x9+umntm3jxo0mMLEYFseqxmfhumYBbyyLa9ascdWrV3f77LNPsEfhIND5//vvv9/ddttt7tZbb7XXHXfc4QYOHOgGDBhg7RJCCFH2SMyJnDJv3jyLKzv66KNd06ZNbeHobK1r/M8xxxzj9txzz7xa5AA36ocffmgirkqVKsHWbQth894HH3wQbCkcRNPIkSPd4MGDTeBSDf7cc891V1xxhVn8WIonE4hiBCPnc/z48W7q1KkmyF5//XUTVRs2bMhK6LIf7mpEFwIQt+7pp59ubuwzzjgjKzFHIgrHsnDhQvs5Y8YME6ffffedCXdEKr9zPPm+bkIIEUck5kTOYEBHyFWqVMk1aNAg2Fp8fvrpp7y76xBsxJSxtiIWuZDKlSubtZH3CgIhtmTJku2uWeIAsTRefPHFrkePHq5Ro0Zu7733DvbODO1HOF566aXud7/7nXv55ZfNLfvDDz+4fv36uY4dOxZo9eT8vffee2aBI8YP8XfYYYe5Tp062WLff/jDH3YQqEXB53G8iEAE6P/+7/+6P//5z+7KK6+0RcERp1jp+CmEEKLskZgTOQMrEJaogw8+2CxQPoOYwrWYDsKOV0Fik+0vvvii+9Of/mSuUaxfWOHOO+88szr+4he/CPYsHCx27F+nTh0TxlgCiSlETB111FFmUStoEe7ly5e7a665xv3tb39zjRs3Nrc1Ig5RWJhbtyA45mOPPda+9+OPPzaRecghh5irmGvOAuIIy2zbJoQQonhIzHkOVhFERTaWqlCA8D/5gJId3377rQmRgoSGL2CRQ5DiYkwVdbSRv/fdd99gy47gZkQw8ULsEONG0gGfUxxS3ZX77befxRMiog466KBga8Fw/jl2YuqwIHK8pcm+DY/lyy+/tPZUq1bN/n7zzTdNVBJ7J0RFgedsNs9jIcoSibkIwyDL4IhAyDTYkk05ZcoUt3LlyqzFAFYZsi1Xr14dbMkdxGOlCp+SgnhIfeUDrEyIJ9yaZNaGECuG0Dv88MODLT/n1FNPdffee6+5JZctW+buuusuK7eydOlSc0UXB84nJVE4D9me21q1atl34tL95ptvttfII7s4tS3F5Z///Kf1Vc4LkCCCuxZXqxBxh4kRyUjcW2WZBS5ENkjMRRCEGXFUuMGeeOIJd/vtt7u33347eHcbBL5TvoL4K1xa2Vi6GPDZl1nj2LFjLVi+uBah0oDI4RgQdaX5XixZWJUQL/maAdOWI4880tqBBSoEsUyCxm9+85tgy8/BDYugat++vbk4cbUigkaMGGE161577TWzchV2jkIrK7F3nE9cmZQUQZwxAaBfFARCFFd3ixYtzL3apUsXsxROmjTJhB0TBErGFNdahzClHyLmsP6S6Rta5UpTYkWIKMN98uqrr1rGNtncJAEVdu/wDFTIgShrdunfv//NWBfSA66JdSFDjoc65SB+//vfu1133TV4NzowoDEQMrjyygYGQuKVsE4hCmh7tv+bCziuJ5980sQCooBjJZMztHAQG4Ulh5gpAt1xZWVroWLQRkhggWHlAiwnDOy5gAE9/M6TTjqp2A80jhlrz/Tp0y3zEjclLkY+B7dmrq8hrsqPPvrIat/RhxDc3DMkMBDsX5TA5n3+74gjjnDHHXecXRdEHOcIgYjrlGuTfp4Qb1jROA/sz73JPTB79mx7H0G3//77ZxX/9stf/tLconw/ghABiZjk83ku8P3sUxRYIh599FG7HohTrgWikM8I3fq56me5hOfPc889Z/fkWWed5erVqxe8Ew04PvoL4sL3F/dDFGGMJImIZzX3IfcxSUSZjvcf//iHxbeSJMT7XJ9MbY36i+cE93i2406U4Rpkex1ob3mGCHEcVAPA69amTRuLQ86WnZIz+ATm4fQHLUVPu3fv7t544w0rU4DpOJvBIdfgYmTQoNbX8ccfH2wtGDrh/PnzzQKCZQphQfYfAehREKt0mIceeshueCxyZH7yMGZA5+ZhYLzxxhtt8GQWWNJrgjvvlltusQfKVVddZYNwefPZZ5+5Xr16mXWQUhi0rTjQF7lunJvQaoXQaNiwobv22mtz0oZ0CPintAhCiu+nH7Zr167AmLmi4GZmIkWZD657s2bNfuamRBRTD46M1D59+lgSBEKSzFgmKpxjEhJKIm7pX4hBrHw8UCgfg9gsCu4l+i1C7oILLrDjoPAwpUnYhhUyis+P0kIM6EUXXWTlXcgO7tq1a/BO/kFIY4Gn5E1qxrVv0JcYRHlOMemIGjyzuW8xDHTu3NmdeeaZNmZmYtSoUZZBzlhVWKJUlOEZwWSRTHomgD6DoQTXOJPywkRaeJ0wnoR1RcsDnu3oEco5cVwIuqxBzCUblDzWHUmKpERykKQFiWTHTCS/JHgnOiQH9ETy5khUq1YtcffddwdbC4b9kwOU7XvTTTclkoNuIimSEsnBMJEUCsFe+SP58E0sW7Ys0bx580SPHj0S8+bNS6xfvz54dxuLFy9OtG7dOjFixIhgy44kO2UiOftLbNy4MdiSSCTFTyI5OCeSAi7Yso2JEyfaOZg7d26wpXxJDnyJ5AMg0aVLl5+1Kxvoq0kRa//75Zdf2ovfk5ORRPIBE+yVezgujiMpwhL/+c9/gq2lg/bwuZnaxbbNmzfvcI0h3FZW54LvTw5UwV+FQ9/9+uuv7X9CkgNcIikMd9gWN3guJoVqIik2Crwn8wX9gXuNZ3gcXjfccIP1s6iyfPnyxCmnnJKYMmVKsOXnjB49OpGcZGVsn08v+nty8mhjqs8kJ5yJypUrZ2xjptcdd9xRZs/4THDPduvWzTTN1KlTg63Z4bVlDhVLtXmCyVu1auWGDRtmVp+CSA5yVhoDiweWCywP1MLCXYUVgVlVPsHK+PTTT1sFf6w8mOuJZ0qKt2APZ8eJC5aq+rjzUiEAnZkf8RvMIJg5sQ0rDrN0ylHUqFEj2Hvb9/3xj390bdu2tX0zmcw5Z8SoJftKsCUz/C8zzUyfEcJn9e/f3yx0WObi6HYTFYsoW+Zw+VH2Buso1lUsSL7B84RnD9bq+vXrWwxxVBNqSCzDEorHI1yhJp2kmDMvAm3AK1LUczWK4CXCK0BdTHRBQYXNfYA4R/oU9UGLuh68h9UMz0N66EtZgaZBk3DP4uno0KFD8E7ReCvmOLEIs2eeecZuEFxD
xOycffbZwR4/h/9BmHAhMJNiGr/uuuu2P4hPO+20YM/8QExYuCYnLgXcZ5izU+MZEXFh8kNYAiIEVxemfky0CNWrr77axBz/jzsZF0VqIVlMzFxb4nyoe5bppuRziAMjPq0g0zIuT24GOjk/C4Jjo5gsbeLYirOUlxBRJMpijmcibngSUpjIcf/5BmKOF8k5xJuRdR3VSSCFsd999107xkxGBcYfJtsM1IgInsuFiYeowjhx8803m/ihrSR8+QjnPgxZQkARshOG7xQE4xux0oUZLUoDYo6xn/5OSNGFF14YvFM03iZA8GAi6JgSEMw4iQshTqlJkyYFxodwAVKDNmkjgfRk3GEBy3enpDNhVSMbsVu3brY2Z2pb6HyTJ0+2zocVMT0wndle7dq1TXRRmBaxiiDEaonw4+ZLhfdZkYFrT7mMTNc3jN8i0D4p/O28p7+Iv+OcUnS2oHOIUGWVAvoTWZQkdwjhO9xDUU2AYOLKcfEMoDg0z0cfXyR4YWVkIMXiFVVLEIKeOo/EzBU02CNI8Uww3vC8ztTeqL8YL4gVZjJOvHL6uOILjKckG3IdiIOjf2Vqb+qLvldeQg54nmDMwZhG6arCqiKkU35pGeUM4gARQVYkFjXUMkKImyUE0UPtLsQIWZCo3hAU+IIFC0zA9OjRwyxX+YZZPut9Ii5pTyY47oJmc6G4Q+DS6fg8zk9hFDUzJIuWGecjjzxiM4VMLxaHZ6ZWmPsDKyqzDUQlkwMhRPnCxArvCpmVvnPiiSeaW6+g52K+YAwiIQtvCWMMAhoXJJPcguCZ66PLOxWSvDAo+JxYgygjxAhvXnklNOQSL1uAoMGcTcYdAg5hR0wINxEWOtQt4G7EwkXHo54XMWIhuB+ZXTBDwprHzZXvG+zzzz+3+nLMpDNZuOh8CCYeBqnCNB0se+zDTKOwWSyWNtqMCCwo8xFxifuaDKy7774744v3iMvDrVMQXB8shJz3VMEthCgfuKex7kdholpasIqUV5xSaWBMoeoAlQUwHixatMh+X79+fbBHPMECmW1906jCeEo7cNuXp7UtV3h5JYjfQhSwODjJDFWrVjXrHA8vanSFqxtgBcIyRU0fzMFhPBeig5uOtGrcD9x4WPAw5ecThBNCFctcQZ0LqxuJBKRSZ4K2IZY4JwjeUPRlEqq0l5kknbkgF3o48+SzOK+ZXrzHPqGIzgQJHYhqvo9rhGtfCFF+MFmjZiDFnH2H5wwTd55HUYL431NOOcWsO3gwLrnkErOEIj7jDN4uauv5bGHE4EE9T/ICGFN9x0sxh/UKixMZm/iVuXmoE0eBPSriY/LmQlEjCzcgQackcPA+AoZAZTJfcfsR+E9dLpIH8h2LwUOXeARm0wVBoWCEF7EwqSAEMfMTA0jAc/PmzW0bD3NWksDqlw6fwWfhSi1ohoXg5dyR8YP1LdMrXKWiKAsA55eHHNcPa6oQovxgwjRu3DjzVvgOgy5Z/kzkowSTa9y/nTp1slqKjEm47uIu5l544QWL38aY4itoBGLLSUjByOA73ok5BApCjHg3rEHAhcBlimWO2SiBqIg6QKSwnZ/sx4WjuCrB+HwOv7M/7+czuxKLHO0h2Lew4reUVKlbt661j7aGYO0i05XPIZkBiyMWN4QW5yY9E5mYDkQe7k8+syCwEOLeyOZVlKkayx2JFFyHgty6JYWYPESi8AfWlB0/fryV2uG+FmUPlhOeCb5DG+Iw4MaFsGSV79COOFjlwDsxx6oBmLRZLinMwEWIYUEaNGiQrQZBam+42Hcq7Ec2FNY6MnURcezP74WVNMkFxLnhOsV6WNisDusW5Q8oJ0NNuhBM/QQ7n3POOXZeEGnUO6IuDpk66YHDZLHikqXduVo5Aasox427u6yEM5/JeqJ9+/ZVLJ5HYD2nTA1L1xDbSgkNZsmi7MDywEAVFzHHizaJ/BOnfiUxlycQArgQES9h9iRWHkpdUPKCtHDcruk12ID9SC4gdoslhnixPz8z7Z9LsEwgTEhFLipDCEsaGbj4+lmcncBbSpLQ/lAIIlzZj3OVmmWKmGL5Kf6X5BBiCXMF8XvEvCAsS5sFhcWB9UNZoxaTP0u0pVoqKyLMlFknFtdavgKwCQjH0oaltCCob4h7npIG1Hm86aab7N5kkkUSkygbmNRRX44Jnu9QGotiu8SoifzD2EF4Ur5Dk0oDniQSIDGCFBQz7hNexszFDQZhLBQU9aVWWzawZifWDB7YDI7ZmLyZhWD94wakA5MYUtbuzsJAyOHe5Zhxy5YGxBwv4iIprEhmlQ8ZSYgsRChuflzjvIhp4if1hVhjtbgzRVzXEyZMMCssP7m+odU6E1g3EE0kAYUB5fQNYiixbhY3OYUSOMRmUgiasAX+v7B1dxHefBelDZiE8CLmiPpbcQjWjwqIZZ4pxHX5Dvd3NhNdkRvwIOENy+X4UdYwXmAECmuz+o7EXJ7BqsYyMMS0sfROYcuRpYOVkUKlJDBkI47osNyEWCbzsUAyMXMIFR7IpX0IMJMiLpCadTVr1vSicCXiCesTNflwMbJcC+IHi8P1119vCysvXrw42LtwCDxGEFE2hv8lhpISO7ibWf6tsALYZG5TuR3xRQwpkJnGEmv8nY2FM5wY4Orn+wlZ4KH4f//3f7YSSGHJMKwqwvVPTfQhM52HKxY9nzPkogQTPEIp4pA5jkehsNptIrdgQCAW23dXK3G7FLL3vR0gMZdncIviFiYOriSrIiBqiD/L1irFIJ8v03i2iRLZgDANZ+mIxKjH0oTJH23atDHXN0kq1KNCUFHriCw4hB3vFyR0aSOueOLNWI8Y8UWWMlYt4kSJI0WsF2a9CFfsIPsbiw3xagg5km/IyuM4Clq2j+9H6C1fvtxEJO5RShRgJaYd/P/xxx9fqFUQgcHATL9NPU7cZ/RLJjc+Z8hFCSymJHlh9fcdkrVoS9SyWSsqrEFLcp3PoS0IOCajo0ePzsqzFXUk5vIMFiVi2gobAOMCIgExyWCdKr4YdLDI4GLE2pTpxXvEFeZzoMfCgegp6jjZJ92KwHXGrc1yT1hLKDGDmELQIK6oh1ijRg3LZi4IyjMQp0LCB30G6x5WPWoskvCTjUhGMFFNn//BRU05G1ykuLDYhhgsyMpJbUdK+fC9xD+yri9rTBJ3yrFnYx3FMst55PqnHi/HxYv3oy7MfYHzSJ/EEuo73CNYrcOwAJFfeIZRXsr3e5XEw7DIvu9IzImcgeBAwGCZSX0o4yJkhQ6sQliZMr3IusWdl8/yI8SZEVCOBaygY+U4iV1LXW0EEC4INywLPARxDfMAQfyxAgdFnosCtyaDM5nKxFZSXLowd2omsGiGwb4IMI6LiQSxVUXBdSOmjiQkBCHHgfgsjqUVqyOijbanxgZiucS9ynuljacUO1Kc6xNV6C9xiGuKC1yPOAiguLQDdHeInEFQPC45Yi2wMoUQ90aG7fTp0y0zNdOL5ADckpmCuRmswld5QkwY7kVKaFDapaDjxB1UkDii7Zj0iRfDuoXLknqAvIqCWDjWyMWtTrzdkCFDLJGCxIH
ixnzwAMNlyzljkMzm3FHWhyxUhOSYMWPMzUusXHEspljvKIWDdQ6LbAguaMQc5yUOmWVRgWuGFdh3mDRQwaAieDB8gJhYJqDl/cwtb2hHtl6NqCMxJ3IKJQaIFyPWisLPmOuxxhA3RnmYgl5YobjxChrosfKUd2YV380DjGPJdIy8eI99CopbI/kAEDSIGdqPQEXkFCWIsOBhEcPVyQtXK9mriDpWM2FZtWxjWLAyEsSMiAzLgWAdKyx2hPOLoMW9ytJFxMeRhTt48GCrzo+FFYFYFFgEaXtqGRLc7FjkatWqJQtMGcHk6dJLL7UYTd9hEnjRRRdJzEUE4mN79uxpzyRf4TlDJQTKrMRhAqmnpsgpxM1R24f4LNZbxFJVEjM3/0PwPzFflPbAOkTtvBUrVrgNGzYEe0UL3JRY7rA+UWsP8Ya7GUFHuRLORTYgfrG28DClMDQCD0snVkOWqkNUIczSwfK1atUqC4ifPXu2xe9hFSUWiaV5SKwgfqQoEHWI1vPPP9+sc2RHkxE2fPhws9wRg1dYbBO1DcmmJvgYUY/ApXA34pCXKBsQx1hxs4lljDo8N+LQjrhAqAUTUt8tWiQg0hZZ5oQoAczmGjRoYCUsWJqsJDcSYo4Mt7vvvttcjbhhEBH8jSiJIghNLFcUc0YQ8SDB0kXcHJapgrJICwPLGkVhySylTA3uSrJTM2X9IeYQkyyhRQkcRBVJGWSSshIKrs/iHgPnHWFOPCO1C2kXwrSwosXECxJXyGyY60VCBwKXQtipBa5F6WCywJrT6fGbPkISB21ReZJowISRiZjPZYQYQ4gBZjKbGr/rK7v079//Zh7i6eZrrBvMlplxM3vHPaZYFlGWIOJKMyPif3HPkpVJRiXuWyw+1N2LYqV4rGVYw7CkYTHBwkb2KJmkHHtpjhm3LiLp5JNPNrct93MmSwZijZg39sPNwPlDXHJMiDuOqSQg4hCWjRo1svqHiLzCnhe4WhH0fD/Plw4dOpi1zze4ps8995wlpiCmub5RgZhMxDLCnpqMPkOMKqEE9FufXa2IB7wIrVq1KjAUwweI3UUEUdvSV12AmGMFIeKWiceMQuIVzxMm3HhQTj/9dBsbskWWOeElCLmwaDLLsdHxw+XZsHZFEYQbwjNVtBFnhwBC/JQFPJCILcwUy8J7PBwQc4ivcBvJGoiQsnoo075sLHy0nbp6WAfT1w4WZQOJMSUJY4giKlsTHehXcbBm0Z/iUDAYJOaEECKGMOHBiuVzkHoIVizaEk5CRH6hJBLXozSelSiAh6Q4RfejjMScEELEEAYp4lIJkfEdXPFks8qCGw3wghCr67OrmDATyj21bds2Fsk1EnNCCBFDcJsTCxnVsIPiwNq9xHTGwcoYBwjNIPPcd0spCXiEnsShHJLEnBBCxBAyDcN6gr5DQl4+l/ITO8LSfixb6HO8GfFyrKpDW+IQNycxJ4QQMYRaf9QdZMUS36HcDjUMs6mDKMof6lKy2L7PC+0j5ljBZuLEiVpoXwghRDQh2xCrHMWqfQcRR10zLbQfDb744gurJem7RYt2UHw+DlnSEnNCCBFDyNAjbi4Wwd0772zxWXHIOowDlDSiX/l+PeJyf4DEnBBCxBQW2qeQtu+QxUphbZ8LBseJI444wmpT+r42K22gyLpKkwghhIgkCB/KeVCU2XdYaYC1gCXmokH79u2tNInvWaAtW7Z05557rrerWKQiMSeEEDEEV1ilSpW8rgUWgmVOZUmiA9eDwsG+g4hj9R1Z5oQQQkQS1nkkCzQOC+1TBkML7UeHpUuXujfffNPrhfaBtXKXLFkSi6XJJOaEECKGkPk5atQo99JLLwVb/GXBggVu5MiRKk0SEaZMmeImTJjgde0/MnGff/55K0+i0iRCCCEiCeUWqAMWh4GKNtCWOJSQiAOIOJ9rzIVgvY5LMeqsxBypu3HwjwshRFmy1157WWxaFCE4nZi5OMSa7b777tYWLbQfDeKy0D7t4BWHmLmdkuo6QVHJgw8+ONi0jVWrVrnu3bu7N954wxZq7tixoz24fPeRCyFEWYBY+uGHH9xjjz1mMUSstnDxxRcH7+YfrA7ENRHgXadOnWCrnxAz99lnn9l6oL6KU6yKTz75pPvb3/7mHn74Ya8zc+nvaAFKe0R1MlMUXI+33nrL4uVYnzUKE4UtW7a4q666yk2aNMk9+OCD7sILLwzeKZqsxBwLNdevX9+youIQKCiEEKUFMYdgmj17tgkNxBylQITIRJzEnCgfyl3M8YHXXnutWeYUsyCEENsgyeCaa65xs2bNMjHXtWvX4J38Q5zZmjVrzI106KGHBlv9hCWXsMjhavWVOIk5LKUYdige7GutOa7Hhx9+aIkQNWvWjEQ7yl3M9enTxw0aNCg2y14IIURZgGDq3LmzPXyjJuY2bdrk7rrrLnfkkUe6bt26BVv9ZOrUqbbOLIWDqXHmI3ESc/fdd5+ViUF4+Or2RsRxHbiHL7/88kjUYyyNmMtKiqLA45LxIYQQZQXPxaiGnnBcn3zyiduwYUOwxV+++eYbq5n37bffBltEPsFSum7dOu/DrlhonxCJOHgcVZpECCFiCBl6BKfHIQOUNijEJzpwPehbvmeBhu2IAxJzQggRQ3B/tWvXzjVu3DjY4i9kG7KGps8xc3GiWbNmrlWrVl6HXiFEWbe4efPmsRB0EnNCCBFDqM3WunVr17Bhw2CLvxx77LGuTZs2loQn8k/Tpk1NBPm8QD1i7tRTTzVBJzEnhBAisuBG8jXbMBUGXtriu1svLtCn4tCv4tIOkJgTQogYQpbe/Pnz3dtvvx1s8Ze1a9daW3xeQgohGopR30XpsmXL3OLFi71PgKBoMAWQyWz1HYk5IYSIIYi50aNHu1deeSXY4i+LFi1yI0aMsLp+PoN1kZfP7klggfqJEyd6n5Qybdo0N2XKlFhYfLOqM9e7d2+rV6R4BSGE+C+UymDVh7/+9a9lXmeO5ZJ4Dr///vs7WHWyAcGwceNGN3z4cCsYzDH6KoRoy0svvWSWIM5v1apVvVtWkmuH9efVV191K1ascD169HB77LGHl2KIxBqWsMNKyvJ1FKX2zULH9eCYx40bZ5OeCy64wOrMldZCx2ehpRo0aFCi5JByLxosMSeEED+nPMUcA8OoUaPMulbceDH2RfBQC4xECARQpgE39TOjKiw4RurMISJoC/gmgmgDx7x+/Xpry1FHHWVB96ntCK9F1NtGX6SAM/0J3UDMmc/Xg5/cH2E7SnNPMGEiqWLAgAEl0ksSc0IIkQfKU8wBgwnrvxYXBiYsJ8QDsWLCcccdt8PnYDVID/xG/EXRwsLgyrJL++yzj7UlHIh9gnPNuR0zZoyt5Xv//fdbW7AE8R7XIxQS/ETIR7WNHCtWUvoLJWP427eYM84zx7xy5Ur7SbY0rm+EKqRei+Lef+H1LAkSc0IIkQfKW8yVFwxQJEZ8/PHHtopF9erV3e9+97ufCTxRtoTLeT3yyCPmngx555137Fpw/hFI+++/f/COyBWIOiYN7733nt3X++67r6tfv35Ol4
8r9+W8hBBC+AWWk1AkpMKA0b9/fzdy5Ehzw7777rtu4MCBlmAQVWsQ7jDEZ0mslFGBcxu+Ui1ZJKhgsWNpqTfffNPdeeedtgxblCG7eM2aNZG05BYH2vHPf/7T/fDDD9b/OfeIOfrbsGHD3B133OG+/PLLYO9oIzEnhBAxhPgdLEGzZs0KtmyDBdIReC1btrRA/D/96U+uZs2aliyxefPmYK9ogYcIyydJHXGC2DPiIvGMde7c2V166aVW9gPLTJSFK5bo8ePHe71mO4J66tSp1hYmOFyLOnXq2KL7eCNPO+00N336dMuk9gGJOSGEiCFYTVhE/Ouvvw62bKNy5cpu0KBBJuaAOKEjjzzSRF5qliiLqTOQIaTmzp1rPxcsWOBef/11s+alWpfKG0QmbSGWrDCIE1y+fLktys9xzps3z+K7+Mm2qGT0Eo8FWOIQ1rVr17a/99tvP3OzcrwkSkSVDRs2WP9IteQiPhGiCxcu3H7uaR+/cw2iOFHA6ka/IrO4V69e9go57LDDLLRs06ZNwZZt7aYtYf/inuAeIQ6StucTiTkhhIgpCLp0VxjB2Yccckjwl7PBikSJRo0aWXwQ+8+ZM8c98cQT7p577jGLHYMyrsAhQ4a4GTNmuNWrV+fUxYZwLOr7cJeNHTvW4tEef/xx9/LLL5tV8pprrnETJkywQRd3WpTAdYyApnxMCGIbkRHlUjJci1Th/9FHH7mhQ4eaJfj22293r732mnvxxRet/2DBW7JkyQ6iKCqE7SCz+IADDtih/h/3BHFzlBkBJjJklg8ePNg99dRTJuC4J2644QbrZyRT5BOJOSGEiCGU8ujYsaO5iwoCkUQBWIRDnz59bFDDwkIgPsHffAZWOxa5RwRiqTj77LNtvdcw8y8XcCy4IRE6BYHLDwsXyQMHHnig69Spkw3EtO0Pf/iDa9euXU6D2bMBSw/nMbVdlF9BZOTS8llcTj/9dNe2bdvt4oeEAVz1tWrVsj5y3nnnuRNPPNFiz1q0aGGL8iOWogTWUY6N9YvT12bF4oaYI7np8MMPN8GHFfL444+360LbaGOVKlWsPh116gq7z3KBxJwQQsQQhNgZZ5xhwgaXF8th/eMf/9jBOoWrCEtbt27drDwDMEAj1shurVGjhjvhhBMs2xVRhAuwXr16NjDnMvO1bt26dhyhCxVXV7obEjFx0kknmcuM40XY0VbKsjD4sq2kJSPKi9Ddmuqu9CGjmMXpEULh+TziiCNcmzZtTCxj4UXU8R59kH2xBIc1AqMC575JkyYmwlLFHIkdWNzoM0wAgGvC/UD/ol2IOkQe90TTpk3dMcccY7Xq8onEnBBCxBgsDM8884x79tlnLfaN2Dgg0xUXGJa25s2b2zbAUsTghtuIGDREEPFRiKdUd2CuIU6PgPXJkyfbihDpYo5BFqsi7jxiz4D2YvWKmpAIQfxg1cJCF4KFERGUS8tnacE6xbnHnUqsGW2g//B7agmWqEM/x1WPOKU0W2h5RMxxT1C6hL5Em7hmTI6Y8JQVXHO+MxT5xUFiTgghYghih/IjDAzUrrriiivM7UrxXd6jFMPvf/97sz6wD4MxrxDEHlYuLF4MWqGwywfEX7GsWZcuXVy/fv3sZ2rcXwhLZZHxygDL8VJD9be//a3be++9gz2iBeeWAZwg/BCOH/cd5z6qPP300yaqUwkFP1Y52oCYO/roo0skTHIFsX3Ev+E6pb+Q2cp579mzp6tUqZIl3KRm7GLhRoDzHnGCiDoyYMsKJiQI45IgMSeEEDEE1x2DD4MOFjVeiAQG14ceesh+MhAxCBMjRNIAggkIwKe4bWiR4DP4XyxGWJJymfwAWEE4RuLhEJSU8kgf9BiQsdgBLmNqtVFughguMnpTA/ajAu46zivuY4Q0teYo2E+MX9RizFJBqNG3QisvfY0kAMQzx00fIXsVVyQ/EUpRg/5CG2gL/Rkr7syZM205Lo6ZhBqSf5hIAG0i6QEBzoSI64TY4/6g3flGYk4IIWIIYo3BJj2DE8GA6MFledlll7kOHTpYfTMSIUK3Eu4yrCuIDUBwIPqwXFBKI9fWFr6PwbSwbFTaxYCMmGCwpS1YUFjcHmGaanXMN2GMHLFWBNnjQsa9R7Yk5xwxF+XYOcQP5zNM0giFDrGNZIBiscIa+sILL5gIT40JjBKITPoW4g0xSiwmdRe5J7BiI+ZCyxzvcZ1wwSLg6GPEzBG+wPZ8s0v//v1v5gZJN0MzE+JmZ3ZD4Cnm+PBGF0IIsa221nPPPWcuybPOOsuSA6ICz3VKX2BJIAkgBJFAIgNxcgSA80I8kCzBAvC4/bB6kQSBsOB3XJoMYnwO2a1h4HuuCAfLxo0bF+h+RGBw/ATcM9Ai5BBLbCOrlTEu1yI0HdzVxF2R3cl55XhwRXKNOH5clPSjMOYvqlCmg2Mn+J++gFijDSQT4IbEgst5p8+QiYx1NN/nPh2OmZI73A8kNmBF5J7AMsc9QT/iWqB/6HPsFyYGhfcE1457AkswAq+0II4pqUM/IWOY+zRbtDarEEKUECwSUV2bFasDpSEQNYWV9PABBBADKla3qImCbEE8hGuzPvzww5GN48uGt956y9zWiA2fEjVS4XoQY4mIRpBFwRKKJR3LIP3kgQce0NqsQghR0SHrjmxC34Uc0I4w3k/kH8QPVltfhRzQl3ALkyATZZd2tkjMCSFEDMFy8sEHH+yQKekrJGTQlijFvVVkCL8iKSaMmfMRLHO0gxhQn9sRIjEnhBAxBBcwWavE9PkOrsn777//Z+vMivxAOQ+SA6KYpZotiDlKrJB0UtSavz4gMSeEEDGEwYqMVDLufIdkDoQc8U0i/5D9SXFm+pjPbNmyxdrieztAYk4IIWIIAxTiJw4uJNoQhwE3LnAt4tKv4tAOkJgTQogYQnkIFkOnnIfvkDXZvn17rzNA40SzZs2sdEauS9SUJSRAUEqFciRlUVYk30jMlREE5hI/EBeVnw1hmzVj3gbnAXdQHOIvhP8g5hBAJ598crDFX8g4ZA1ZyqyI/EPd2datW3tdezYUc9RYlJiLCAT6siZcPkUF6xyy9iEZVxUBMuVuvfVWN2nSpBILWGIuqPOzbNmyWAQ2UzX/lltusZ/Fhb7L+SCziiwxsqz4HJZXUpyQKAn0myguYVUSfA60jyOMucSb+T6Rj0s7wHsxh4i78847rQhjPtPWMf9TtTsOCj9bmCVTab24NXoYYCiySjbU3//+d1tG5b777nPz588P9vATXA7U9CrJAtlY9MaPH28Fuq+++mp35ZVXmlimOGccaiCJ3MNANXHiRFt/0ndoA1m54VqgIr8wiSejNXURet9AwLEU1zPPPKNs1nyDKGCFCi4Gg14uLBhYT7CWvPfee2aFI1OMDk0BxS5duriqVavafmznfRZ65gFEBfPVq1dbdlkm6EyhRSZsBytzrFmzxgb6fEBnp74TbeVFNXmOjWOlXcRMsPQJ5mqOkWNl/Ub+j2OnvSwLlw7/zzJDrOHHsjYtW7Y0i9SYMWO2Z95xbfkOFjsObzTOHeeU7KN8wfFxT
JwP+kE4uNAvEPK4tVhCCGg754D/4ZzQRv4v04CE5YH2skQMn3HmmWe6Tp06WaxQWCiVc0z7WYMSsIhSQyz1HAkRwr362muvuUWLFgVb/IV7g3U+uc9E/mF5NVaJysWYW17wTOZ5zFJ8cQiP8lrMIeC4GMRTUO2ci1Oe8H2DBg1yCxcudK+//robMGCALVjNQIybFcsKs2FgwMVaePvtt7spU6ZYPRv2f/DBB82amA4utWHDhtn+oXDCWkVtpXw8wDiXtPOJJ55wy5cvtzY8/vjjdlxYQNk+dOhQay9CgvXkaO+QIUNsf9b1pS2cLwRhKsRZnH/++e7cc8911apVszX9EHbcUKFw4QbDcsdnsH4e38NySSwrt3btWtsn13BNWQybtjKJoK3UvwKE7B133OFmzJhhog7hxuwV6xrngrUM6QMs1YKFId39xUOR5fJYD/C8886zZVwQc6wXCJzzV155xZZ4oQ8hFFeuXGluXY5JYk6kw71Ev8inx6KsCCeMFSkmOcpwPcJntc/g9YhDO8BbMYcFC0HFQre8uMnL0x3FYMtA/eqrr7oWLVqY5YQsMb6TQRjXGgN4aEU78MADbTuDPuKlQ4cO7te//rWbOXOmCZVUeNjioiOrBqsTg/bs2bNtICcbDVdmrsEKhlClPQSIkr3Eosl0fNqKtQkRS9s4ftb2ZZFxLAGLFy+2c8OCxMzg0q1zLAHD8jwIcKxKzz//vP2OZZOgbQYgzgdWP/bFDYto4jwQBI0AzAdcO9zCtKtNmzZ2ThChsOeee9o5Cd0OnBMWmGY7Yo6YwPBaEieYHgMUPlA4Vwh7xB/WzRA+d//997eFrRGOnBMWY27UqJH1GxZ+FiIVJmQHHXTQdm+BzzBBYoF/ng8i//Dc9zn5IYRxJg7tAC/FHJaq6dOn2w1+0kknmdBCzDEQYvUqj9kboo2BGJGDxYWgSRbYxtWI4GDdQEROaB1EnDDQ4zY78cQTTQixnh0WmXRXKx2K90455RR7YCEaAQsNWUN8Vq6hTZxHrGwIMtaw43gYGGgjbuNw3UfajTuQ/WkLIgaBzYOXfRG1mcCyNHLkSItbYB/OFaKG61qrVi3LNOJ7EXIIGyx5CGliE/MBcZFYGUO3Vbt27dwJJ5xgv9P3uLZ16tSxvzk3tWvXNjFMv6EPHHXUUSbKEHi0MRW2sT/nD7GPFe/GG280qyTnkPcbNGhgfYTjmDZtmjv00ENd586dbbvPaySK8oH78rLLLrOsQ98hI7dnz57bJ08iv/Asxnvg8ySSsYaQlo4dO8ZC0Hkp5nB3YiFj8MSChdjA/Tdq1Cj7uzR+fP430/9z4enAffr0MasIAeq4t1ItLKF1BRA2DMIIsVQxlsl6GAoY3mMw5/sRR/mchZLcgJsPAYL7kHg2rE0cI3FaWJpq1Khhf3P8vIelCDGDKEG0EFP2q1/9yoRuJogt69Gjh7v88stt38cee8wsk3xeKE44h/zOceS7LAGWOCyDCCncmxxzeJzEO3IuQrco24nzWbdunQk+BCh/E1OICEsX6FxrJgaIM76D84JAnDx5sk0c+Gw+k3PDi/8nvCBdFPoCfVyu4fKFPhKXhfarV69u3g76vsg/POcZo3x9/gB9CW/Zsccea89X3/GuBYgkBnWsNgyQDP6IOsQDViMG05J0MD4XUYhQw12aDoMPQqt79+7unnvusYGUuLHQihYOtmGnoKPwe+rDh9/D7ZlYunSptYdjoT0ImXwMeHw/IhUB1b9/f7MGYSnC9QdYprAO8T7nhePkuHF9kwjCLAfBR7A+Nwqzt1SBzO/sz364Z5s0aWICh89NjQ8keYA4Na5naM3kWvN9uSa0qGGNRchjmSQjFzhXxDnS/7AucozANtqOlRKwcnLtsTqG5y0EMRx+B/+DID7ggANskhJOGPhcXLD0iTAWimvloyiif3CvUZZGoq58oL9wD8ZloX0mT+or0YBnve8L7QMTbNriezvAOzHHYMiMAMsFVoxzzjnHZm24snB74a5MFVDZwEOPuC0C+olHyiS2EBlPPvmkmzNnjs0QiefC5B9mKiI6iP8KRQcDLa5BLFgIF/YhiJ/P4RUO5PwkKwhhgFURtzEuR4LscbFlEpaFweelCqeSwPFhfSLej3OJixgLW3jMHBMCgxuBY+cBy+88bHGLAm3nAcy5RcSkxs1hacKqyWCOW5zPYxtWBMQMNxcWKc4BcXgIvvB8IKLC48gWzkdpb1Zi5TgnXE8mDAiy0CqHK5X2k6VKPBwiDhCntAehCjz8OG+cC/ZLHZg4d0wOiAOlv/A+54TzibhDxJG1zWfjemcfJhJYQ9MTTIqCc1HaPlJaEPKcB+4pJgph4pAoO3g28Uyj3/oO9wWW+0zJYyL3TJw40Txh4UTTR3gO8kwdN25cLCYJ3om5dBjIEF9YihABxYULipAbPXq0icRLL710++CbCt+BYCM9nhfCAosVFsJQoPHwDGcriBkGd46L2TG/M+hjvUEQYIUBBlUEImIOKxbJBnwmrmRexL0UB8QVWbUca0nh+BGmiAXayvET8E/gP2CaRtAiJGgz54afuE3JTAWsSpxPRA7Zp6kxCVjaGLyJhePcv/jiixbcf8kll5hoREQSR0fwNvE+ZHhiocIyyHdlEtuFQRtoS2luWPoZrn0yUflJHwljkXBnMYngWiJcw8kEbUaMhm4u4iFJ3qAsC/0ltR38D/2Xc0E8KMkn9AeSJoDzQR0+rJj0Ec4t/Yb+Vpw+wjEiTOlvYbJOPsD1fvHFF1tpGjKlEfdMgETZwX2MWz8OAoiJDc8AWeaiAc9DxpjiTqyjBmMxY7Xv7TCSyjqRHHCTbdmRlStXJho2bEgLE717904kb6bgnWiRfFAlVqxYkUgOoonPPvsskXyABe9kx5IlSxLJQTlx7733Fvm/yQufWLx4cSI50NrPpCDZvj0pohLLli1LfPzxx4nkA8e2cQ6TgiyRFDOJpNhJJEVRIimAEqtXr04kxZ39748//ph477337MXvkLxR7POTDy/7uziMGTMmkRSZ1q6SkhzwE8kbNbFw4UJrK+eXNoVwnGxbs2bN9nO2cePGRFIA2v8C2zkXtJ+2p8J7nDvOT1IYJZLizc5X+B6fw/lMDu62jWu8aNGiRFK42N/FpUOHDolu3bptP+clgeOlj4XHyzGlwvEvXbo0kRw87W/asW7dusSGDRu2nyPaQ5vpB+F5CuFv9k1OEuyc05eSYsve4x7l/zifIfxOH0mKwmBLdvBZAwYMSCTFufWzKJAUcomWLVsmZs6cGWzxB56L7du3TyTFeGLEiBHB1mhAn+zatas923yH59qFF16YSE54gi3+wXOAPtKlS5fE5s2bg61+csMNNyT69u27fQz0EZ65PAuvu+667c/afMMYceWVVyb23XffxOjRo4Ot2eG9ZQ73KsGYWDGwehXHxYqJGIsHliKyWor6X9yqxITh5uInLrRwO1YprDNYXvg8tmHBwk2G+xDrCVYrLCqpKfa46nDZpbrtiAXk87F+FRe+G4tQca1XqfC/WNbIkqStnF8+N4TjZFtY
qgSwqGGtCr+X7ZwL2p9uOeI9zh3nB6sbmZ6cr/A9PofzSYwicI3JkMWdXhJwU6ZaBksCx0sfC4+XY0qF4yeOMkz2oB1YFrE4hueI9tBm+kH69eFv9sX6yTmnL4VJEhw//5dakoXf6SMlSQrhOHiFx5VvzjjjDGsHrhtm/KJsoN+Q/R1a1H2G/o8Vt6DMeJFbyC7mWRg+o32E5x+lnfCwpY5vvuK9mCsNuA9x3SEcEFxxISqDtIguUeojhCIQg0qIALGComxgEkXZBWoT+g4TF7LrmTSK/MOqPWeddVapJ8n5hGcgkx0mCRJznkPsEn5/MgdLY8mKErSDThpa+cS2m5ZXHG7Y0sI5iGIfwfJCDB/3ZCIO8SsRgPNIrFk+YyPLijDGWEQDYsCJe/b9XiXWm1ccnjleKRhOOB2ITD4eUoW9igqU5bMI8sdMXJRVjn3pvNl8b1m/+E6+OxM84MJ9wv3Zl7azjU4abue8ZRs8jPs5/L+ovjI93Ldu3brDdaLN/CSwnheB4KnnhH2zuYn5X/bPx/XP9sU1S4e2MZCHx03bOQdso01h0g7vhf3lp0KyXJn4kKxACZ9UyEYmYSM9AYkMXbanF8nOBCEFCE2yyeMgPqIA15Ml8Eiq8R2SdlhJpaBnocgtVBsgaclnkc14QSJYXBba34mAaLI5KP+QCi4PaqqRWdi7d29bEzPf8QoMJtdee61loBRkVWAAw/LQq1cviz0qCAYziuGy/BTrn5J1WBAMdKwRSmX+XFvwOE7ip1gNID32jJuJOCOsLOFxMbAiUonvIo6L88FnEL/VrVs3ixEoDDr16NGjLQOU/+Wzowhru1KKJtXaRv9gfViybOkf4bFTIoT9iN/jJzcxbeMc9evXz2LVCoObnfPMgyuKFk+uGdnGVPtPvV5c98GDB1sWLMdNH6FUDOVNEHL0eeKqOB/sSxwkn0EsZDqIPApHX3/99VbU+O6777b+yHmk7iJlIziXPCuYIHGuWIcWIcG9QwmhwqAWGsu5ES/J5+VrlY/iwoSB2oMMCginrl27Bu/kH56TV1xxhfXzP//5z8FWP6GEDeVJbrvtNnuW+Qj3Cu0gi5/nFOEFvsI4zCTw3nvv9TaOkecedVSZCPOMisJScTzLeW7ST1iHm1U2ssUrMceDk8GBmXthIoP3GjZsaEkFBcFgM3DgQJvxIebC+miZYF/KUVAXK9fihkGUxA7KlaTHJyBaKF/CMYXHxcBNvTqW3sLiyAOEAZwAcwKhM5VdSYUOTk0z3F1RhmQDxEequKZ/cD0RBmznRfsff/xxEy3UJSSpILQ+kaxA8Gu6SE6Hc0zJEf4v19c/Gzgu4j5ZaSL1+Ng+d+5cK1/CdgQdDy7uIQQ/4p4BhWvOvljHwtUq0mEfRD4FkxHSgwYN2v7w44GOaOQB37dvX/sexOEf//hHs6bwflFijmcQpUpIvEEo8tMHoizmsMJeffXV5sK+5pprgq1+8tRTT1nJHvoShbl9JE5i7oYbbrC+j0EkTAT0DZ5prOTD+H7zzTdvTzjLJ6URczzcvS5NUlKSFzCRvIBWxoOyF3Hh2WefTbRo0SLxzjvvBFtEcoBNXH755cFfIimWrFxLclIUbMkOyq7MmjXLSuskH4TB1oSVi0gOUDuUTuH9999/P5GcWGwvO1MYfEZywpJITiCZXAZbow/PxaiWJqH0zPXXX29lPXxn3Lhx9rz2qW+kwz0Rl9Ik99xzT+Kuu+7aXj7KR7gew4YNS9x5551WsikKcD4rbGmSkoK7DUsjpmJmsHEBiwhWFmYbYhtYJnlhlaro0D+Y/XE+ihubhiWzefPmVlon1QKIpYRixqmlU3iflTLClVKKgpIkWOewVkRhhhwHsJhg7aTItO8QHoKnKL0kkMgPhLjg/fG9NAlF2QkbSQ3X8ZUKK+ZwwbFQPAMbLqe4kBToO/wU/0XnZMdzEKXzQbwjx8N6v1GIXYkDiGKEd1j70GcIGWGykBpWIfIHIUxFhez4AOEc3CNRjIUuLhX6ziDOiFhBMujiAnEACFQJl/8SWubENrDcRq2PrF692mJCjz766B2sfqLk8CzA4kmyi++QmUvMsu7jaECGOgk29DGfwSvHKw7jZYUWc2TtUc2f4PZwcfTyggE0F1BUE4sjAf9RALceNwvBsvkCNyAZcPme1fPg43zg7uVFmQXc/Ll8ICKUyN5lNY2ouBYYqF955RVbHYWXKBvIwr/11lstecB3pk2bZkHqCDqRfx599FFLgvI5dIXn7vDhwy0ZJQ5hSRVazOGGoIo1qpxsNKwVZQ0zY7KXyAouj89Ph6zXIUOGFJrJmwuIxyIrlrpkEyZMcCNHjrQBOx+ijswrsvryXa0c69ODDz5o2Zo8CMmmJgMyl7WziHGhBAglHkqyFFh5EC7jxZJ6vpQk8QGEO5OFONRmY7D1PbaZ6xEXNzHPcSZhPlvRuRa0IZ+GhrLE256FquYGZ2aAcOB3XvxeHBcS5TpI/6Wkx9NPP22z2bIACwzlXcaNG2e1bGbNmpUTCwzxRgyI+Q5M/fDDD93QoUOtTAhB8AgpygrMnDkz5yZtgqaxWOb7wcNSVVhJVq5caXGan3/+uU0ocnmtOAck/lSuXDnv54N7l6LCCH6CkJs1axa8I8qKsL6g79AGrOs+B9xDHNx5QL+KQ5xZXNoBXtWZAwTR2rVr3bx587bXfcOlGGZx8jsCjUXis3Uj8ZksuE99Ntw8rVu3LnVhSiwNc+bMsePD6kcALwVXK0qmHkKFWnUUPEY4sA4u/QhhRx2duMxQiwMWqLDuWti/6LNxeZgUB545L7/8st0jrLtJZlxR9f6iCLP68qwzR/gH5wmKI765v5iYLl261NZnPfPMM4N3/ISJNmMRoTE8430TRVw7xhmMBhw7tS99rc8GWPbxONGvuG9zFUZUVnA9OGZqF1JUm3CEKKwziyGqwtSZ27RpU2Lo0KFWN6xRo0aJZGOtblZyVp9o0qRJYuDAgYm5c+cmkicl+I/soS7WhAkTyqRGW7KjWB2b77//PjFgwACrkVTcul5xYePGjVb/rl+/fonk4B1srXiMHTs20bZt28SCBQusVht9o6KSFPeJp556KrF8+fJgi5+Ud5056mDxDC7pq2nTplYf0HdGjRqVsX0+vjp16pT47rvvgpZlZu3atYlJkyYl1q9fH2yJFrfcckvGtvn46tWrV4n0Qnnw71LUmfPOMkcMCC4qrBnEYVFHidRiqs8T1H3VVVcFe0YDZu4sT8TxorgrUg0tMs+wzj3//PMWP4crDQtMRQXL7/jx481yHGbmUbctfeUG4Q/lbZnDA0GwORSnjySf7bZ//fr1baUPwgx8hnEoKejs+e8jXAssc3gosC4SgpLJMkeoEKsNMbYRfoDlKIrhBySkYC31Ge4RLNitWrVyHTp0iISHpDSWOe/EXAiuBzoUa6oRi8TxIRTOPffcYI9oUNHFHJnCuHpYdJ10dlw+J598coV0s/KgJpYQtwRp/dysuKNZ1zQONZsqIuUt5kS8YDLHcnoFLeeF2GN5yTCeFFHXtGnT4F0Rd0o
j5rwcUWkwteGY2TAwsnYm/u/CFsvPF8zIEC78LM7MOg4Qs8g1wTLQq1cvi3V87rnnTNRURIj1oY4aFmRiM5kNfvLJJ7Z2qhAi/hS16goCj8x71jVm5RTfYtFE/vBSzK1fv94tWrTIqsUD1g4sX4gHEg+iVFgSIUf6M8cUhyVDsgWBwswydIsQXBpmblZEqxxQmoVM1hDODdnHUSkRIoQoP3DrFQUrElAnlGekhJwoDt6NqnTwGTNmmDUurKVGtiTiYdKkSW7dunWRsIAh3shcoq4YGYwvvPCCe+ihh8xtXRFuUtyIY8eOtZpqlGcZNmyYufLPOOOMyBQ0zjVMQAYMGGCZbE888YTFxrA2YL5rAgohcgfjExN7rPLU4MQ9P3nyZFvSLoT4OiGKg5cmEoJ6SSUmpRgoNkoCBIHklACJQiAjNyxm8rp169oAjtm8du3aWS06HgcQKD169LBrhTuc8hN9+/Z1J510UrBHxeOcc85xnTp1stk3qzAQP3jZZZd5WZJDCFFysLxRBHnhwoVWnoeJXlnVOBUVE+/EHEINYdCyZcvt7ikEHPFH1DCLisuK40S8kSlz/vnnW3IGx0zMVBTEZnlDskzjxo1du3btzBp3+umnW9ZxRYsbTIUsNgQdkw8scgQ2VxRxL4TYBu5W4r55HvTp08fdeOONFlNcrVq1YI9t40fqTyGKomIGL4mcUlFj5AqiIgtaIcQ2QYdFnvg4vBgkRRFTTHw1VrrXX3/dbdq0yX6nGkBFTRoT2aNRVgghhIgA1J8knnbEiBGuSpUqVn6L+nqUdRKiMCTmhBBCiAhwyCGHuJtuusmNHj3aEuf4ed1119l620IUhsScEEIIEQEo3UTWP0XEq1atau5XflakslaiZEjMCSGEEEJ4jMScEEIIIYTHSMwJIYQQQniMxJwQQgghhMdIzAkhhBBCeIzEnBBCCCGEx0jMCSGEEEJ4jMScEEIIIYTHSMwJIYQQQniMxJwQQgghhMdIzAkhhBBCeIzEnBBCCCGEx0jMCSGEEEJ4jMScEEIIIYTHSMwJIYQQQniMxJwQQgghhMdIzAkhhBBCeIzEnBBCCCGEx0jMCSGEEEJ4jMScEEIIIYTHSMwJIYQQQniMxJwQQgghhMdIzAkhhBBCeIzEnBBCCCGEx0jMCSGEEEJ4jMScEEIIIYTHSMwJIYQQQniMxJwQQgghhMdIzAkhhBA5YOvWre6nn34K/hKi7JCYE0IIIXLArrvu6vbYYw+30047BVuEKBsk5oQQQohyBgHXtGlT17NnT7f77rsHW4UoGyTmhBBCiBxQvXp1V69ePfeLX/wi2CJE2SAxJ4QQQgjhMRJzQgghhBAeIzEnhBBCCOExEnNCCCGEEB4jMSeEEEII4TESc0IIIYQQHiMxJ4QQQgjhMRJzQgghhBAeIzEnhBBCCOExEnNCCCGEEB4jMSeEEEII4TESc0IIIYQQHiMxJ4QQQgjhMRJzQgghhBAeIzEnhBBCCOExEnNCCCGEEB4jMSeEEEII4TESc0IIIYQQHiMxJ4QQQgjhMRJzQgghhBAeIzEnhBBCCOExEnNCCCGEEB6TlZjbaaed3C677BL8JYQQAngu8nyEnXfW3FgIUXJSnyfFZad//etfia+++sodfPDBwaZtrFq1ynXv3t298cYbrkePHm7gwIGuUqVK7qeffgr2EEKIigsP3S1bttjzcerUqe7RRx91Xbp0se2JRCLYSwghiobJYFKPuRtvvNGNHTvWPfDAA+7CCy8M3i2aQsXcJZdc4ubPn+9q1arl6tat63bbbTeJOSGESIJo+/HHH92CBQvcp59+6ho0aOAOO+wwiTkhRLFBzPE8Wbp0qdu8ebO777773AUXXBC8WzRZiTm5D4QQ4ucg3MIJLr/zEkKIkrJ161ZXpUoVN3jw4GKIOef+P4pzU5oteiMKAAAAAElFTkSuQmCC)\n\nFigure A-2 shows:\n- The mathematical expression for the square wave\n- Its expansion into the Fourier’s series\n- Its graphic representation.\n", "_____no_output_____" ] ], [ [ "fs = 1e3;\nt = 0 : 1/fs : 1;\n\nsq = sign( -t+0.5 );\n\ng1 = 0;\n\nfor i = 1:2:5\n g1 = g1 + 4*sin( 2*pi*i*t )/pi/i;\nend\n\nfigure();\nplot(t,sq);hold on; plot(t,g1); hold off;grid on; title('Square Wave')", "_____no_output_____" ] ], [ [ "---\n\n### SYNTHESIS OF A FULL WAVE 
RECTIFIED\n![FW.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAnMAAADRCAYAAACn+zTCAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsIAAA7CARUoSoAAAEweSURBVHhe7d0HmCxVufXxQhQwYMKEgBKUnEEloyCSowQJSgYBFbmGK34GjNcAoiKCKDkJEiQJCpJRsqAcgkdQhIOAChIUUKS/+W27Ds04+UzomrP+z9PPzHT3dNcOVXvt9b571yytHqoQQgghhNBIntf+GUIIIYQQGkjEXAghhBBCg/mvMOvTTz9d/fvf/27/FUIIIYQQupHZZ5+9/HyOmPvXv/5VPfjgg+X3WWaZpfwMIYQQQgjdx+tf//ry8zli7qmnnqr++te/Tn8xhBBCCCF0N8mZCyGEEEJoMBFzIYQQQggNZshh1meeeaY6++yzq6lTp05PuAshhBBCCKMLzfW85z2vWm211arll1++/ewAEHM1Tz75ZGvatGntv56lR+C1DjvssNbcc89N+OWRRx555JFHHnnkMcaPvfbaq/XQQw+11Vj/DMmZmzJlSrXzzjtX1113XfXCF76weslLXpLVriGE0IFL6SOPPFL985//rOacc85yrQwhhOFCX9ki7uGHH66WWmqp6pvf/Ga15pprtl/tmyGJuVtvvbWIuWuvvbbacMMNq5122qkIOnvShRDCzI5wyD/+8Y/qG9/4RnX11VdX++67b7X22muX54VLQghhqMw222zVn//85+rQQw+t7r777urwww+vNt544/arfTNkMbfrrrtOv0gddNBB1ayzztp+NYQQgpn0tttuW5122mnV8ccfX22//fbtV0IIYXg89thj1e67715deeWV1fe+971ipA3EsFezcuOefPLJ9l8hhBDguljfPUeoNYQQRorryXCinyPamqTDzAshhNBDroshhNFiuNeT7DMXQgghhNBgIuZCCCGEhuFe6oO5N96TBTgzBxFzIYQQQpdBiNma4vHHH3+OaJNHZWeJSy65pKyg7g/5mzfccEN1xRVXlM8Jk5uIuRBCCKGLePDBB6vLLrus+v73v192j7DLBAi8s846q7rooouq+eabr3rRi15Unu8LO068+c1vrh544IHqBz/4QfWXv/yl/UqYjETMhRBCCF2CVYxHHHFEdf/991dbbrll9YpXvGL6qsZrrrmmOv/888u2N4stttigm/fPNddc1UYbbVS2HbNlTvaGnbxEzIUQQghdAkfu9ttvr1ZYYYVqwQUXrD70oQ9Vr3vd66pHH320uuCCC6oVV1yxeuMb39h+9+Bw77bYYovyuXfeeWf72TDZiJgLIYQQugC7/Z988sklT+6+++57Tq6bv++6665q5ZVXbj/zH4Ref/vb31a/+93vpi92uOWWW8pz9b6HbgTwyle+srr++uvL32HyETEXQgghTDAWORBz3DP39b333nvLvX5rpk2bVoSb0GmNsKmcOIsc3CVg6tSp1W233VYdfPDBJbfuiSeeKO+bffbZq1e96lXVTTfdVP4Ok4+IuRBCCGGCkf+25JJLlsfmm29e7bjjjtX888/ffvU/iyKEWp///Oe3n/mPAJxjjjnKrZ4ItxNPPLGIQffxXH/99YuIg3sEe3D3slXJ5CRiLoQQQugC5Mo99NBDz3HfaqxOJcg6ecELXlAcNzl1Hpy5hRdeuNpss82KKPR6TUTc5CZiLoQQQugC5L299KUvrRZaaKH2M8/y2te+tppzzjn7vO8vV45r9+IXv7jkxvVG7pwHp6+3IAyTg7RqCCGE0AVMmTKl5MvNM8887WeeZd555y0/f//735efsI2J8KtcuJe//OUlBHvPPfeUXDr5dTU2F/7jH/9YVsKGyUnEXAghhDDB2AvuscceK6KtMzxaM/fcc1dLLbVU9ctf/rL9TFVdfvnl1X777Vfy5HbdddfqJS95SfXTn/60uuOOO57j4BGABN6yyy7bfiZMNiLmQgghhHGkr3uq2nZEqLRz0UMn9ouzATCHjYMHGwqvuuqq1eqrr17CsJtuumn1hje8obzXwggQiGeeeWa13nrrDWt/utAsIuZCCCGEccT+ceedd15Z7FBjSxH5bO7s0B+cOStdTz/99OrKK6+sll9++WqvvfaaLtKIOq/724IJ25vYt26BBRYoiyKSLzd5ScuGEEII44j940455ZTqV7/6VflbSJTbtsQSSwwo5uTErb322tXWW29dzTbbbOX/et/SS84d0Wb1qnu6rrTSSuX2XxZWhMlLxFwIIYQwjhBa7vIgT04um1ttLb744tUee+zxnH3k+oJQW3TRRavllltu+j5yfUHked/SSy894A35w+QgYi6EEEIYRwitWrQJh66yyirVJptsUnLghopFEgOFTX3HQGIvTC4i5kIIIYQJwEIIosv+cH2tYA1hqETMhRBCCCE0mIi5EEIIIYQGEzEXQgghhNBgIuZCCCGEEBpMxFwIIYQQQoOJmAshhBBCaDARcyGEEEIIDSZiLoQQQgihwUTMhRBCCCE0mIi5EEIIIYQGEzEXQgghhNBgIuZCCCGEEBpMxFwIIYQQQhfx/Oc/v3rhC19YzTLLLFWr1Wo/2z8RcyGEEEIIXcQLXvCCarbZZmv/NTgRcyGEEEIIDSZiLoQQQgihwUTMhRBCCCE0mIi5CeTBBx+s7r777urRRx+tnnnmmfazz/Lwww9Xv/3tb6vHH3+8/czAPPnkk9Xtt99e3Xfffe1nJo5//OMf1VVXXVX99a9/bT/T3UybNq268cYby3GHEEIITSJibgIgzq6//vrql7/8ZXX00UdXJ510UvXEE0+0X/0PU6ZMqU4++eTq6quvrv7+97+3nx2Yp556qrrhhhuqE088sbruuuv6FIjjxZ/+9Kdqzz33rG6++eb2M4NDSDnuW2+9ddyP/aKLLqoOOOCA6oEHHmg/E0IIITSDrhdz3KlHHnmk/dfw4Hj98Y9/LI5Vt/DPf/6zOvfcc6uf/OQn1VJLLVUtv/zy7VeehZA7/PDDq1lnnbXaeOONq9e+9rXtVwbmZS97WbXFFltUb37zm6tjjz22+sUvfjGkJc1jxdNPPz0sUfbnP/+5+upXv1odd9xx1b/+9a/2s+OD43S8MysT2U8w0d8fQghNpuvFHOFzyimnDNmdMihzhYilT37yk9U3v/nNIhK6hbvuuqu4QMsuu2y14IILFrG24447Vi960YvK6wToD37wg+rVr351te2221aveMUryvNDxb40m222WbXiiitWxx9/fHXnnXe2Xxl/nve855U9coYK0brHHnuU47fHznjiOB3vzIRzxcThnHPOqaZOndp+dvwwyRKKP++88xoTjg8hhG6kq0evxx57rIgyguz3v/99+9mBEa78y1/+Ul1wwQXVoYceWsJ23eK4OLbLL7+8+vWvf13Kc+2115bn6o0Bcdlll5XX3vWud1UvfelLy3M1yvGrX/2quu2226Y7XnVotfdgvOGGG5aQ4SWXXNJVjpNjcfyO64orrqjuvffe
6a7Y7373u2rOOeesXv/61xdXkrAVivZ46KGHSn8Qdr7pppuKw9kff/jDH0rddua/+Wx117ScOMcrl+/iiy8uk4ALL7yw/K6f/OxnPyt1M9Scyhoiyv8dfPDBJcyvPQZzxpxT6vT+++9vP/OfnE/n13AnS9rSec2FPf3008vkazhCWtsTofrQz3/+81IPdZ2oo0svvbR8RwghzCx0tZiTO2Wg8TBoDAUb7S200ELVIossUgSBx3DcobHCAGrQ4TRyIQw+BGdnCNkg5fnXvOY1pQydEG1e+8Y3vlF9/vOfn55XZuAyKHI4OuHsLbHEEtWVV17ZNa6H4zB4ExLqw2BMcBNf2uiOO+6oPv3pTxdH8d///nf5nzqX7cwzzyx1J/z8kY98pPrhD3/4X4KOILEAxHs+85nPVGeffXZ5/pZbbinC5Ywzzhiyw9sNEDnqSx194QtfKD/VjT6w//77FwdX/QxVoBI46uSLX/xicbvVsUnDTjvtVC288MLtd/03RJv3f+5znyvfSWQTcITgkUceWRaPDAWTlGOOOab0VyKM6/ye97yn2mabbapXvvKV7XcNjLSLuh8oh3aVc/qtb32rOPHf/va3i9MYMRdCmJnoWjFngCJ0Fl100Wr22WevfvzjHz/HFegPOyYLWb74xS8uQq5bcCyvetWrihO3zjrrVB/96Eerrbba6jmDGLHDqZp33nlL/lsn6oN7ZfA1uApNcUq8/33ve1/1tre9rf3OZ3nTm95U3XPPPQMm9QvDEgq77bZbtffee/f5EPo0kM9oKI6rZFHHMsssU62//vrlob04S+pnySWXLAKDOFBWzuTiiy9e8h6JV2HpXXfdtdRNXyFk/2MQX2mllUrI9tRTTy2O3DXXXFO94x3vKCFtzl8TINS1M1H+lre8pfSJd7/73dU+++xTnEuP3Xffvdp6662rl7/85e3/6ht1QPyYBBD3PksI/0Mf+lDpTyYPA014CLf555+/hO45y9xRk4g3vOENpQ/PN9987Xf+N8phIvaVr3ylOuigg4qgW3rppcuxf/CDH6ze+ta3DrlNiHVijpPt/9TBeuutVz7HdUJ/ee9731vtsssu5bUQQphZ6FoxZ0An4gzeEvoJAStAh0odhuwWOIbzzDNPyQUzKK6++urVYostVspYY9AkRgi83rfxeMlLXlK9/e1vL4M3sULMCRu+853vrDbaaKPyWb0xSHL0OCn94XuIzLnnnrt63ete1+fDa0RF57GOBK4Y50yIjPAiUj7wgQ8U0Qkizfd0ln2uueYqYuWNb3xjEazqbd111y3itrdIJQgJBTl3m266afkebpy62XLLLYsAmGOOOdrvHltqUcoVdBz9PbjOfTmnxBV3ltjiYBEnxK86szhEHzAp4ED37is1QqNcPEJK31pttdWKMLfKWF0OtS4IOd/NRVPH3ELHp07XXnvt0kZ9oXyf+tSnioOn/xPTJgfbb799EV8jccz1RX1+gQUWKP17k002qVZYYYVyXqkL6QUc6ToHNYQQZga6UsyZgXPhzMDXWmutauWVVy6OlrBkLUyIAc6M/CkP+7V1m4DrjeMlrvpzMrwm/NjXIGcwNEARVBwR+WMGWSKnP/wP1FV/OJa99tqruHNCnH09uHIGYd87Ixj4iTEC4+Mf/3jJmSMk64GXAOrdhoQLAdG5EIR48eid5yXviuiFY/W7OiXixhtik5AU+lN/fT0++9nPllAh57A3+oDjVyaLZvzObeaOOjeUb7A8N31EaF6OpjAqoUM4D1dE+V7Hoa0IN+cgMVn3r/4gZqUWOF7vJ8KHujK7Lxy3a4Lv5dY6Jq4id9r5T+jN6IQjhBCaSFeKOeElSd9CQvJf4CJ9/vnnFycDXB5hG7P8HXbYoTriiCPKwN2tECkGYgMREdYXBAqHYSDxRdwIP/ucwcJT3msAJIb6w4B7wgknlLo85JBD+nwQX8cdd1x574yg3AQMZ4iw5drIwRpKHluncKl/H0iU+EyDfV2n4w3Xy0RECPD9739/nw8iWriRm9gfQtCcOw6pz9T/5QoKOQ8myogbYny77bYrodEvf/nLJV3BptIjmfgQqPqUOh1KaJRjRqxyE3/0ox8Vh1CYVqh0RnAcykBcqhPnlc9UJwP19RBCmKx0pZgjGjgxXBwX6J133rlabrnlSr6NXB0DkQFFGMrzHpyKzgt5PdD5OdigNx4YBB2/MGJ/YSnPC7EamPoTpsKUBjPvE8ID8deXS8NBGUz0+R7OhrwqDlBfD8JL7h3XcEbwXULNRMyBBx5YwmIS693lAsRX3V51m3U+V9P7Pb0x0Bvg9R25dtBn6kUVI8H/c7qIxMEcMah34T6hUKKurwenUuhcDlt/qH/OFgcKHCm5Yf4eTJAROsKpQqvSFbhz3FALEOxDqM2H2qb6nAmW7zWZMOHCQBMP/VlOG+EqRMuBPe2006qvfe1r00XlSNpEyNix1y6f6wXBrh61zUiEagghNJmuE3N/+9vfimAx6JvRcy2EWSWwu1Ab/A3UBirJ4Icddlj13e9+t6zII/DgYk44cDD8JKQmGuX6zW9+Uwb43luO1Bj8uCkGp848N+VQJ9wVGwETAZwaf0su9+hrUPQ53lcLgb4ggoVQuTZcs74e3BUO00Ah3cHQBlZSWoXqWIX7tK1BuD52woITVTtA2pt48vCavz2E3D3np7/hJ8Ep5C60xwWUIybEaKWvUOaM3N2BO6aOjjrqqBkWtUNFP7bi1/fVq5v1C6HGuqxDORYiyuISgmq//farNthgg7KCmFPm3LG1jbrsC8JJ37OViDYhQok5otACHOfiYJiYCbPWix5slG1Spm9xhQmzoYo67azcRG2dJ6ovEM9yBL3Wee6EEMLMQFeJORdqg67FDlyQetZvcBdmFVYxm7cdg8GHoOM6edS5Mv6HA2aAkFBvsONg+YyJhJDguBEZtejsjUGaU8OJMdjWEBK2XLCCU+iKSPE+g6wwJYeqd3jJIGyQXmWVVUq99QcxRVwacAd6eM+MhCs5bNpU23Jl5D9a7GFbCg4dOD/al1Oo/ISdMii/9jRI+x/l9hyh5jNB+Jx11llFcOkLxD9n1+KJr3/96+UzlWOk6Ee+j0s5Xs6PfmAbF+1ncgP9R1k4XNq8v77UFwQP90rumtXURLzPkNtnsUJfEEdEl3Lrc6uuumpZYawvSoMYTp3q3xYzWZUrxC5FwucKwQ5VaJsU+d7amYc60Q/UiZ913mQIIcwszHqATbzamB0TSb3DcoQBV8UgIpncarLhDCJDxSBJjHGnXKANFAQEIeaYrOyzIs6AIMRicOqN/3dBF86U8O1YDYQ+ayKTow1AHDRbQtQDc19w0axS5X7YwkM9E0LKQ8R5zt9cMr+vscYa5WenmCOKbQGivWxboj7HG8LVMXCBhDsdszYV6nN82kII0PFzjrS9B7FKMBAdBn/J90SEO2ZwGf2vtlxzzTXLikjP1e3qverCa4Q+p1MIXp8lagca5IkWIVnH25dAsQceYelYHGN/Id7RRF+2QMV3+uk7/eRoqjfl7y3ih4o60y/UF9fO+aS
+e1OLptoN1lYcMe6a4xpJ31IOn6M/aB9iXp0P5Zqij0gx8N2uA/qV43JrvPr8GItr01DgoJuomGxYTa1sIfSF6yNTwnVtoD0ew8yL6wnTw4TXjhWD9pOewXE6Tz75ZGvatGntv56lZ9beWmmllcSzWvvss0+rRyy1XwlD4V//+lfrM5/5TGu77bZr3X///e1n++fqq69u7b777q0TTzyx1XPSt3oGsPYrg6NtLrnkktZee+3VOvfcc8t3TwRTp05t9Qy2rQsvvLD9THdz5JFHttZdd91Wz4nTfuZZekRVq2eQbh1++OFDar8wc+Lc22KLLVo9YrV11FFHtZ8N4b9xndlqq63KNTqEvnj00Udbu+yyS2veeedtnXPOOe1n+6crF0BMJriKwrxyfKjr/hY/dMKxsgJRXpStJYSWhoJQoFwmOWJuuM/JnJHQ6IzAMeGmTtT3DxduDueuL6eLG8Rlsa/ajGytEUIIIYwFEXNjDBEnH0m4VwhvqOJGGMmiDpujDhVhcuFYuUi+r68w9HhBtO67777T85q6HWFDdw/oK8SqHi0UGYoQDyGEEMabiLkxhriSjM9p632LrsHgAnGE+hIYfSFnzN0l5CJNNMoqX6+/DZK7DUn98pz6W2kcQgghdCsRc2MMN8etmMYjYT6EEEIIMx8RcyGEEEIIDSZiLoQQQgihwUTMhRBCCCE0mIi5EEIIoQtxizp3SznooIMm/C5GobuJmAshhBC6DPdl/sAHPlAdccQR5bZ+7o0cQn9EzIUQQghdhFs5uf+0/ULXW2+9smVSq+UmTCH0TcRcCCGEMM7Yrqq/eyvbXJ6I23XXXcuG5YRcxFwYiIi5EEIIYZxxx54nnniiPHqHUN0O0e0FCb6nn366/WwI/RMxF0IIIYwjRNoDDzxQHXPMMWWBw0UXXdR+JYSRETEXQgghjDNCqW7V+LrXva64cP3BpRsoJBsCIuZCCCGEcUT+m/t2b7nlltVHP/rRavXVV2+/8izCqxZCPPXUU+WncOwzzzzTfjWE5xIxF0IIYUgQIY8//nj15z//ubr33nure+65p4QLH3300UYLjSeffLKsHp02bVop0/3331/97W9/G/PtQGrXrS8uu+yy6uCDD66uvfbaasqUKWWLkptvvrn96uDIyXvkkUdKWZRJ2exb9/e//739jmZC1CrHeLdVtzNLz8k5fYmMGYAO7cbwndx6661lVc3VV19d7bPPPtVXvvKVAW3hEEKY2SBydtxxx+rMM8+sjjzyyGrnnXduv9J8DJi//e1vq7vvvrt67LHHysDJOTJ8CP+94AUvqOacc85qnnnmqRZeeOGyArPbIXSmTp1ayvTQQw8VQadMRFBdJuMcB+3Nb35zteCCC5bQ6Ghw1113VZ/4xCdKf9lwww3bzz6Lej399NPLXnPEngfht9lmm1XveMc72u/6b/yfz/b405/+NL2tlMlnOH6Pl7/85aWNFllkkVK+bkf7aKs//OEP5XdOpTKZQGgrjxe96EXV3HPPXfqftmp6WFrbffjDH65+9rOfVYcddli10UYbtV/pm4i5EEIYBSajmCPgrrjiiuK+GR8MkAbM1772tUW8EQjKbdwgHggiYuFVr3pVtfLKK1fLLLNM+5O6B8fK9frd735XBszZZputmmuuuUqZiBzHz/0hGh588MHyk5AiepZddtlqlVVWKeWeEQYTcyPhhhtuKJsLawfHZ286beW4X/jCFxbxw0HVlt6jPV/84hcXQSfMK3ev2+C+XXXVVUXEuQOGdtC3HOvLXvayIrg5jdrovvvuK+2p/eQivuUtb6lWWmml9ic1j4i5EEKYACaTmBNGPffcc6vbbrutiB0iZtVVVy3iYCAefvjhIiiuueaaMsgutdRSZb80Qmmi4eZwun7xi18UscZtW2211arFFlus/Y6+IVCvu+66Mv4J76kDAsz/j5TRFHO///3vq5/85CdF8BBwK6ywQhHSBM1A+L/LL7+8tPHss89e/s8mxRyuiYbovPDCC0tYmbO4+OKLl7ZaYIEF2u/oG4Lv+uuvL+0s9LrQQguV+h3s/7qR4Yq5WQ/oof17Ue46uRlXJ07ss88+u+RIULrrr79+4y3MEEIYbQg5g+Omm25aLbfccu1nm8WNN95YHX300SX8aHDfY489qiWWWOK/xoW+4AAROUKBRCBXj2PE8eptEownf/zjH8s2IESmMNxuu+1W3KihhBg5PW984xuni1n1wy3iEhEJI3HpiF7bkRDJjmckGK9/+tOfVieddFIRnJtvvnm13XbblfrXDoNB7Pn+FVdcsZg4hN2dd95ZzTfffEUUThS33HJLdfzxxxdX2PHtsssu1Vvf+tZBxSk4ddrq7W9/e3k/EX7llVeWvtg0QWfyccEFF5Q2IeQG6yfDdubMJPbff/9yYutMIYQws2Ng58ztt99+5QI8I86cpG6u1qKLLtp+ZnzggJxzzjnFCRAe3XrrrUv4cUYQpjz11FOr22+/vdpggw2qddddd9yNgF/96lflGHyvMi299NLtV0aGvLozzjijuvTSS4ubRUANRTx1MqPOHNfm5JNPLmUjSuXSzaijJifthz/8YRGaO+ywQ7X88su3Xxk/OGomRASZtiIsZwTn0Y9+9KPiFq+11lrVu9/97iLOxxMOoe8cbjRzzMKsZjIqRGLhkksuWZRuxFwIIfRcSGeZpYghToBIBjFnoB4JnAkukkF+oGT30cS1nziwcpLg2njjjYtAHQ04DESisJkyeYzWZw/Gz3/+8+q0004r4tQ2IPKtRguOD1eM6GZ2yD8bKjMi5oit73//+8W52nPPPUtu2Ggh9+yoo44qIVuO2HgJOgsZhPUJFw4oZ3s0w70XX3xxEarCz9tvv33RL+OB3Exu8FZbbTVsZ3DMc+ac6CrDTKTjX0MIYaaFmJOi8p3vfKfk7BBzO+20U/vV4eFzavGz++67lxDTWEKECqtyed73vveVAW+0MVYYUAkrQlGqzowuIhgMDulxxx1XrbPOOtV73vOeYbtnQ0Gdfe973yuLCPbee++SezYURirmuDz6lmR/Bstg+X4joXazbrrppnJ8Y50uoG9I4yJ8tNW73vWuMXFvmVH6uQUs3NSxFnRyM/UNIV9ibqydORU5nSeffLI1bdq09l/PMmXKlNZKK61EubX222+/9rMhhBA62WabbVo9IqV11FFHtZ8ZGU8//XTr5JNPbu21116tW265pf3s6PPMM8+0jj322Naee+7Z6hns2s+OHeeff35rjz32aF166aXtZ8aGK664orXDDjuUdjCujSU9oqe0U8/A3X5mcO68885WzwDfOvfcc9vPDM4jjzzS+vrXv97af//9W3fffXf72bFBnfUIiFaPYGxNnTq1/ezY8OMf/7i17777lj6hP44l+oW+7jvHkl//+tetHiHcOuaYY0bc/x599NHWLrvs0pp33nlbPZO79rP9M2yvm2UuNySEEMKzuC5yuUYDzoT8HuFBboIE/rFA7hfHQn7feGzjILIjdCyHrWfAaz87unDLTjnllLJ4gwMzVLdspGgjjqaFHj
0iof3s6CJqduKJJ5Z8SlGysd7HT52pO6tIhV1F7MYCYXALQTi1a6yxxpi7tVbEbrHFFmXhiJzHsUD4+5BDDikO93j0v5rcASKEELoQK/OEZ6wElZsl/DqaWL1IfMhPetvb3tZ+dmwxWFt1Of/88xchaYHEaGIFrs12hT2FVsdrIFV/FngIjROTo835559f/eY3vymhe9ttjAdWtBIjc8wxR3XCCScUQTmaCOOeddZZJUeO8B5rIVez9tprl4UrQrv2GhxNbF0jRG3hhhC1c3i8iJgLIYQu5ZWvfGVxEtyBQR7daGGPMZ9nYDOQjicE1rbbbltcTCJFtGc0kGNkILUwz0rI8RJyIESIOduCEJMWwYwWRM955503Idvd2B+Q+L7jjjvKdiyjBRFv1apFAZtsssm4rnD2XRbD2JHD/nyjFWmsF/poewtTiODxJGIuhBC6GKGuNddcsyRUj4aTIMGdI2IXfQPpeG/VgHnnnbeIH4tFOE6jgT3thCG5Sa95zWvaz44fXBjCh0glEkbjXrVWl1plbMWqPjAR2GPQwgTii/M5o9jahSNM/HCeJ2KTYnePIOiERKUZjAbC7B4c4YnofxFzIYTQ5RhMuSRyzWZ0SyirSt0mSZ4S52+isFmtba4Iyxl1RwzKBJR8PHedmCjsj8btFGqd0ZxAYlDd+Gkl43g7PTW2kSEk9T+hyRkVqXbFsEOGO4MQ9ROFfiJP1CTADRFmBLdI01ZWno9XykJvIuZCCKHLsa0BQWfQ4NCNFFtiCJfZnoHjN5EIg1oQQcgZUEcKp1G4lhtiW4uJhvCxvZfE/hkRqW5lRfhYCDORd88A0a9ubf5sL8WRIl3A4gN789nseKLhTBOnzgmO4Ujw/7bB4XBzm8cr9683EXMhhNAAJG27e4FVeI888kj72aFjsLIxq81tx2sz4sGwKtM+XAZDIcWR4PZaRIbwajfcV5RItXcc91PYbSTI/5Mnp83d0qob4GTpf45rJIshOMr6rp/y/8Zr4+iBsMjDnSGE+0eawuD/iG5tPpqbUg+XiLkQQmgAZvy2VjDQjyTPh+ixxQnnyArZbsCALh9MCNGmscPFlhlWkPqMN73pTe1nJx6uJ/eJizoSkeruEm4aT2hMVHi1NzZdtn2InMCRLIZwj1GLOXyGe9x2C0Kt7uNKkA13MQ5hSty6b+qM3iZuRomYCyGEhmC1pIFDPtZw3DnCwH0v5ShNVE5Pf8jFIjAJGHc4GA6cr0cffbTk/3UThLdVwn/605/K7dmGg5We6oIj1203hyeY9T/H5+b+Q4VIIgA5p7Yi6SYck5DvlClThr3Awypf/yMFYji3cxsLIuZCCKEhcLJsRkrIDSfBXp6TfCWD1nhu2TEUlMnGu35yR4aKe5RyvoSMrcztNuwHR/xwo7ipQ0UostVqFcdoPLfsGApW7JoM2PPQPUeHinu9coYtDrElSLfBVZOXanIw1AUeHErpAfpuN7jCEXMhhNAguHPcLAnyQ9lIWBK+sKz8tIkOBfWHxQtu6i7UOlTHpw6LEbcTlXQ+GBZ4cG6sth0K9igj0rlyE7nScyAIF+J5qMJbfp3+N9dcc5UVzN2IrUr0I8J7qHsEytO0X6MVrN0QCo+YCyGEBuEG4QYQIbyhJG0LH3Hy5CqN5470w8FxETAG/qHsO8fp4mBZIPDqV7+6/Wz34U4URIwyDSUfi9PIjePKdcMCgb7Q/7SVXMDbbrut/Wz/2PuP42Urkm5zhTuxwIMrZ5I0GBYTSVsQcu6WUHjEXAghNAz7s0lI5w4MtO8c586eZ5yvbnXlajiHCy644JAWdxhwDaicnm515WoIs6lTp5bVrQNBoMqv43x1qytXo965WYNtKaNvKpO8NOHIbsakgKATPh7MHebI1a5cN6ygRsRcCCE0DPk9XAEiwf0g+0OIjzPCSelWV65GArkVoDZwtZ9ef3BP3KCdOJ1nnnnaz3YvwsfyqyTLD5SPRXQT327Z1a2uXI22IrwJVAtQ+sNqY64cV9jko5uxT5w+JczqnBkIEw6h5m7IlauJmAshhAZCJAh19bcCj3PFFREWG+97eo4Ug6NBf6D92Qy0Qsycnom4FdlwsQ2MPEfC2wbHfaGt5GvZp0xotgnof8QpEdofBKwVyraOaQLcYSJtoFA/sccRd5uzibyDSm8i5kIIoYHMP//8JcTjrg59hVoJPYMOcdAt+8oNhjsdcNsG2s5DrhLR123bdgwEMWM1MRHaSR0iJshtSWJ/um53UGus1uUQ97eqWqiSKF9sscUmdDPd4SB0rA0I6/5Crfom55Qz3k0h/oi5EEJoIFwp7pRtH/oKtcrpkYfVFFcEXEThO9uOED+9Ea4kEJTbwNsUlIlIs3EuahFQ/yQevN4UBxUWahDVnLe++h/hapNqG113e15jDZFm8uO86et+rRxUrp178Hp0ExFzIYTQUKzm7CvHx6pQ22EIGXXboDMYHB+OY1+hLitziZ4muXKwtxonVahVaLIWNwQRB4jI40p288rcvrBgQJ4fB7gT++RpP4K7aW1lsRAn8dZbb20/8yzOM8JV2Lzr9mts/wwhhNAwrHrk0PVeKcnZEn6V19OEvLJOiBq3V+pr2xXbkXBOJvrG88OFeBNuFErVNvVzHtwruXRWKDcNAtUea1zgTkwmbBKs/030nRGGi/u1mlCYDPXeTsZkgnvcTQsfaiLmQgihoRBqBlQhrc7k+nqVofyfpsHx4FBxQIRVazhAnC0DabdsBzEc5ptvvuJYEW9+CunJdSS6uY1ysJoGYUNYE6k2p64RIre3IQHb7Stze6MtuNlyTmvhDY6qtrJhdzfecSRiLoQQGgynyuBZ39CdABK2Ey4y8DQRApXQqXPMYCDlAnXTTdqHA7eRSFUmZRNiJbiF7ogDmws3ERMGQrtzOxkhSmXt9v3y+kMf6+14+524s+K1G93uiLkQQmgwHB2uXD2YSkgXIhK24zI0kXqLEmGtGmE7Iq8Je8v1BSHK8SFK5ckRc0S4nEdhvabi2JXnvvvuaz/znz3z5Mo1LcRaI9dP3lznZIIrzFHtxhArIuZCCKHBcN9sEVHfU9JPjyaGWGu4WEKptUAQ4rJnGVekSatYO5EfR4wKRxLc3DluTzcLhKFA9Jg01KuPrQQV9reCt2n5mjXOJ26pFa31tj+EnX6pD3YjEXMhhNBgDKRCWgbQf/zjH+WnfeWaGo6EPCuJ6IQPsSMnS9mEjpuMdlI2LmrtZilnN+ZgDRUOo+PXRspkMQTB3bRFKp04p5SJMNUH9T1pDNqvW93uiLkQQmg4kuu5PW6fZN+5epVhkyHc5P8ZULki3JKm5gDW2KKEIKgXCHBQOT1NDYfXyNus95uzqa7+19QcwJr67g7KRHTbY66bt1mJmAshhIYjj0xytrAQoWAfrKYjv4yLZYGAbUqIg6aLOXmAwpL1hs5yHbs1bDcctFUdPtZWFj50+71YB4O7TWQTcvqgMHnEXAghhDFDSEhuj4FUSIhT13SUgbsoV05YUhmbmlBfYysP4W8hSStAhVibHI6s4TbqdyYTVugS3sRPk
9E2cuTcas12MnI4lbNbiZgLIYSGI0eJ0LnttttK0nnTQ1wQViXmbHNBKDTdlaupw8cWddiHrZtu1j5S9D0hZHd94KZOhjLpe8pBzFmo4pySH9itRMyFEELDEdIi4tz0nFhouoMFzo5y1QKhyQs6OiF6uHLukiDXrOn5ctBW+t2VV15Zfno0He0i1ErMcRu7PRweMRdCCA2H2BESEo4Utmt6iKtGfpkcQOXp5hDXcBCu85AvNxlET428Tc4w0U2wNh19jpgTEpff2O2pCxFzIYQwCSAM6v2xJgtCW7a7qF2SyYDblWkjW650283aZwSLIIQhTSomy2SCKHV/Vm3V7cJ7Uok5eQid9/ILI0PHFQKwFDsEyO+pN88M3Ym8Oe7BZMiXq5GEbkCVu8R9nAwIHRMGyjNZygQizkbB3NTJgnQFfZArbKLUzczSM3C32r+XAdw+Rb1X10hA3XXXXaurr7662meffaqvfOUrXVMwg4yVJjfddFPZX8ny6EUXXbRae+21S+eaCH7605+Wn2uuuWaj9noyA7ZyzD5B9c7kK620UrXyyiuXVVgzI7YQuOGGG6rll1++XKhmNpxfziu35/H7O9/5zgk7r7od154dd9yxOvPMM6sjjzyy2nnnnduvDIz8KduKmDyNxNGo/+fSSy+tLrzwwur9739/ye/RXk3GNef222+vvvGNb1TrrrtuteWWW053SZoK8aZdTjzxxOrQQw+tvvzlL1frrLNO400Ijpy2OuSQQ6o99tijWmGFFYqeaDLcYNriW9/6Vtnqh/YxRo42BGNfbqbQ7oc//OHqZz/7WXXYYYdVG220UfuVvmm8mHO83/zmN8vv73nPe8rmkocffngRUv/zP/8z7smlN998c/XBD36wDP6f//zni6ofbbSTk9+FgU3fucLGxc5Dx5CX0buD6IwGDf/rYtl5uxXJ0zqNE3GVVVapfv7zn1dnnXVW9bGPfaxcTGt8Rv3/8HmOx7GM50zTIOjCqAyOpbOs8lHgmHrfUkaXVwZOU/2/fR23z/j6179eXXzxxdUBBxxQrbXWWu1X/iNytIP+VX++OvCZ/X3eaOP7tDV8Z2dfd3zK6KcJRe86qP9XXfi/3vUH9XvOOeeU+2Mus8wyZeUdQVuHhrqhDrqJkYq5X/ziF0WwuHODehsp+quHgWG8r3tjgf6oj9p5v14t2WQhV+O8MG6ZKApNcn2cS02nbivhcO5j09tK/3M9UybXPOfVaLZTXT/vete7qv322++/9uWb6cScCyjnRPKl+9sZwPbaa6+ylF0FjOcSaRfj8847rzrttNOqt771raWBRtPF0FRcM4OrCwLhusQSS1SbbLJJeZ2DcsUVV5SOZ6YvEZWoFX4xwBJrZk8GDDMOF5L11luviD5w49yge6mllipWuWRWdbn++utX//u//1u+3//95Cc/KQJy8803L/X74x//uOwvtNVWW43LnknallMkMVqHt4v69ttvX77bYHbttdeWpFUnnkGNEK0TctXDVVddVcqqDP531VVXrZZddtnyeo33XX755dUpp5xSVjIZmM2g4XOtsLvssstKPakHx3TGGWeUC8B22203vU7HCvV93XXXlWOR9K4/bLHFFiV8o+2VUd24GOHtb3/79NVYBhHnsmOsbyukDJ0Jvj73pJNOKnX53ve+t3rLW97SfuU/eF0/vOiii0o/22yzzUrdEzLq1d8zm4M3UjFno1Xnrutvb0E9GNpdffs/D38bVPXfyYAyKZ/+5jEZcP21QpIz59rqGl2ntGg/j7ofaMemlLtuK8fcISsaQ30OeahzZajL5PexOqeMIe6Yod07menEXG8M0vvvv3+pnE9+8pPjFuZUdwY/9UJQ+d7/9//+36jeFJqAOfjgg0uoi3N29NFHlw7whS98oYgSlr0Z7LbbbltEF8eS0CPIpk6dWt7HXSLgOG5uU7Lnnnv2m9ipHNp69913L4OzMhLOnvfTcwQCcWdWscsuuxRRPZYQJ0cddVQRnQSIAfSYY46pvvSlL5U2N4jaOFUdEBdee8c73lHEHn75y19WJ598chF4tgU4/vjji/BWJ07aGuW75ppryu/6/Q477FBmUHCSEXrE8Y033licWLuEEz5ubk781O7VWGDSwMkhpLWB4zz//POLe2hCw5l2zjoOzrC6sUfXZz/72dI/PvGJT5QB/+Mf/3gRZCYfQiNve9vb2t9QlUmD/iLEvummm5Z69311PoxbERGMyk/0Ey6eE+oj/BzXWLjS3cxIxdyMop+bvGhT7TOZcuYmK8Sc808fce3pxGSMwHeudnvSfRg7hivmJlUcxMWMS+Qk2GabbcZNyFHx9tdxEhrwqXkXWBBAozGz4vwYJAyYG2+8cRm0iQiDBwgqYs/3c+tcIDhsnq8dLPlwRJ+Z4fve974i0PsbcAmGc889t4hGM0cIpXGwDPzyEgkI7iCx96lPfWpc9oEiooQ911hjjWq11VYrxyac7fYxhBqRSuwut9xyJVxs1RixU7cB0VWHs9ShyYm8wM5ZkfcIfS299NLVkksuWWx2ZSdoPPQr38uB4/zpcz6fWHJxHkshZ+7l2JSVMBf6JGpNXIRAOYbcYe2j7Ry/PqHepABwAJRfn+DIqSv1R9h2QpiqB/1XO7uYfO1rXyuf4RicY+qgPs9++MMflnLrB0TkZNiaoNvhFDi/Cftjjz22+v73v1/6gfSIjjl66EJq56e32+O8/sEPflCdffbZ1Xe+853qggsuGDNHKDyL8fX666+vjjvuuDIZNsbUY3hTmDRiTn6PMJcZD3FB0IwXHJoTTjihnKAcMSEw4asf/ehHpVMYEGcUMzVuycILLzzd7SNUCBJYAEKg1KE0gtJ7dUhuq8GdO+XiwJmxSIOb1pfgdezcK7kPhFsdLuNcCc352//eddddpZ7dr8739baJRxtCxHErPzEHx6Gcvrt2CFdcccXymj5BvNYhDHDhuFTf/e53q1NPPbWUh5Ph+MFd0W7qU91wrtQ9ccN5FQ4j4DiwxCsnRHsTVo6r090bC1zYCWjHW4tn3yunjZhyLMRn5/YUBJ26cOzKK7dUOQkvAkC/7b3tgzITrgQi4f+BD3ygOJ5EgwkFcUvQOQZ1YbIgT1TdOra6PsPYoY0knBMFohEmE/jc5z5XnLrQvfQltk22v/e975Vzd9999y0TNa66a1EYWwg5kSzXfpEK1ztt0Tl2dDuTQswZdAzyRNVOO+1UBhWM1+zU4ElMuLhaTUbQechpIuSGO8ArT+9jrwdHbkpfosl3+L/OWZz31f9nwGXZfvvb3y6uCTeG29JbaCqDDu1zdOi+NupUXiKJqDOgjxeOjaAgIHrnYymHEDsxW4tdx0hwEiR1nckL+7//+78ivqxSsqpMvdUQv4QJUcjNEmI1MDrZOVWdCwl8NmGjjscr7aBuz9pp7Iu++g6hXx+7sKlycy65AJ/+9KdL3XWiTridHE5lW2ihhUo91ruh1/g/4XrCV501FefVeF0vRgvCXNs5Twl557XcSP3euVKjvfRhbqw+rL1M2Di0yt2NOJ+Vw3HrY3533I5f
BGSg/t9ULDZS7nqxmRxd0SZ5qd0oKrSBsUAbaRf9SxvpV9qrSY6i8UReuDagJYQ0GTTKUuNar6ye81B255Gya6eJpvFiTocy6ApByvmR7yeHR9hTZY8HcrKo+M985jNlBS2XRIhOvpncIQPpUJDnJOFePlhvCBQDKoHImQEXRi4cfCcHjnsHA5Pf/Q9BxpXhLgm9yrvjbMkL00FrXEiEyxyH8Ju/hdUIGicmd8cCCh1eLgeHioA2wDuJx5pakBFbYI07NhcOYpaYIC5r8cwVJWTqXAMzXw+DnhA1V1F5fE4NZ4nLoY60JRdLLp6QtOe5kAZK4Uy5g2bP6onIdFJ3ftZYoGyOR7vJV9POBmvl0j4WtSgfR7FG6EYfUF7vdbGqL14f/ehHS1ksiuiEYCZsXaDhPFPv6kc9+xxtL+Srj3ldv9EPOvtUU1B/+r4yNQXtIJTeuchLPyTCOcb6gxxGYSOrsl2jOO5+N5HhTN9zzz3t/+wO9GdlcJzCxx4mlwcddFDJCZYv6zrWOQGbDNTXa9R5qdxy6Q/O5c4JVDdAzHCwtM+BBx5Yco/r3GVuvzyv+jrdBIyLHnCNNX7rYx7KIXVB2FtZ5aorr/GeM+48Gi+tMRCNF3MqUSci6AihD33oQyXhXzhovDoTwcD1INrqGaTBwUVpKLN9sy5iVOiPo2eWXTswNWbhW2+9dTnRXdhcmJWXOwSzOTM5YVThZuUnLOTUEV4Ero7IkXESEmcEXaejRLipSxdL7zXjN9ibGerURJxQDlHlcwkbnyVPbTwGcPl98rQIWHVg8CVouAsGNuUnqDxfh0T33nvvEmKG8tUnnzw3n2c/wt7uopOZUNUuLlpEswut/kS4SW73OVw+Yk+YlsOlD3p9OAylf/TGogThYgOb0IC25hxqI2W1H5cJgePUNupLHhvhqX+6KPk/uXVeMxnhunXCueTcyZdTVy5g+rgVs376m0gQEhKG1Y98H3FHRDQNAthkyApD4qEbZtrDRf92PliIpE9ysJz3zk3tbHGSdvYe1wbivtsWqTiXTVYdn3PZOccd9pzrqf5d32lgMuFao8yuY/Wk1U+CzrW6m1wu1yzjjnHOZMCDENImxg7jjX3ZhmpidBvOffnn+pnJKzeuduSU1XXU88wC7UX4GZ8nmlkPsASuTe2+uLB1YgCQkGlQM4hwbbqloQyeOpdVhNwHYodzQajoYAbl8YRro2G5FY6jziEaCCLJgCxEIt+vvxVMXEcXYCe9k18nqjf0JUhc9JxQTjLPSYyXP+b7HRORWAsSA7WFFJ1t7TWDQO3mOQ4XT4O9gUC4gwvgc+vX1b0+IS9rsHJ2wi0iJnxO73yt/nBxcyI5gQxQxBQBV6+gddwGJ26SshK4q6++enkN6oiDZJZbi1nbkvTXl9Uxd4rTQbg6ub2X4Jan57v1N9+p7OqhzlkcCgYuMzwXCZ9fX8QHo86RI6p8huNTTnXpOW3hd/3A8RPAG2ywQakTD23oef2IA/fud7/7OduSQJlckPUVn+M8Up+cIHWoL6hvz6kDdeu7R7K5MnFBQDnm3uHz8cL3aj8XciJZn1T+4fRp/0v4cnuFsp2Pw8H5Z3Jk4ZHIgtXCftYPkwXOvLp2XenEYGMCpl8Q7trLtUBfcL5o74985COlj6lvkxATmfFID3CuqheTTOd9Z5nqclmk5NicyyYIzjUDp607XBe9bpW0NBr9bajnSrdCaJskW1DmWuKcMqGXrlBvNQWOkPPUedZ70jmRuD4Ye+prl8VgDAzjARPAuTPeY+9oQZAyS2y+7RxWDu1kXNEG8sjryYYFdMZS19XRxvWEOWMiI7qknwzEpNuapGmw0G1hokNYmNDUE2C4yNfjLAqdEAgzI4S/RSkGXNuAdMsEabyRmyLsx22tF7ZMJMKQwpNW75o8DZUZ3ZrE/5vgEDFEZG8hacA32BM3+kyN6/ZXv/rVMvmQON97a5IjjjiiiG8LI4477rgSAvM7MT4emDz5Ts5aX31cuTwvTabeHkf9c4VFBhgMHHbnii2HJgOcLdd7/WXDDTcsbad8RIGJvbY3NIs0EX7SPkx2uglaQZiVYWCCTxuYFEpPaeq1zETd/rAm5XROp0gTQmaKeN31mrkl97z3xGq0MLmbabcmaRqUtzAYlW/D3ZlFyMHJbkIw2cIlw8EF20yPczIzo9/rC5y9boCzZQDlko3nAgF1YKsXAoaLZlDsfHie69Ep5MBJJIhc+HsLOWkoHIXaMebK6W+cr/GCiyP0T7z0LlNdLqKlc59DLo/rIgHDTHDM3J7JiuugeiLoiXMQeBLsOV+9o2XdgHZxrKIl3FdtVkcvmgghR6SJONnip1PImWCJUGgLEHJ2kuh9vk0kEXMTiIus8IEZ8mS+UIXQJIT5uMX1nUK6GdvxCMMQoHX4XC5PvUhKrpx8Tw4jp4dLZkAinLt1EYFjJAzqhR1yAYXBpa0QrR6TDRM70Zk6zxHCePqffTD72kJqohEyJ3CIGvnJjlHf0s+6KcdvKKjn2t22ZVNvF1ROskiK1CYCW1qDcdt51C39MWJuApFbQ/G70M5sDpWcFxewpue+zAjavK6Dmdmh7Ma+wG2QD9R7lW83IQwjt8fqZI6CsK7cJWnQBh4Dk5wr76tDqurY6mX5Wt7TjVhEY281+X8gOoUlrXAVqtVXJiPyLOWbWqRmkm8hl5B6p2PZLRi3rOaXv0jQ6GtcbBMJ7q+oU5OwJkC6ByFtkV+9qb60B2F+OZ8EqgkFx9g5pZ/Kd+emdgONzZlzIRJeUKmDDQIS/Xvfe1MDDHW5t883M+yd5KhR5XXYzmCwEBFFLzei833yaqws/OIXv1he6w8rE08//fRygnTTgNeJ2QmLXTJ9Z0hA9+IWdNa1GZzVkJI75R9IbHYx8F6iRoJ/75moE8qgZYY0Fsmmo4E6cGwuwGbTnSifWbc2NBgppz4sxCSxVZjJ//oM9SA3Qz10trf/ddFxPnb7gCZxvd68uRN9oXPls/CZ3EkLQYThJFXXA4E6EnrqLwxtxZmH+u5cdGEgFAaxKKReTOQzXXx9twVcgyWTO6/ln33sYx8r176hnHczmjM3XFz7XBcMrELV+o5+JvQj/8o55JxxzbEARp+xDyYxZ1GTSWS3hcT0fe3nGB2zBREGWCJBPpacyqEumOpmeufM1RCrHC/CyJijD49VTtaMICR5ySWXlHGV6HZ91xf1MecXDdGtY1VfWIij3p1D9cN549piEaBrFCHnbjr1ddj1nIPv+j0WKVLO75ni3qwuYCpWqLK/itQgBgRJmi5uNRpD47jges9AnY4aN6DYnLP3AK2u7NV16qmn9muD+3wd3OoxidWd76s3rZX43bnqsjcac7fddiuDxWCicaIwiFgZao+hzrsPmFWbYdq6A7WQMdiy6AlAF2f1pK7Z2zpw70R4wsc+b8cdd1xXrerqRFn1FRdp2+N0IgxhsYeQEXGiHryfUCfc5DTph+rBwwXRXnidORnan8AwS/T/3XixrPu7xQO9xYzX5KG5MGlrx68/c8CElFwshda85r3ErL5g1WVv1IW
FQ85hSchyr0Csud+wFZK2+rHPI1wvzLZdgG2p0vtc7o0VbVaWuyevYxjKxXq8xZxLd11XHZfxcn6pV887L/1e9xXPEQquQ93Yf1CHfzvr3DErR7de/4ZLf2IOxmF9yaS4W/PP6n7X2UbaTX/rb/LVzShL/eg8l+r+pmzaoj5nlJ+OGMuyzjRizrE6rvrE7w8Dizg+O7hGgxk8XOCHgosj9d17Rui7zVBs3TLYhdH3G6w632eFEiHo50CDCyEjB0YH6mZcfORPdJ7g6tpgrZ5qDCS2ojDgGvDM7tSlruiE0V51vkyNstucWK5Ctw5C0Ffq7RU64eKadddOsvcRHs4l5XVRV3b1hbq/dNalOhD264YNKgfDsffeYkf7mnxpx/qyQ8gKVRColvxz85zbcCHluPWVZOw9HG0rNf0kvKCe3cD8sssuK5MkEz74XosH6n326g1C+4Pods2ztyP3uBvFXGguA4m5EDBcMeeiOp2e2U9r2rRp7b+eZcqUKa0eseHq2+oRc62eL2m/EmaEQw45pLXCCiu0Lr744vYzMw/HHHNMa+ONN27dcMMN7WdmPnqER2ubbbZpff7zn28/M3Ny2mmntdZaa63Wtdde235maDzwwAOtX//6162HH364/cx/6Jmote64447W3//+9/YzrVaPEG71TChaPYK61TNxaD/bP1dccUVrscUWax144IFDej9cF7fYYotWzwSyddRRR7WfDeG/6Zmct7baaqvWueee234mhOfy6KOPtnbZZZfWvPPO2+qZ8Laf7Z/utThmAoSRhA+4TTMb7Hgu1GDO6mRG2dUBx21mroeR9gXOH4ett2NuI2ROOtevhhsq56pegTYY3GTuX53gHUII3UzE3AQiFCXmbvuAMPPRM5lq//bc38PEIw3DZEvaQAghdDsRcxOIveU8iDmr/GYmuDBWqHZ7HuBYQsCpgzpHbGaFMyfXrZv6ghXqnDwTrhBC6HYi5iYQSfJWbUoIt8XCWEA0Ddf1kSQu0b5zC4nRxkBpS5Kx3GbAwhGJxlbNDgd1Zqm9lY8+QxhcnYy22JBUb6NQIUELdUYbq620YV0Om12qi3qRxUAQWBJw/Z//9/BZRNdQ/n84WHBlS5KJui9rbyyscj66H2Y3bgsRQgi9yb1ZJxhCziaFtrTwczRuD0J02PLCDvZu/eMG4kNZAWr7jBtuuKEIGbl8xACx4f9He4m8zyaarGy1snM00YftGUSAEB/Ks/jii5e9tTrzqPqCUPG/Vlg6RvVmlaI6sL3MYP8/HGpnTvnHYu88G8baZ4wwIxaVw75J9gIcrD2JGSuO/U/9XnVjXyWrqvrbimck6Afqeiz6wnDRHrYbkjNnK6HhOHNZzRqGSlazhsEY7mrWSeHMER61g2Ag9zsXwe8G8+E6UyOBEO48Bt9fH4OLfH9uhn3WbKvg/+x/RVDNiPPhGIiR+ib2wkVD5brrritbOhDzHENOzoEHHjj91kCjCYFgv7ixGLzVoU0dOT0EnPY3MBO3g0EIT5kypYgZAoiYkzulTkZb0Pp84nCsNkG2V5qyqGNJ/Mph25ShJPTXdUBoKTdBqP6InNFuMw7lWPWF4SBPzlYnftrGJCHWEEJTmPUA935pYyAzM+19U18hJoOjjV45FDY0He2BbSQ4XjuF23maA8E59DdBYg+z+s4MBrCxGigIBc4lF8N9Eg2g9e1y3O7EflXEmYGhv2OwYSyx4H6E9iKzcW7nvnjDgQCzD5nPUx/LL798efQO49ljj9DsDG1xRuz5JuRlpSBhaJdvgkgdNgX1LHHdcXM61b/ZjR3vO/cXs/dd7zogWria8847b/WZz3ymWmeddcrD5w3F3ewW9Es76Qtl23etLke9OXGN99X733WGvJ1bq622WrXTTjuVcKMVourG7LD3HnqTAefeWWedVfajU+aBNvHuD+6i/fLcps+tmZZbbrn2KyE8F5N3Y4TrrTSLEHrjeuIuScwU193B+kmjnTnhM24S58u9/LhjnC4XUwOyvCyD+VgOwk5Kt5sxGPpeTqBK544RkwZBwmiwY7Bp8N57710GXwJ0pG6i7+OqEWScmN6fQ+wZ5C26UEd+Z/nDtg0EjzIQqN5jUGraij55TuqRq6RsyrjmmmuWO1TApER/0V5OFLd5qjc1JnqJP33L6xwq/auJEGTcYe2sPbVrLer1C2Xmtpl8EP6cuPrG8gYZfbIOp5qY2MC387ZZkwn1oe+7w4ZQcgghNIlGizkDzVprrVXuA+n+aXJU3M+Pq+WCLB9BzthYhm84lL7LTvNWpm655ZbVFltsUYQRR0OuoRn6UI6BCJHTVN8UeyQ4HsKRkOkrXGsw5z64Fx0H00DfKfi4emeccUYJrxJzXNhuSUwfDlYHu0my8Cqhol4JfRDLRAshw3nkTNV1pZ0IYqE/gs+tyL70pS+VXMImoZ3dWYKgc+wnnXRSue0VUVej3Yld/cDkQ53U/bTTySVoha65mk28Vc9QcP3gXGr7EEJoGo0WcwYWTkG9WpEI4pIJC4+XdW0hiJAmgWRwdAwGPyFKx9Y5KA6Hkf7fYPhcQqfO7eModuZQqdOll1662nzzzUs47dhjj53u3DUJolYfIKyJaeEvDh0IFm6bOqhzG2vn1Gtcze22266E2jbZZJOSJ+aes9q4SZhgmFxwajmsRKtEWmWv+5c68OBEcu5637bKpECKhX69yCKLtJ8NIYTQTTRazMHiBwLOalCJ5HLUxJo5cuMF94OLJawp380xEEi1EzQRECcG7PpRIxQt9GyA51YRPHKhQIzWYm7jjTcu77n44otLGK5JKIf6t4KV2+Jm6wT2hRdeWF4nZrmgcsKEX4WRtWHtUMoZlVsmT1C4Vs6h9m2SmFMWOXCd5bAyuV6tDO2u7MLP7onqfb33O+ROCsfKlR3NlbwhhBBGj8aLOQONAap2DaxEI2SIPPlhtVAZS4SgbrnlliIQCCfJ1AQBgeB4uBvjjQHbYhYP4rbGMXKciDgPA7ncQoO/sCPBw6GB0KNyWGnYJAgQW0PU5VB+Yr8OFwulcd8WW2yxImbUhwUP2q4OM9dhVfXHwSR0et/8v5txzFY0qwsQ8c4H50mdB6fdCXbindgj2DoXNyi7BFzpA3HlQgihe2m0mCM2rB61qWsdVjXoGoSsFBJuHWzhwWgg38h31W4gwST0Kx/Lz05nbKypy37MMceUUK/6OfXUU6fnSgmjcViEgDlXnDghScconGhl8PHHH1/+R1hSuLVpq/KUW76c0KjwqlXGxIp8SnBwPYgUgk6ovA4vEv833nhjdfTRRxcxdMIJJ5RQ9FZbbVX6VlMwgbj++uuLqD3ttNNKmyrbe9/73umrVrU7sW6xA0GnDjxXI59SCNZCiOwrGUII3UujtybhJhmAHJPkbMckf41QMUgTeGN9nI7BdwjFLbPMMkUkcHmEt4g7xzBW+4j1heMRKuMirb322kWICf0SmMLANeqtFjA1XBlbqNSvCcs1cQGEBTAS2jlQyqEvrLfeev8lxoRie9eBtlQHtcvqs9QjMdiZW9jtKLvzwPGb0PipDg
i33otx1EF/ZVOP6q93PYX/JluThKFikmS7pGxNEvpjuFuT5A4QkxCOJSFSO4K9/x4MDk4tlJuMyYmyj6Qcyq8eCKHe4qdJKAeXThmaXI4mYGFJ7gARhoJFZfvvv3/pIyZZIfRmprwDRHgu9eKHmt5/D0ZfjlUTIV5GWg715X+bLoCUg9sYIRdC9yAP2Wrz8VyoFyY3EXMhhBDCOCL1Rf6qlJwQRoOIuRBCCGEcES0Z63zuMHMRMRdCCCGE0GAi5kIIIYQQGkzEXAghhBBCg4mYCyGEEEJoMBFzIYQQQggNJmIuhBBCCKHBRMyFEEIIITSYiLkQQgghhAYTMRdCCCGE0GAi5kIIIYQQGkzEXAghhBBCg4mYCyGEEEJoMBFzIYQQQggNJmIuhBBCCKHBRMyFEEIIITSYiLkQQgghhAYTMRdCCCGE0GAi5kIIIYQQGkzEXAghhBBCg4mYCyGEEEJoMBFzIYQQQggNJmIuhBBCCKHBRMyFEEIIITSYiLkQQgghhAYTMRdCCCGE0GAi5kIIIYQQGkzEXAghhBBCg4mYCyGEEEJoMBFzIYQQQggNJmIuhBBCCKHBRMyFEEIIITSYiLkQQgghhAYTMRdCCCGE0GAi5kIIIYQQGkzEXAghhBBCg4mYCyGEEEJoMBFzIYQQQggNJmIuhBBCCKHBRMyFEEIIITSYiLkQQgghhAYTMRdCCCGE0GAi5kIIIYQQGkzEXAghhBBCg4mYCyGEEEJoMBFzIYQQQggNJmIuhBBCCKHBDFvMzTrrrNVss83W/iuEEAJcF5/3vP9cUp///OeXnyGEMBJcT+itoTJLq4f279VTTz1V/fWvf61e//rXt5/5D7feemu16667VldffXW12267VQcccEA155xzVs8880z7HSGEMPMyyyyzVI8//nj1/ve/vzrvvPOqQw45pNp+++3br4YQwtAxKXzkkUeq/fffv7rsssuqww47rNpoo43ar/bNsMXcIossUi233HLV7LPPHjEXQgg9EHP//Oc/q6uuuqq69957q5VWWql605veVJ7vuMSGEMKgEHP02LXXXlv9+9//rr773e9WG2ywQfvVfiDmap588snWtGnT2n89y5QpU1orr7yyK1IeeeSRRx555JFHHuPwWGCBBVrnnXdeW431z7CduTXWWKPabLPNqjnmmCMzzhBC6IED1zMZro477rjq5ptvLiHWnglwnLkQwrBx3XjiiSeqU045pbr//vurQw89dJAwa1X9f6ZS3hizb81EAAAAAElFTkSuQmCC)\n\nFigure A-1 shows:\n- The mathematical expression for the square wave\n- Its expansion into the Fourier’s series\n- Its graphic representation.", "_____no_output_____" ] ], [ [ "fw = abs( sin( 2*pi*t ));\n\ng2 = 2/pi;\n\nfor i = 2:2:5\n g2 = g2 - 4*cos( 2*pi*i*t )/ pi / (i^2-1);\nend\n\nfigure();\nplot(t,fw);hold on; plot(t,g2); hold off;grid on; title('Full Wave Rectified')", "_____no_output_____" ] ], [ [ "---\n\n### SYNTHESIS OF A SAWTOOTH WAVE\n![SW.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAnMAAADXCAYAAABxotffAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsIAAA7CARUoSoAAAE1DSURBVHhe7Z0JvJVz/sd/9rGTEhWKMkRjm1ZjS2imsoRslRbbSPVHJWuoZCs1SjMkhJQlZZtkT8tUKKa6kYiUaBIyxMTzf97fnien65xzz733bM85n/frdV51zzn3nuf8nuf5/T6/77rZunXrvNWrV7saNWq4WBYuXOguuOACN3PmTFe3bl3XoEEDt80227iff/45eIcQQhQvm222mVu/fr2bNWuWW758uWvYsKGrXbu2Pe95XvAuIYQom80339zmk7lz57pvv/3W3XXXXa59+/bBq2WTkpjjQ4QQQmwKwi3c4PJ/HkIIUVF++eUXt/vuu7vBgwenT8x17drV/etf/3LHHXecO+2009x2220ny5wQQviwyf3+++/dQw89ZLvpjh07umbNmtnzTMhCCJEqzBv/+9//3BNPPOHee+89s8x16NAheLVsUhJz3bp1c4MGDXI77LBD8KoQQojvvvvOderUyT399NNu1KhRrnPnzsErQghRPn766SfXt29f2yAOGzasXGIuJf8proMtt9xyoxtBDz300EOPX+dF2GKLLX7zuh566KFHqo+tttqqwmFtKf0WwbxyrwohxKYwL4bJDnKtCiEqQ+x8Ul6U2SCEEEIIEWEk5oQQQgghIozEnBBCCCFEhJGYE0IIIYSIMBJzQgghhBARRmJOCCGEiADffPONmzNnjlu5cmXwTGJoCfXGG2+k9F4RfSTmhBBCiDxgzZo1buLEiVaAuqSkZJMyFZ988okbPXq09QFOpRYZdctWrVrlHnvsMetQIgobiTkhhBAixyDeHn30UWsRt2zZMvfZZ58Frzj36aefurvvvtvtttturkWLFta7syx23HFH16ZNG7fXXnu58ePHu8WLFweviEJEYk4IIYTIIbRxGjt2rFu7dq0788wzXc+ePV3Dhg3Nuka/zilTppiL9eyzzy5XW81tttnGnXLKKW7nnXd2w4cPD54VhYjEnBBCCJEhfvjhB/f+++9bH99EYJWjD3qDBg2spVOVKlXcLrvsYq+tWLHCvfnmm6558+Zu6623tudC6DpSumMAz8V2bOLvNW7c2C1ZssS9++67wbOi0JCYE0IIITIEYmzIkCEJ3ZxLly61GDnE1j//+U83YcIE99///jd41bmPPvrIffnll2api2X16tUWQ4cVL4yJe/nll+1nxGMse++9t7ldZ8+eHTwjCg2JOSGEECJD4EIleQELXTyqV69urtTDDjvMnXPOOa5Ro0bmHg0hXo7frVatWvDMhn7pxNXVrVvXROCzzz7rZs6caWKNv7frrrsG79wAsXZ77rmn/Y4oTCTmhBBCiAxB3Bvu0UQZqNtuu639e+CBB7pmzZq5WrVquS233NKegx9//NH+3WKLLexf4G8ecMAB7uijj3bHH3+8Wf1wxbZq1cpdccUVbo899gjeuQE+H8FIJqwoTCTmhBBCiByBm/Wrr75yderUSVpypHRs3O9+9zt7f7169exv7Lfffu6QQw5x2223nYm9WIihQyCSCCEKE4k5IYQQIkfgJiVTtWbNmsEzm0L2KkkN8dy0PE8pE1y5iLhEEINHpmzVqlWDZ0ShITEnhBBC5IgPPvjAkhOwrMWDuDgsasTOhWBpQ6ARJ4d1jtf5O4g6RF9pKx4JFF988UXCzxDRR2JOCCGEyBH//ve/
zWVao0aN4JlN2Xfffc1q99JLLwXPOMteve2229y0adPcEUcc4f70pz+56dOnu0mTJln26/r164N3boCYOj7jqKOOCp4RhYbEnBBCCJEDcH1iMaN0yPbbbx88uym4Rv/yl7+4t99+e2M2Kr9HaRLEGWKvXbt2FjtH7B3Cj9pyIXSSoEcryRKJBKOIPhJzQgghRA6gWDCuUaxrySBjlUzVhx56yCx5ZL2OGDHC/gUyWwcOHOguvvjijcWGccVSb+6pp54yQdihQwd7XhQmEnNCCCFElvn666+tSDB14Zo2bRo8Gx+sdtSgQ7TNmzfPEibKgtg53svfp1xJKPJEYSIxJ4QQQmQRar4tWrTISoicd955Kbk/iXk744wzXNu2bVPqz0rh4ZNOOsn6uZJgIQobiTkhhBAiiyDi9tprL9e9e3d33HHHBc+mBla6ZGVIQoibkzWueJCYE0IIIbIIBXxJVCjddkuIiiIxF1HIWkrU6y8WgmDXrFmzsSWMEEIIIQoLibk84/PPP7caQmQhxRNr1BD6xz/+4V5//fXf1BJKBNlPw4cPdzNmzDBxJ4QQQojCQWIuTyA9/YUXXrCK3rNmzTLxFdYUCiEzaeTIkfb/Ro0apRQ3QXPm+vXrWxVxMqeee+65lEWgEEIIIfKfyIs5UrSp1ZNKqnYILkesXw8++KAbPXq0mz9/fs4tVlT3xtpGu5U///nPVuAxNngVi924ceNMwJH9VKtWLRNqqUCNodatW7uGDRu6KVOmmGAU2YHrCgtrrq4v2vrw+WTPCSGEKEwiL+boR4cV66233gqeSQ4WsPfee8+98sor7umnn3bXXnut1eB55513gndkn2+//dbasBAM+/vf/97ts88+ln5erVq14B3Ovfnmm9aQ+fTTT08pLb00CL9jjz3Wbbvttu7ZZ5+1z8w3SvcTzAWxx8D/K3pMbBhoofPYY4+5Z555xiq2ZxPEI5bd559/3gqNUmW+osQbk4qOixBCiPQTaTHHgjJ79myzsL366qvul19+CV5JDBa8rbfe2rVv395EHOnhWMVwYeYCmiUj1HCtIkwRmEuXLjXxRfo6kOwwdepUE3cHHXSQPReCOCUW7sknn3QrV66057DE0LMPt21s3N1OO+3kDjvsMPuchQsXBs9mF74LAoMx/+677+w5mkDjAmYMsikS+CwsnowT19ADDzxgwocHP99///1u4sSJbtWqVcFvlA3XF213iGu86667zGq85557JmzVA4wDmwvGhRY9gPhjjGjDEy92MhHff/+9xVuOHz/eDRo0yO4LLLM04k4VEmY4HsYh3pg88cQTvwkBEEIIkTsiLeZY+LB+sNix6H344YfBK/Fh8cZ1eeCBB7o99tjDYslwVyLoiCnLNlhPsAiOHTvWLGWIMUTbf/7zn+AdG0BMIPBq1669iWuV74O7mMX15ptvdhMmTLCxwI2KRYi/Vdq9Rh8/BCQFKxOBKKY6OceT7IEQQjykCseGKxlhcNNNN9mxI34QU4gPhE8oYLMB3wGxxlgNHTrUxpHjGzVqlLv11lvNWvruu++W+R0ZY8YCF/aQIUMsrhHL3Pnnn+/69OljTbBjeyXGQvwiGxLG5JZbbnGTJ0+23+V6fvjhh22TUZaLlusAkYyo//vf/27fhbE8+eST3VVXXeVOPfXUlIuGck8hrNkc8F0eeeQRE3ZcowMGDLBrbM6cOXlp2RVCiGJli+uuu+5GFtnSkz0CAvcQTXoJtm/RooVZtPIFFrBPPvnErBoID0TR/vvvbz3uEgkCnkcM8WAx4vuxaNFKpVOnTiklFKSTzTff3Cp/Ixio7n3bbbdZyxbqD/FaCFY0LHYszgjREBZ5BAliYd26dSaOEA0IpNNOO83i7qpUqbLJePAagoHPDfv6lYb3IC7uu+8+Eyg8XnzxxU0eLPhhKxoEYipwnvheHNf06dPNqshnIbCpbH7IIYdk7RwgWBk7xgFRz/WApbZz5872PJa0wYMHW4Nr3N/xrinGnOxiLGj0PyRrGDc5PRCpvI67nPOaTKAirBGLnMNPP/3Uffzxx3ZsXN+42rk2uTdjr4eQUERiheXzEYBY4c4880wTcFwruOTj/W4icMfyO1wbK1ascF26dLHCpoDFjjHB1c95T/a9igWuYTYEbI5OOeUUs3wLEQubc+YImuJjPBAiEcz9L7/8sm3iiZ1nTUyVyFrmWHxZeE444QSbRFlYsSCwuKUCiybuTRZuxAoTci6C1FmQcTNiKURY0IIl1voGLPY8SotpCk/+8Y9/NAFx5JFH2uKLdYZEB57j75VeyMO/wXglAgHC36MNDI+zzjrrNw+eR3iSsJEqu+22mzv88MPdoYce6ho0aGAiBIsSggVByOvlASsSVj0saoiZeA9e4+ZAiMTCuGCN5TgQlNxE/IxVjA3M7rvvbhZbxHE80YLYwor317/+1VyZxCP279/fXXjhhfZ3Ut34IJwYkz/84Q/2L8IdNzjnlT6MjEnp6yGExJ0rr7zSXJ+ch379+pkgRVBUVBQjQDkORCZChTHgmsQyjOjee++97buVRyAKIYTILJGdkREuLLrEAuF+xHqCkIlNhCB2iMUcdxUus9ggcBZKFj4WYxYuLF8Iu2yD+ORzWSQTLcAs5iye8cRmuKjWqVPHBAv/JhNYiBZIthizeGOtOv74463VTKJH8+bNTTCmSuxnIt4QUVhTsR5WBKzHlFrhHGNhjffAfYtYRzSWhuNBsCxfvtzENDGFWNrYSSOGk4GYw0XKv4gcRF9FSr7ECkWOgb9BjB0iqix4L9cEgpDElvB4KkN4jki2IUYTMclmAzHHucon67wQQogNRFLMYc1isUEIER+G1YrEAAQeVpIwsB5xd91117kePXq4gQMH2kIdgmDhdy699FKzRLCAY5XJNrjWsDIiSLG0xQOhitBIlpGIy5mFGHGQbEFnrBB0yTJiCb5HIF1//fXuhhtuSPhgbFPNIo4Ftz5CrrJuOgQh1ihixIjvive4++67Xe/evROKI4Q81xLuD8aPGEyun4MPPjh4R3x4L6EHd955pwlx3ChkVY8ZM8bi1ULRnCpc01yDZZ2/WLDAkeSAqCbp4p577rHEC2LaOM+VYcGCBSbmEKqIXe41xiQUe0IIIfKHSM7MxF7hYmMxxYJ0zDHHmOuP2CJcaixEwGJ32WWXWRA67i9i0cjCI2uSRRyIMcK1hOuPOKBsg5DArZnMEoPLD6tNogQPSq2w4LLY8t0RhwiyeK5Uxg3RmOzzWLAZS6wyPBijRA+OvTxgSSIxg7+PNYnvhEUJcZfIsoUI5zuWFihYiThnWAexZiV6IEoSWZTYAGB1CmNZEMwIGMabawSRlQi+OxuBnj172qYAyxWuT2oXjhgxwjYT4caiLBBjnC++D2PC5/K7yTYYnMd69epZvCdWZlzjiK7HH3/chCUxjVjVygvXCK7e8PzyN9n8cM1wTFgzhRBC5A+RS4BgcXvttddskSIhALHB4osAIWaOmCMWRI4Zi0njxo0t8JTECL4jhXevvvpqs4KwYPN3EAlt2rQpV/xXOmBRJEs
QCFovfQ5CGHdEH4s8rk9EEBBzhkUIaxILOc9zzrDscP4Qr7GuW6xFxAYi9Ih5S+Te5PMInkck8yBhofSD54kTQyylAmKNGDPOEdfbiSeeaMKTQE/GAeHE+YoXHxaWw+Accr7TRTj+XDPnnnuuiRcEGAkpJEAgqLiWEmWixsJYEqzK9cb7OSckpJAUAVxrpS2vjAOfhxuY65AxZaOC2OWa5MHnpyKYsbRyzrjWGSPEPZsWPh/3O2IVQVYWXDvUIeQ8kUSD2x4hzbXGZyDmuK5S+VvFANeQEiBEMpIlQJC4RygIyUvMj2S2MyewxjEHMb9W1oMhogNrdNEkQLDoccGz+CM0gUUXsYDblMkU6wrWnNLuKn7GkoEFhYU2zKhs166diaFsQxwX8XLcsLhRE4FIa9mypd3U3OQhnHCSOAiexypHHBuWIqwqTASlxSELDt+7SZMm5tbNJpwvJiysYIjBsDAyAoHnsD4mEk2cT6xF6XbxIWoRPVwziCZg/LD04f7GvR0K51ThPHKucFEjEDkP/C2EW2m49hBciCWEKmKMLFgEFOeQeNBk7vB4MIYkwFx++eWWhcp5xhrNd00FrIMkB3EsYZYy8aU8wiSIbGd9C1GIcJ8h3NioYmRgw06ZJMoLpVISSYhYNvMnb4+FsnQgO9aKrl27un/961+uW7duVnervAtLJkC0cdGzaHHMWCFwz7Eoh0HuLPqhe630rgZBhyBEILDQ8jfKu2CnC0TpjTfeaBaQCy64IOkODIskViTEG+eDxR+RgCBg8Q8tWghcxC0usVhLEKKRArAs6rgES+8QMw0igeNFYIbWPI6T5zgPWK7iwXklLgyrHbGP5XXrJgOBhUDhmLA2Mf6MM65XPidVq2MyuN4Yc66x0mKV6xahxeeG4ho3NOeQa5hzlOyaSBU+H2trKtY0rifiL0NXPP9ynLha+S6cB/ErWCqpJ0gCFRbkzp07B68IsQE20YT6XHPNNbaRBu4l1jHuc0pFMT9w7ZARzzpLYl465zoRDVgTqQ1KgfZhw4ZZmatUiZxlDmsF1hNKSoQuNxYcLD1YWHhgmsTSE28h5DmsMFizcKvmSsgBNzk3LMdS1qLNQowlCwsiCwcmeRZbymDEuiYRAIi7WCGHO5PAfCwqCPRsCznge2IRjRVIiBvOQSIhB4gIJkA6dqR7cuPcY4GKFU2MM+OXDiEH/F2sdfGsjpwjPivWSsq55Frm3JZ1TaQKYjVVtyiiD+t17DXEvxyPhJwQ5Se8j2PvZ/7P3M09xWaLMA02khhRmA8k5ER5UWpaDsD6QtkURBYuR4RpKiBkabJPQVgW3VRcZ1hV2AUSy8XvZjsusLIgLLBclrYcCyFEFGAzhAUudoMdwhqA14J5jtAgPE8YIoQoLxJzOYDkBHZgJJV07NixXDFI7NgoKEuCB+7JsmASoWYawfUE+EcNBGwq31MIUdwQY0bYBtn8+QDxzZRNolwQCVF01bnjjjs2/gxhZQW8FsTJ4iXI9MaV0AD6QBNCxUZfxIdxIhwL12cUkJjLAQSRI67ITK1owVx2efEyP0uDmIvn4hNCiEKCNoN4H+69996kJX2yBZnpxL8Sr4z1jVhtxCYxuXhn+JmkMNYD5nPmaZ5H4IUxqpmAQuv/93//Z4X0iZEVv4WsUpJSMLYwXlFAYi4HYF1DxCG0hBBCVA7cldR2pIQO1pRUNrqZhvJLlMEimQE3KgkOFDlHSBHuQhwryWzEAwPx0LyXGGoEXrpiZmOhmgE1KMP6npn4jEKAwutkFZNtjOU0CkhNCCGEiCzEmZGpz8JLUhwJBPHi07IN4SEcDzFwbOD5P6EulDzCCkdiVLNmzTYmFhFSQggNhfAzkZhHvTuslnRNIuaa48uHcco3sJziCqc8DAXZqfcWBSTmhBBCRJYXXnjBsvWphXjLLbe4E044IXglPyDmCrcdlrBYsmkVw2VLQXnq2FHtAZGCdVAxc5vCuaKnN7X/qKJw0UUXbayake9IzAkhhIgkdDi56667LPaLhZcHyQRiU6hpSt0y3M8UFB85cqTVMMsHd3Q+gXUX9yq1/qjHmmqliXxAYk4IIUTkoEg8Qo6+xnRboT+3+C0UXierlr7RxO3ROhJIthC/QnIKzREo5kw9VtrzRQmJOSGEEJECtyXuMMp9NG3a1BIJiEXDUkd7PLEB3IZkZeJipd3jJZdcYvF7dJ2gXyzjKDZ0vuFaevXVVy1xpUuXLpFrWygxJ4QQIlLgNsTaRDwTGaD05Qb6bdPyUGxg6tSpFsxP4gVuQ7pOELtHByHGSWJuA6+88oq5num+c9lll23sSx0lJOaEEEJEBuq2DRkyxJWUlJgFhTaHIVhYqOkmNrgNEbzUuqN/cJiVSdIDIk5CbgO4VXHXkxVNzCUWzCgiMSeEECISIEAeffRR609NK0SsTbFlPMho5VHsWZqMExY5Mn0pTk8MWDhOZNHSazmd/Z+jCl1DKNeCZa5169ZWJJiyLVFEYk4IIUQkoMsDRW+rV6/u+vTps7FGWwiddZo3bx78VLzQrmvUqFHmLiR7FfdhCBmsp59+ugX4F3vhemIJiZWj5SXjFOW+uBJzQggh8h7chvQ2JTuTOLmWLVsGr/wKXRWwQBWzxYker7fddpv1Fu3Zs6cJ3FgYG3rA1qxZs6jH6Z133nF/+9vf3A8//OB69OhhdeWijMScEEKIvIasTCxytOtq1aqVxcrFq5FGP9Q1a9YEPxUfCDjqpNFE/4wzzrByLaXHCRcs9dR4T7HGzXGNkPBAWRtKtYTlWqKMxJwQQoi8hmbndHnAbUhv0z333DN4ZVMoLfHyyy8HPxUXxAnSPH/s2LGWtUpWZrVq1YJXf4X6csVemoRyLU899ZT1waVcS1S6PCRDYk4IIUTe8tFHH7k777zTigRTT46G9IlYtmyZPYqRefPmmdsQ69wVV1zhjjjiiOCV3/Ljjz/aoxihrA3JIVgsca8edthhwSvRRmJOCCFEXkI8E+U1cAvS5eHss89O2hyeQq8UxS22bNbVq1e7ESNGuDlz5tgY4TZMltxArFwxxsutWrXKricKS1Ou5eSTTy6YlmYSc0IIIfISGp4//PDD1hwe92o8t2EsLVq0sAr+xQYN9BmrZs2ambWJbhiJQOSRvUkP22LKZqVYMtcSZW2oJXfBBRe4nXfeOXg1+kjMCSGEyDuwMtEcnlpgvXr1coceemjwSmLq1Klj8WLFZHV644033D333GOZvAg54sCSgSWKxAjKkxSTmCNGEOtllSpVrAwJ9QgLCYk5IYQQecV//vMfy16lfMTFF1/s2rRpE7ySHAL6iymo/8svvzQht2jRIisMHK9cSzwo3xJbbLnQocsDcXKffPKJu/DCC62QcqEhMSeEECJvQIyFbsNGjRpZi6VU3WHEQiEAi0HQ8R0peDthwgRzL3fu3NnttNNOwauJIZuVosJktBbDOJHowTj985//tC4PxMoVopCVmBNCCJE3kOyAtYlkhr59+7qDDz44eKVsqJ02a9asonCzUoIFaxNlNSgOTAxcKiDgpk+f7mbOnFkUYo
6WZvfdd591wejevbu54gsRiTkhhBB5wRdffGFNz+liQN/VVN2GIYgTLDGFLlLCci1Lly61+K/yJn389NNPVoi50CkpKXFDhgyxJvqUtTnqqKOCVwoPiTkhhBA5Z926ddZPFLfhSSedZF0eyusOO/DAA62wMJmLhQoJIViaCOgnlpCSLVtttVXwatlgtSyGdl6ME90w6BpCuRaSPqLaRD8VJOaEEELknFdeecWSHvbaay+zNtWuXTt4JXWaNm1qVqpCXbSpn0e827333mvZmBQHZrzKQzFksxIXyKbgwQcfdIcffrh1edhjjz2CVwuTlM4kJ7+QFa0QQlQE5sViKu+QKRYvXuzuvvtuK35LE33qgFUECgYTa1eoFifcz7ihKaaMGxrxWl4Ymxo1alhLtEIdJxJhcEPjbqet2R//+MfglcJls3Xr1nncQJzcWBYuXGipzgSUkv1BIOqOO+5YFAGTQghRFiyE//3vf62Y7Ysvvujuv/9+yygU5YMx7N+/v7vtttusOTyirqJWFFp+YZUhKSBfRDZlQ/r06eOuueYa16RJk+DZ8rN27Vpbh0kO6dixoxs8eLCrWrVq8GrqsIa/9dZbNj60/Co0QYeeYbxHjx5tZW3uuOMO0y5RgDjGq666yiyK1Fjs0KFD8ErZpCTmMOdiqqQoITeKEEIUOyyGBJJPnTrVrVixwhYPNr4idRAWlCAJm8ITC5as92pZIKqpUXfaaaeZhS4fSIeYY5weffRRq5FWv3592zhUtKcoguHmm2+2ODuOKVl7tKhBrCRxcldeeaVr2LChXU/EUUaFjIs5CuzR62377beXmBNCCB8sGri7iF+aO3euiblOnToFr4pUwB1GHTnWm1tvvdVimypjURszZow12qcERSo117JBOsTc22+/bRuFzz77zJrpY5mrKAiGa6+91kIEbrzxxoIScyQ7kDhD9ioW3nbt2gWvRIPKiDkyiLzly5d7pVmwYIHnX3h0K/Z69uzp+SIueEUIIQSsX7/e8ze6ni/sPF/MBc+KVFizZo3nizdvq6228rp27eqtWrUqeKXiPPXUU96IESO8tWvXBs/knpKSEq9NmzbezJkzg2fKxxdffOH5Qs7W4h49enjfffdd8ErF+Omnn7x+/fp5/fv393zxEDwbfVasWOG1bdvWrqfevXt733//ffBKdODcXH755d6uu+7q+RuT4NnUSGkLhOmSHagQQohfoZyGvBXlhzXFF17W6YGeq1jkKhL/VRoC3emGQCJEIYAbf/z48Rub6GNxxENWGUhoxA198skn2/8LAa4nyto899xzljyDO7qY2pVByvZsX/gF/xNCCAGaFysGbumhQ4eamECgEJOdDvbee2/rhFAorkO6NIwcOdLtsMMOlmhTt27d4JWKgxv7kEMOsYb8hZL8gHuVcWJD0KNHD1evXr3gleJBOfVCCCGyxueff27xQMSSUfD21FNPzZvM03yCpBrivj7++GOLlzvllFOCV0QsxBHecsstlsVKl4fydsMoFHQHCSGEyAqE64wdO9ZNmjTJWithbUpn2Yj58+db0h7uyShDSzIyMWkOf8wxx1hNuXTVeiUs4JlnnjGXZNRLjdHlYcSIEe61116zbhgkP5SnG0YhITEnhBAiK+AOI7Zpl112sebw++23X/BKeliwYIGJOcRQlKFVF2KOensV7YaRCAQcYzRnzpzIizma6JP5iXu9d+/eBd/lIRkSc0IIITIOzeGJa1q+fLmVXMiEOwyrE+UdogzjRPmRNWvWmKUJy1y6YYyiPk6Us8ENTTFl4uSoK1fMSMwJIYTIKGT9YkGhqC/iJFPZhvvss48V1Y2qq41uGFjkXn/9dde6dWvrKJLuzFySHrBgVa9ePXgmelBHjuLAWBiJuWzfvn3Rx11KzAkhhMgokydPdg8//LD1AyVIvU6dOsEr6aVRo0buhBNOiGxpEmLkEL24VYknLF3MPx2QQXz22WdbI4ColiahVMtDDz3kfv/735sbmvZtxY7EnBBCiIzx4YcfWvYqbbZo29W8efPglfSDRS5diQLZ5t1337Vx+vbbb60dFcI0E2CZq1mzpgnFKJYmoa/s8OHDLd7viiuuSFtZm6gjMSeEECIjfPfdd9YYnsSHtm3bmkUok2KL8hQrV66MXGD/V199ZQJlxowZNka0ocqU1Yyxoc4f4jFqdRKJIxwyZIgdO656ih8XSq28yiIxJ4QQIu0gGiZOnOgeeeQRa3b+17/+NePZhrNnz3ZTpkyJVMciuhc8/vjj1hGD5vlYL3fdddfg1fRDkgifxbmJUvcSjhVX/dNPP21lbejpu/POOwevCok5IYQQaQfrCUHq1AJDoNBqK9OsWrXKishGSaRQIoQsXyxxxMnRmSHTRDGbFesu19N2221ncXIHHHBA8IoAiTkhhBBphfg4Fl7aUZ1zzjkWbJ+NFls77bSTBcNHJbORccIN/d5771m5lmz1S2V8opT9GXbDKCkpMQtvsXZ5SIbEnBBCiLSB25BsQ9yruA3JXs2k2zCWxo0bu5NOOikS2ax0qRgzZoy5WI899lhzGyJGMw1iEXFNDGMUslkp1zJ69GjrGhI20d9+++2DV0WIxJwQQoi0QRA/RW9pDk+2IU3dswWlTyjrEYVG+2+++aZZ5apVq2bjlC23IRY5MkAPPfTQSCQPvPrqq2bl3WuvvVzfvn2tlqD4LRJzQggh0gLxagg5ypHQvQC3YbYFQxQyNJcsWWLjRBN9LE3ZdhsSx/j111/nfWzh+++/b+VayPa95JJLzDIn4iMxJ4QQotLQD3XcuHHWxL1FixauU6dOZp3LJvPmzTOLVz73ZsVt+Oijj1pf0VatWlmXh2222SZ4NfMgdidMmGBZofks5mjTRQFlmuizKTjvvPOKtol+KkjMCSGEqDRTp061WmmUH7n00kutOn+2WbRokRWVJR4t3wjj08Im+jSHZ5z4N5sQ0zh//nyzDuazm5USM/fee6+1Z2OcKHQsEiMxJ4QQolIgDCjmShP9iy++2P35z38OXskuWJoQK/kGogkxF3bDwMVJYggWzFxADUAsdPkq5v7973+7O++80yysZK/+6U9/Cl4RiZCYE0IIUWFwh2FpwpKC25BYuVwlIOy3336uQYMGeZcAwfHQnYKEB9zAdC7AbZir49xxxx2z7gJPFcq1YOGliT5jRMmWYmyiX16hLTEnhBCiwtAcnqbnuMPIyiSjNFdQmBhr17bbbhs8k3sQIliYiCWkFAlFgQnmr169evCO7ELcGW5LLKj5JnopZEx3CmIvOZdY5RCexQQiDuvyunXrgmdSQ2KukmDWJzMoatW0KwMTUz4HGGcbbjzaB0Wp6rwQ6QB32NChQ23hISsz1+4wxEm+BclzPF9++aX7xz/+YcKue/furkmTJsGruYHCylWqVAl+yh+wxlEcmDpydHmgfEqxwTWCtfvzzz8vV49hiblKsnjxYtevXz+rhVMMcJHReubll1+ucAkAhA8p+QsWLLBJLspws73xxhvmZiIOpiIwjlQ4Jyj5k08+ycuYHyFKQxN9BArdC2gOT
[base64-encoded PNG figure data omitted])\n\nFigure A-3 shows:\n- The mathematical expression for the square wave\n- Its expansion into the Fourier’s series\n- Its graphic representation.",
"_____no_output_____" ] ], [ [ "st = [ 2*pi*( t( 1 : fs/2) ) 2*pi*( t( fs/2+1 : end ) - 1) ];\n\ng3 = 0;\n\nfor i = 1:5\n g3 = g3 + (-1)^(i-1) * 2 * sin( 2*pi*i*t )/i;\nend\n\nfigure();\nplot(t,st);hold on; plot(t,g3); hold off;grid on; title('Sawtooth Wave')", "_____no_output_____" ] ], [ [ "---\n\n## Discussion\n- Are synthesized waves equal their original waves?\n- Now, try to increase the number of synthesized harmonics. Is there any difference?\n- Among the three synthesized waves, which one is closer to its original wave? and Why?\n- Using ```fft```, sketch the frequency spectrum of the three synthesized waves. for more information see ```Welcome to EE472.ipynb``` file.\n- Examine the form of the synthesized square wave at each discontinuity. What do you notice? *Search for Gibbs phenomena.\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e724b160bc84df6b08d541255f51b32019b45878
20,043
ipynb
Jupyter Notebook
class 3.ipynb
suraviregmi/GIT_pythonsession
9d4eca7b364b8308afcd64a9d47a59d5cf878435
[ "MIT" ]
null
null
null
class 3.ipynb
suraviregmi/GIT_pythonsession
9d4eca7b364b8308afcd64a9d47a59d5cf878435
[ "MIT" ]
null
null
null
class 3.ipynb
suraviregmi/GIT_pythonsession
9d4eca7b364b8308afcd64a9d47a59d5cf878435
[ "MIT" ]
null
null
null
17.520105
530
0.485756
[ [ [ "# session 3", "_____no_output_____" ] ], [ [ "range(0,9)", "_____no_output_____" ], [ "for i in range(0,9):\n print(i)", "0\n1\n2\n3\n4\n5\n6\n7\n8\n" ], [ "list(range(0,9))\n#range is iterator type\n", "_____no_output_____" ], [ "int('1')", "_____no_output_____" ], [ "str(1)", "_____no_output_____" ], [ "int(a)\n#throws error", "_____no_output_____" ], [ "list ('abcd')\n#iteratorr type", "_____no_output_____" ], [ "tuple('kathmandu')\n#iterator type can only be passed", "_____no_output_____" ], [ "list(('255','255','255','0'))", "_____no_output_____" ], [ "tuple([255,255,254,0])", "_____no_output_____" ], [ "#tuple and list are linked data structure", "_____no_output_____" ], [ "tuple({'abc':123})", "_____no_output_____" ], [ "dict(('name','hari'))", "_____no_output_____" ], [ "list({'abc':123})", "_____no_output_____" ], [ "for item in {'a':1}:\n print(item)", "_____no_output_____" ], [ "dict([('name','hari')])", "_____no_output_____" ] ], [ [ "# list comprehension", "_____no_output_____" ] ], [ [ "lst=[]\nfor i in range (0,50):\n if i % 5==0:\n lst.append(i**2)", "_____no_output_____" ], [ "lst", "_____no_output_____" ], [ "[i**2 for i in range (0,50) if i % 5 == 0]\n# this is list comprehension\n# initially for loop then goes to if and if statement the condition is true it comes to the 1st place\n# working as sme as the upper loop", "_____no_output_____" ], [ "lst =[i**2 if i%2==0 else i for i in range (0,9)]\n#everything before for is a single expression\n\n", "_____no_output_____" ], [ "lst", "_____no_output_____" ], [ "i=1\ni**2 if i%2==0 else i\n# this is list comprehension", "_____no_output_____" ], [ "i=2\ni**2 if i%2==0 else i", "_____no_output_____" ], [ "{i for i in range(0,19)if i % 2 == 0 or i % 3 == 0 }\n#this is set comprehension", "_____no_output_____" ], [ "{i : i**2 for i in range(0,19)if i % 2 == 0 or i % 3 == 0 }\n#this is dictionary comprehension", "_____no_output_____" ] ], [ [ "# library", "_____no_output_____" ] ], [ [ "# to read csv file", "_____no_output_____" ], [ "fp= open('netdata.txt','r')", "_____no_output_____" ], [ "dir (fp)", "_____no_output_____" ], [ "fp.read()", "_____no_output_____" ], [ "fp.read()\n#doesnt show nything because the pointer is atEOF", "_____no_output_____" ], [ "fp.seek(0)\n#it places the pointer to the given position in this case 0", "_____no_output_____" ], [ "fp.read()", "_____no_output_____" ], [ "fp.seek(0)", "_____no_output_____" ], [ "fp.readline()", "_____no_output_____" ], [ "fp= open('notepad1.txt','r')", "_____no_output_____" ], [ "line=fp.readline()", "_____no_output_____" ], [ "line", "_____no_output_____" ], [ "line = line.strip()\n#strips the line feed ie /n", "_____no_output_____" ], [ "line", "_____no_output_____" ], [ "line=line.strip(',')\n#removes comma at the start", "_____no_output_____" ], [ "line", "_____no_output_____" ], [ "items= line.split(',')", "_____no_output_____" ], [ "items", "_____no_output_____" ], [ "import csv\n#", "_____no_output_____" ], [ "fp.seek(0)", "_____no_output_____" ], [ "header='Sort Order,Common Name,Formal Name,Type,Sub Type,Sovereignty,Capital,ISO 4217 Currency Code,ISO 4217 Currency Name,ITU-T Telephone Code,ISO 3166-1 2 Letter Code,ISO 3166-1 3 Letter Code,ISO 3166-1 Number,IANA Country Code TLD\\n'", "_____no_output_____" ], [ "headers=header.strip().split(',')", "_____no_output_____" ], [ "headers", "_____no_output_____" ], [ "reader =csv.DictReader(fp)", "_____no_output_____" ], [ "reader", "_____no_output_____" ], [ "dir(reader)", "_____no_output_____" ], [ 
"reader.fieldnames", "_____no_output_____" ], [ "countries = list(reader)\n#because is used bfore it was overwritten\n\n", "_____no_output_____" ], [ "countries", "_____no_output_____" ], [ "for line in reader:\n print(line)\n #changed every line to dictionary\n ", "_____no_output_____" ], [ "countries=list(reader)", "_____no_output_____" ], [ "var= none\nfor row in countries:\n if row['Type']=='Independent State':\n var= row", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e724cf97c74f12babca5f5ae7b3188c474080d5b
5,108
ipynb
Jupyter Notebook
notebooks/textract.ipynb
RyoWakabayashi/jupyter-boto3-examples
9fb2e0d887cd3b2ee18fc7b5264d95213fd610c9
[ "MIT" ]
null
null
null
notebooks/textract.ipynb
RyoWakabayashi/jupyter-boto3-examples
9fb2e0d887cd3b2ee18fc7b5264d95213fd610c9
[ "MIT" ]
null
null
null
notebooks/textract.ipynb
RyoWakabayashi/jupyter-boto3-examples
9fb2e0d887cd3b2ee18fc7b5264d95213fd610c9
[ "MIT" ]
null
null
null
27.170213
113
0.412099
[ [ [ "# Rekognition によるテキスト検出", "_____no_output_____" ], [ "## 必要なパッケージのインポート", "_____no_output_____" ] ], [ [ "import io\nimport boto3\nfrom PIL import Image, ImageDraw", "_____no_output_____" ] ], [ [ "## クライアントの作成", "_____no_output_____" ] ], [ [ "client = boto3.client('textract', region_name='us-west-2')", "_____no_output_____" ] ], [ [ "## ローカル画像からの検出", "_____no_output_____" ] ], [ [ "img_pil = Image.open('imgs/running.jpg')\nimg_pil", "_____no_output_____" ], [ "img_bytes_io = io.BytesIO()\nimg_pil.save(img_bytes_io, format=\"jpeg\")\nimg_bytes = img_bytes_io.getvalue()\n\nresponse = client.analyze_document(\n Document={\n 'Bytes': img_bytes\n },\n FeatureTypes=[ \n 'TABLES',\n 'FORMS'\n ]\n)\n\nresponse", "_____no_output_____" ], [ "def show_texts(img, blocks):\n img_width, img_height = img.size\n draw = ImageDraw.Draw(img)\n\n for block in blocks:\n if 'Text' in block.keys():\n print(block['BlockType'], ':', \n block['Text'], ':', str(round(block['Confidence'], 1)) + ' %')\n elif 'Relationships' in block.keys():\n text = ''\n confidence = 0\n for relationship in block['Relationships']:\n if relationship['Type'] == 'CHILD':\n for id in relationship['Ids']:\n for rel_block in blocks:\n if rel_block['Id'] == id and 'Text' in rel_block.keys():\n text = rel_block['Text']\n confidence = rel_block['Confidence']\n break\n if text != '':\n break\n if text != '':\n break\n if text != '':\n if block['BlockType'] == 'CELL':\n print(block['BlockType'], ':',\n block['RowIndex'], '行 :',\n block['ColumnIndex'], '列 :',\n text, ':', str(round(confidence, 1)) + ' %')\n else:\n print(block['BlockType'], ':', text, ':', str(round(confidence, 1)) + ' %')\n else:\n if block['BlockType'] == 'CELL':\n print(block['BlockType'], ':',\n block['RowIndex'], '行 :',\n block['ColumnIndex'], '列')\n else:\n print(block['BlockType'])\n else:\n if block['BlockType'] == 'CELL':\n print(block['BlockType'], ':',\n block['RowIndex'], '行 :',\n block['ColumnIndex'], '列')\n else:\n print(block['BlockType'])\n box = block['Geometry']['BoundingBox']\n draw.rectangle(\n (box['Left'] * img_width, box['Top'] * img_height, # 左上座標\n (box['Left'] + box['Width']) * img_width, (box['Top'] + box['Height']) * img_height), # 右下座標\n outline=(255, 0, 0) # 色\n )\n\n return img", "_____no_output_____" ], [ "show_texts(img_pil.copy(), response['Blocks'])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e724d8f7192b4011ef8b6874d0359e9b762a3bde
4,423
ipynb
Jupyter Notebook
python-standard-library/collections/namedtuple.ipynb
bugbytes-io/notebooks
7ebf6ca0a56a01aa04295cfa5c91d55f6a01a315
[ "MIT" ]
3
2021-12-08T19:36:13.000Z
2022-03-15T06:21:19.000Z
python-standard-library/collections/namedtuple.ipynb
bugbytes-io/notebooks
7ebf6ca0a56a01aa04295cfa5c91d55f6a01a315
[ "MIT" ]
null
null
null
python-standard-library/collections/namedtuple.ipynb
bugbytes-io/notebooks
7ebf6ca0a56a01aa04295cfa5c91d55f6a01a315
[ "MIT" ]
7
2021-11-16T22:31:07.000Z
2022-03-30T09:49:08.000Z
4,423
4,423
0.678273
[ [ [ "import math\n\npoint = (24.6, 10.2)\n\n# calculate the hypotenuse\n# hypotenuse = sqrt((x**2 + y**2))\nhypot = math.sqrt(point[0] ** 2 + point[1] ** 2)\n\n# math.hypot - returns Euclidean Norm!\nassert hypot == math.hypot(*point)", "_____no_output_____" ] ], [ [ "`namedtuple` - assigns meaning to each position in a tuple. This leads to more readable and maintainable code, as instead of using integer-indexing, we can extract the attributes from a tuple by name.\n\nLike normal tuples, namedtuples are immutable and hashable! They are also ordered.", "_____no_output_____" ] ], [ [ "from collections import namedtuple\n\nPoint = namedtuple('Point', 'x y')\npoint = Point(x=24.6, y=10.2)\n\nhypot = math.sqrt(point.x**2 + point.y**2)\nassert hypot == math.hypot(*point)\n\n# you CAN still index into the namedtuple\npoint[0]", "_____no_output_____" ] ], [ [ "We can easily convert a dictionary to a namedtuple, and vice versa.", "_____no_output_____" ] ], [ [ "# convert namedtuple to dict\nd = point._asdict()\n\n# convert dict to namedtuple\nPoint(**d) # .x\n\n# convert namedtuple to list/set\nlist(point)\n\n# sum a namedtuple elements (if all numerical)\nsum(point)", "_____no_output_____" ] ], [ [ "A `namedtuple` is a regular Python class, and can be subclassed", "_____no_output_____" ] ], [ [ "class Point(namedtuple('Point', 'x y')):\n def hypot(self):\n return math.sqrt(self.x**2 + self.y**2)\n\npoint = Point(x=24.6, y=10.2)\npoint.hypot() == math.hypot(*point)", "_____no_output_____" ] ], [ [ "Finally, we can use the Python `typing` module to define a `NamedTuple` type.", "_____no_output_____" ] ], [ [ "from typing import NamedTuple\n\nclass Point(NamedTuple):\n x: int\n y: int\n\n def hypot(self):\n return math.sqrt(self.x**2 + self.y**2)\n\npoint = Point(x=24.6, y=10.2)\npoint.hypot()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e724df70524ec533145901d41d61a3671fd51be1
285,343
ipynb
Jupyter Notebook
EDA/Exploratory Data Analysis for GDP _ (1).ipynb
direct2subhajit/Machine-Learning-for-Statistical-Arbitrage---Using-News-Media-to-Predict-Currency-Exchange-Rates
cdd7acdce087b84fc2e0f202bbb3cee4e9190c1b
[ "Apache-2.0" ]
1
2021-09-05T22:31:35.000Z
2021-09-05T22:31:35.000Z
EDA/Exploratory Data Analysis for GDP _ (1).ipynb
direct2subhajit/Machine-Learning-for-Statistical-Arbitrage---Using-News-Media-to-Predict-Currency-Exchange-Rates
cdd7acdce087b84fc2e0f202bbb3cee4e9190c1b
[ "Apache-2.0" ]
null
null
null
EDA/Exploratory Data Analysis for GDP _ (1).ipynb
direct2subhajit/Machine-Learning-for-Statistical-Arbitrage---Using-News-Media-to-Predict-Currency-Exchange-Rates
cdd7acdce087b84fc2e0f202bbb3cee4e9190c1b
[ "Apache-2.0" ]
null
null
null
201.65583
34,112
0.894695
[ [ [ "## Importing Packages\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "# Uploading data\ngdp = pd.read_csv ('GDP_YearWise.csv')\ngdp", "_____no_output_____" ], [ "# Dropping column\ngdp= gdp.drop('Unnamed: 0',axis=1)\ngdp", "_____no_output_____" ], [ "# Checking the data type\ngdp.dtypes", "_____no_output_____" ], [ "gdp.columns", "_____no_output_____" ], [ "gdp.shape", "_____no_output_____" ], [ "duplicate_rows_gdp = gdp[gdp.duplicated()]\nprint(\"number of duplicate rows: \", duplicate_rows_gdp.shape)", "number of duplicate rows: (0, 8)\n" ], [ "gdp.count()", "_____no_output_____" ] ], [ [ "## Finding the null values", "_____no_output_____" ] ], [ [ "gdp.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 4431 entries, 0 to 4430\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Country Code 4431 non-null object \n 1 Country Name 4431 non-null object \n 2 Subject 4431 non-null object \n 3 Measure 4431 non-null object \n 4 Year 4431 non-null int64 \n 5 Unit 4431 non-null object \n 6 PowerCode 4431 non-null object \n 7 Value 4431 non-null float64\ndtypes: float64(1), int64(1), object(6)\nmemory usage: 277.1+ KB\n" ] ], [ [ "### Dropping the missing or null values", "_____no_output_____" ] ], [ [ "# Finding the null values.\nprint(gdp.isnull().sum())", "Country Code 0\nCountry Name 0\nSubject 0\nMeasure 0\nYear 0\nUnit 0\nPowerCode 0\nValue 0\ndtype: int64\n" ] ], [ [ "### Detecting Outliers", "_____no_output_____" ] ], [ [ "gdp.boxplot(column=[\"Year\"])\nplt.show()", "_____no_output_____" ], [ "gdp.boxplot(column=[\"Value\"])\nplt.show()", "_____no_output_____" ], [ "gdp.describe()", "_____no_output_____" ], [ "sns.pairplot(gdp,hue ='Year')", "_____no_output_____" ] ], [ [ "### Countplot", "_____no_output_____" ] ], [ [ "gdp['Value'].value_counts()", "_____no_output_____" ], [ "sns.countplot(x= gdp['Value'])", "_____no_output_____" ], [ "gdp['Country Name'].value_counts()", "_____no_output_____" ], [ "plt.figure(figsize=(12,10))\ngdp['Country Name'].value_counts().plot.pie(autopct=\"%1.1f%%\")\nplt.show()", "_____no_output_____" ], [ "gdp['Country Name'].value_counts()", "_____no_output_____" ], [ "plt.figure(figsize=(12,10))\ngdp['Country Code'].value_counts().plot.pie(autopct=\"%1.2f%%\")\nplt.show()", "_____no_output_____" ], [ "sns.countplot(x= gdp['Country Code'])", "_____no_output_____" ], [ "gdp.hist(figsize=(10, 8), bins=20, xlabelsize=8, ylabelsize=8)", "_____no_output_____" ] ], [ [ "### Violinplot", "_____no_output_____" ] ], [ [ "sns.violinplot(x=\"Year\",y=\"Country Name\",data=gdp)", "_____no_output_____" ] ], [ [ "### Heat Maps", "_____no_output_____" ] ], [ [ "corr = gdp.corr()\ngdp.corr()", "_____no_output_____" ], [ "# Finding the relations between the variables.\nplt.figure(figsize=(10,8))\nc= gdp.corr()\nsns.heatmap(c,cmap=\"BrBG\",annot=True)", "_____no_output_____" ] ], [ [ "### Group plot", "_____no_output_____" ] ], [ [ "gdp.groupby('Country Name')[['Year','Value']].sum().plot(kind='bar',color=['yellow','blue'],figsize=(15,6))\nplt.ylabel('Year and Value')\nplt.legend()", "_____no_output_____" ], [ "gdp.groupby('Country Code')[['Year','Value']].sum().plot(kind='bar',color=['yellow','red'],figsize=(10,6))\nplt.ylabel('Year and Value')\nplt.legend()", "_____no_output_____" ] ], [ [ "### scatter plot", "_____no_output_____" ] ], [ [ "# Plotting a scatter plot\nfig, ax = plt.subplots(figsize=(10,6))\nax.scatter(gdp['Country 
Name'], gdp['Value'])\nax.set_xlabel('Country Name')\nax.set_ylabel('Value')\nplt.show()", "_____no_output_____" ], [ "# Plotting a scatter plot\nfig, ax = plt.subplots(figsize=(10,6))\nax.scatter(gdp['Subject'], gdp['PowerCode'])\nax.set_xlabel('Subject')\nax.set_ylabel('PowerCode')\nplt.show()", "_____no_output_____" ], [ "# Let's specifically look at the 2 variables we know\n\nplt.figure(figsize = (10,4))\n\n# Year\nplt.subplot(121)\nsns.distplot(gdp['Year'])\n\n# Values\nplt.subplot(122)\nsns.distplot(gdp['Value'])\n\nplt.tight_layout(pad = 4)\nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e724f0df681b69bd6e1405d9001bd379591d8882
59,738
ipynb
Jupyter Notebook
inst/prototypes/mash_regression_univariate_degenerated.ipynb
zouyuxin/mmbr
7a7ab16386ddb6bb3fdca06b86035d66cde19245
[ "MIT" ]
null
null
null
inst/prototypes/mash_regression_univariate_degenerated.ipynb
zouyuxin/mmbr
7a7ab16386ddb6bb3fdca06b86035d66cde19245
[ "MIT" ]
null
null
null
inst/prototypes/mash_regression_univariate_degenerated.ipynb
zouyuxin/mmbr
7a7ab16386ddb6bb3fdca06b86035d66cde19245
[ "MIT" ]
null
null
null
34.292767
194
0.511048
[ [ [ "# Running \"degenerated\" MASH computation\n\nThis is prototype to a unit test to verify implementation of mash computation is correct, by comparing it to univariate case when $Y$ has one column and prior covariance matrices is fixed.", "_____no_output_____" ], [ "## Simulate data", "_____no_output_____" ] ], [ [ "set.seed(1)\nn = 1000\np = 1000\nbeta = rep(0,p)\nbeta[1:4] = 1\nX = matrix(rnorm(n*p),nrow=n,ncol=p)\ny = X %*% beta + rnorm(n)\n#' res =susie(X,y,L=10)\nlibrary(mmbr)", "Loading required package: mashr\nLoading required package: ashr\n" ] ], [ [ "## Run univariate computation", "_____no_output_____" ] ], [ [ "prior_var = 0.2 * as.numeric(var(y))\nresidual_var = as.numeric(var(y))\ndata = mmbr:::DenseData$new(X,y)", "_____no_output_____" ], [ "residual_var", "_____no_output_____" ], [ "prior_var", "_____no_output_____" ], [ "m1 = mmbr:::BayesianSimpleRegression$new(ncol(X), residual_var, prior_var)\nm1$fit(data, save_summary_stats = T)", "_____no_output_____" ], [ "head(m1$posterior_b1)", "_____no_output_____" ], [ "m1", "_____no_output_____" ] ], [ [ "## Run multivariate computation", "_____no_output_____" ] ], [ [ "# Assuming 1 out of $J$ are causal, we place a null weight $1-1/J$ a priori.\n# This will lead to some shrinkage\n# null_weight = 1 - 1 / ncol(X)\nnull_weight = 0\nprior_covar = mmbr:::MashInitializer$new(list(0.2 * cov(y)), 1, prior_weights=1 - null_weight, null_weight=null_weight, alpha = 0)\nresidual_covar = cov(y)", "_____no_output_____" ], [ "prior_covar$prior_variance", "_____no_output_____" ], [ "residual_covar", "_____no_output_____" ], [ "m2 = mmbr:::MashRegression$new(ncol(X), residual_covar, prior_covar)", "_____no_output_____" ], [ "m2$fit(data, save_summary_stats = T)", "_____no_output_____" ], [ "head(m2$posterior_b1)", "_____no_output_____" ], [ "m2", "_____no_output_____" ] ], [ [ "All quantities seem to agree now.", "_____no_output_____" ], [ "## Run ASH to confirm\n\nASH works well.", "_____no_output_____" ] ], [ [ "library(ashr)", "_____no_output_____" ], [ "a.out = ash(as.vector(m1$bhat), as.vector(m1$sbhat), mixcompdist = 'normal')\nhead(get_pm(a.out))", "_____no_output_____" ] ], [ [ "## Run with fixed prior directly from MASH", "_____no_output_____" ] ], [ [ "prior_covar$mash_prior", "_____no_output_____" ], [ "library(mashr)", "_____no_output_____" ], [ "data = mash_set_data(m2$bhat, m2$sbhat)\nm.c = mash(data, g = prior_covar$mash_prior, fixg = TRUE, algorithm ='Rcpp')", " - Computing 1000 x 2 likelihood matrix.\n - Likelihood calculations took 0.00 seconds.\n - Computing posterior matrices.\n - Computation allocated took 0.00 seconds.\n" ], [ "head(get_pm(m.c))", "_____no_output_____" ] ], [ [ "## Run from MASH with canonical priors but weights learned from data \n\nVery similiar results to what I got with fixed `g` earlier.", "_____no_output_____" ] ], [ [ "U.c = cov_canonical(data)\nprint(names(U.c))", "[1] \"identity\" \"singletons_1\" \"equal_effects\" \"simple_het_1\" \n[5] \"simple_het_2\" \"simple_het_3\" \n" ], [ "m.c = mash(data, U.c, , algorithm ='Rcpp')", " - Computing 1000 x 109 likelihood matrix.\n - Likelihood calculations took 0.01 seconds.\n - Fitting model with 109 mixture components.\n - Model fitting took 0.17 seconds.\n - Computing posterior matrices.\n - Computation allocated took 0.00 seconds.\n" ], [ "head(get_pm(m.c))", "_____no_output_____" ], [ "m.c$fitted_g", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e725146282e731686079b2aa091739c6a4058b1a
55,331
ipynb
Jupyter Notebook
examples/notebooks/conversation_v1.ipynb
scottdangelo/python-sdk
511b19f69c19b1e7f4eb671c3be611f11f86859b
[ "Apache-2.0" ]
1
2018-10-04T19:13:44.000Z
2018-10-04T19:13:44.000Z
examples/notebooks/conversation_v1.ipynb
ricardyn/python-sdk
9a4ee5b630c325bb551de0ceffeeceda40c704f7
[ "Apache-2.0" ]
5
2020-03-24T16:26:02.000Z
2021-04-30T20:44:47.000Z
examples/notebooks/conversation_v1.ipynb
PaperBag42/python-sdk
9d8ea23d1919c06175dd1865db9b1868d8466c6f
[ "Apache-2.0" ]
3
2019-08-20T11:37:29.000Z
2020-07-18T11:22:14.000Z
29.684013
239
0.429831
[ [ [ "# Conversation Service", "_____no_output_____" ] ], [ [ "import json\nimport sys\nimport os\nsys.path.append(os.path.join(os.getcwd(),'..','..'))\nimport watson_developer_cloud", "_____no_output_____" ], [ "USERNAME = os.environ.get('CONVERSATION_USERNAME','<USERNAME>')\nPASSWORD = os.environ.get('CONVERSATION_PASSWORD','<PASSWORD>')", "_____no_output_____" ], [ "conversation = watson_developer_cloud.ConversationV1(username=USERNAME,\n password=PASSWORD,\n version='2017-04-21')", "_____no_output_____" ] ], [ [ "## Pizza Chatbot", "_____no_output_____" ], [ "Let's create and chat with a simple pizza bot. We'll start by defining the bot's workspace, then add intents and examples to recognize pizza orders. Once the chatbot is configured, we'll send a message to converse with our pizza bot.", "_____no_output_____" ] ], [ [ "# define the dialog for our example workspace\ndialog_nodes = [{'conditions': '#order_pizza',\n 'context': None,\n 'description': None,\n 'dialog_node': 'YesYouCan',\n 'go_to': None,\n 'metadata': None,\n 'output': {'text': {'selection_policy': 'random',\n 'values': ['Yes You can!', 'Of course!']}},\n 'parent': None,\n 'previous_sibling': None,}]\n\n# create an example workspace\nworkspace = conversation.create_workspace(name='example_workspace',\n description='An example workspace.',\n language='en',\n dialog_nodes=dialog_nodes)\n\n# print response and save workspace_id\nprint(json.dumps(workspace, indent=2))\nworkspace_id=workspace['workspace_id']", "{\n \"name\": \"example_workspace\",\n \"created\": \"2017-05-04T19:44:35.869Z\",\n \"updated\": \"2017-05-04T19:44:35.869Z\",\n \"language\": \"en\",\n \"metadata\": null,\n \"description\": \"An example workspace.\",\n \"workspace_id\": \"e326ef7b-bde6-4c6c-9573-aedd9e5a70e8\"\n}\n" ], [ "# add an intent to the workspace\nintent = conversation.create_intent(workspace_id=workspace_id,\n intent='order_pizza',\n description='A pizza order.')\nprint(json.dumps(intent, indent=2))", "{\n \"intent\": \"order_pizza\",\n \"created\": \"2017-05-04T19:44:36.371Z\",\n \"updated\": \"2017-05-04T19:44:36.371Z\",\n \"description\": \"A pizza order.\"\n}\n" ], [ "# add examples to the intent\nexample1 = conversation.create_example(workspace_id=workspace_id,\n intent='order_pizza',\n text='Can I order a pizza?')\nexample2 = conversation.create_example(workspace_id=workspace_id,\n intent='order_pizza',\n text='I want to order a pizza.')\nexample3 = conversation.create_example(workspace_id=workspace_id,\n intent='order_pizza',\n text='pizza order')\nexample4 = conversation.create_example(workspace_id=workspace_id,\n intent='order_pizza',\n text='pizza to go')\n\nprint(json.dumps(example1, indent=2))\nprint(json.dumps(example2, indent=2))\nprint(json.dumps(example3, indent=2))\nprint(json.dumps(example4, indent=2))", "{\n \"text\": \"Can I order a pizza?\",\n \"created\": \"2017-05-04T19:44:36.915Z\",\n \"updated\": \"2017-05-04T19:44:36.915Z\"\n}\n{\n \"text\": \"I want to order a pizza.\",\n \"created\": \"2017-05-04T19:44:37.393Z\",\n \"updated\": \"2017-05-04T19:44:37.393Z\"\n}\n{\n \"text\": \"pizza order\",\n \"created\": \"2017-05-04T19:44:37.932Z\",\n \"updated\": \"2017-05-04T19:44:37.932Z\"\n}\n{\n \"text\": \"pizza to go\",\n \"created\": \"2017-05-04T19:44:38.404Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\"\n}\n" ], [ "# check workspace status (wait for training to complete)\nworkspace = conversation.get_workspace(workspace_id=workspace_id)\nprint('The workspace status is: {0}'.format(workspace['status']))\nif 
workspace['status'] == 'Available':\n print('Ready to chat!')\nelse:\n print('The workspace should be available shortly. Please try again in 30s.')\n print('(You can send messages, but not all functionality will be supported yet.)')", "The workspace status is: Training\nThe workspace should be available shortly. Please try again in 30s.\n(You can send messages, but not all functionality will be supported yet.)\n" ], [ "# start a chat with the pizza bot\ninput = {'text': 'Can I order a pizza?'}\nresponse = conversation.message(workspace_id=workspace_id,\n input=input)\nprint(json.dumps(response, indent=2))", "{\n \"intents\": [],\n \"entities\": [],\n \"input\": {\n \"text\": \"Can I order a pizza?\"\n },\n \"output\": {\n \"log_messages\": [\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node matched for the input at a root level.\"\n },\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node condition matched to true in the last dialog round - context.nodes_visited is empty. Falling back to the root node in the next round.\"\n }\n ],\n \"text\": []\n },\n \"context\": {\n \"conversation_id\": \"2fa01d07-f0cb-4b35-b158-d4448016bb5b\",\n \"system\": {\n \"dialog_stack\": [\n {\n \"dialog_node\": \"root\"\n }\n ],\n \"dialog_turn_counter\": 1,\n \"dialog_request_counter\": 1\n }\n }\n}\n" ], [ "# continue a chat with the pizza bot\n# (when you send multiple requests for the same conversation,\n# then include the context object from the previous response)\ninput = {'text': 'medium'}\nresponse = conversation.message(workspace_id=workspace_id,\n input=input,\n context=response['context'])\nprint(json.dumps(response, indent=2))", "{\n \"intents\": [],\n \"entities\": [],\n \"input\": {\n \"text\": \"medium\"\n },\n \"output\": {\n \"log_messages\": [\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node matched for the input at a root level.\"\n },\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node condition matched to true in the last dialog round - context.nodes_visited is empty. Falling back to the root node in the next round.\"\n }\n ],\n \"text\": []\n },\n \"context\": {\n \"conversation_id\": \"2fa01d07-f0cb-4b35-b158-d4448016bb5b\",\n \"system\": {\n \"dialog_stack\": [\n {\n \"dialog_node\": \"root\"\n }\n ],\n \"dialog_turn_counter\": 2,\n \"dialog_request_counter\": 2,\n \"branch_exited_reason\": \"fallback\"\n }\n }\n}\n" ] ], [ [ "## Operation Examples", "_____no_output_____" ], [ "The following examples demonstrate each operation of the Conversation service. They use the pizza chatbot workspace configured above.", "_____no_output_____" ], [ "### Message", "_____no_output_____" ] ], [ [ "input = {'text': 'Can I order a pizza?'}\nresponse = conversation.message(workspace_id=workspace_id,\n input=input)\nprint(json.dumps(response, indent=2))", "{\n \"intents\": [],\n \"entities\": [],\n \"input\": {\n \"text\": \"Can I order a pizza?\"\n },\n \"output\": {\n \"log_messages\": [\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node matched for the input at a root level.\"\n },\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node condition matched to true in the last dialog round - context.nodes_visited is empty. 
Falling back to the root node in the next round.\"\n }\n ],\n \"text\": []\n },\n \"context\": {\n \"conversation_id\": \"19de6478-695a-4440-8d76-eaa82b22ea69\",\n \"system\": {\n \"dialog_stack\": [\n {\n \"dialog_node\": \"root\"\n }\n ],\n \"dialog_turn_counter\": 1,\n \"dialog_request_counter\": 1\n }\n }\n}\n" ] ], [ [ "### Workspaces", "_____no_output_____" ] ], [ [ "response = conversation.create_workspace(name='test_workspace',\n description='Test workspace.',\n language='en',\n metadata={})\nprint(json.dumps(response, indent=2))\ntest_workspace_id = response['workspace_id']", "{\n \"name\": \"test_workspace\",\n \"created\": \"2017-05-04T19:44:41.152Z\",\n \"updated\": \"2017-05-04T19:44:41.152Z\",\n \"language\": \"en\",\n \"metadata\": {},\n \"description\": \"Test workspace.\",\n \"workspace_id\": \"f21667ef-ba51-4e20-9f46-784970d7b663\"\n}\n" ], [ "response = conversation.delete_workspace(workspace_id=test_workspace_id)\nprint(json.dumps(response, indent=2))", "{}\n" ], [ "response = conversation.get_workspace(workspace_id=workspace_id, export=True)\nprint(json.dumps(response, indent=2))", "{\n \"name\": \"example_workspace\",\n \"created\": \"2017-05-04T19:44:35.869Z\",\n \"intents\": [\n {\n \"intent\": \"order_pizza\",\n \"created\": \"2017-05-04T19:44:36.371Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\",\n \"examples\": [\n {\n \"text\": \"Can I order a pizza?\",\n \"created\": \"2017-05-04T19:44:36.915Z\",\n \"updated\": \"2017-05-04T19:44:36.915Z\"\n },\n {\n \"text\": \"I want to order a pizza.\",\n \"created\": \"2017-05-04T19:44:37.393Z\",\n \"updated\": \"2017-05-04T19:44:37.393Z\"\n },\n {\n \"text\": \"pizza order\",\n \"created\": \"2017-05-04T19:44:37.932Z\",\n \"updated\": \"2017-05-04T19:44:37.932Z\"\n },\n {\n \"text\": \"pizza to go\",\n \"created\": \"2017-05-04T19:44:38.404Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\"\n }\n ],\n \"description\": \"A pizza order.\"\n }\n ],\n \"updated\": \"2017-05-04T19:44:38.404Z\",\n \"entities\": [],\n \"language\": \"en\",\n \"metadata\": null,\n \"description\": \"An example workspace.\",\n \"dialog_nodes\": [\n {\n \"go_to\": null,\n \"output\": {\n \"text\": {\n \"values\": [\n \"Yes You can!\",\n \"Of course!\"\n ],\n \"selection_policy\": \"random\"\n }\n },\n \"parent\": null,\n \"context\": null,\n \"created\": \"2017-05-04T19:44:35.869Z\",\n \"updated\": \"2017-05-04T19:44:35.869Z\",\n \"metadata\": null,\n \"conditions\": \"#order_pizza\",\n \"description\": null,\n \"dialog_node\": \"YesYouCan\",\n \"previous_sibling\": null\n }\n ],\n \"workspace_id\": \"e326ef7b-bde6-4c6c-9573-aedd9e5a70e8\",\n \"counterexamples\": [],\n \"status\": \"Training\"\n}\n" ], [ "response = conversation.list_workspaces()\nprint(json.dumps(response, indent=2))", "{\n \"workspaces\": [\n {\n \"name\": \"Car_Dashboard\",\n \"created\": \"2016-07-19T16:31:17.236Z\",\n \"updated\": \"2017-05-04T17:11:53.494Z\",\n \"language\": \"en\",\n \"metadata\": {\n \"runtime_version\": \"2016-09-20\"\n },\n \"description\": \"Cognitive Car workspace which allows multi-turn conversations to perform tasks in the car.\",\n \"workspace_id\": \"8d869397-411b-4f0a-864d-a2ba419bb249\"\n },\n {\n \"name\": \"example_workspace\",\n \"created\": \"2017-05-04T19:37:05.567Z\",\n \"updated\": \"2017-05-04T19:37:23.463Z\",\n \"language\": \"en\",\n \"metadata\": null,\n \"description\": \"An example workspace for ordering pizza.\",\n \"workspace_id\": \"745166ef-ff47-4a02-888e-eb145fbc22dd\"\n },\n {\n \"name\": \"example_workspace\",\n \"created\": 
\"2017-05-04T19:44:09.895Z\",\n \"updated\": \"2017-05-04T19:44:17.600Z\",\n \"language\": \"en\",\n \"metadata\": null,\n \"description\": \"An example workspace for ordering pizza.\",\n \"workspace_id\": \"2a01cb92-43b9-48a0-8402-f62b41bad8ca\"\n },\n {\n \"name\": \"example_workspace\",\n \"created\": \"2017-05-04T19:44:35.869Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\",\n \"language\": \"en\",\n \"metadata\": null,\n \"description\": \"An example workspace.\",\n \"workspace_id\": \"e326ef7b-bde6-4c6c-9573-aedd9e5a70e8\"\n },\n {\n \"name\": \"LaForge POC\",\n \"created\": \"2016-09-07T19:01:46.271Z\",\n \"updated\": \"2016-11-29T21:46:38.969Z\",\n \"language\": \"en\",\n \"metadata\": {\n \"runtime_version\": \"2016-09-20\"\n },\n \"description\": null,\n \"workspace_id\": \"4f4046a6-50c5-4a52-9247-b2538f0fe7ac\"\n },\n {\n \"name\": \"test_workspace\",\n \"created\": \"2017-05-04T17:29:04.680Z\",\n \"updated\": \"2017-05-04T17:29:12.185Z\",\n \"language\": \"en\",\n \"metadata\": {},\n \"description\": \"Updated test workspace.\",\n \"workspace_id\": \"8d822b56-3888-4aae-87a6-45275857778f\"\n },\n {\n \"name\": \"test_workspace\",\n \"created\": \"2017-05-04T18:11:41.417Z\",\n \"updated\": \"2017-05-04T18:11:44.593Z\",\n \"language\": \"en\",\n \"metadata\": {},\n \"description\": \"Test workspace.\",\n \"workspace_id\": \"8a4c1701-3762-4933-9d48-bd516cab0988\"\n },\n {\n \"name\": \"test_workspace\",\n \"created\": \"2017-05-04T18:14:18.699Z\",\n \"updated\": \"2017-05-04T18:14:22.215Z\",\n \"language\": \"en\",\n \"metadata\": {},\n \"description\": \"Test workspace.\",\n \"workspace_id\": \"3a914174-5fb1-4fb8-839b-e71d5397aa8e\"\n },\n {\n \"name\": \"test_workspace\",\n \"created\": \"2017-05-04T18:29:44.304Z\",\n \"updated\": \"2017-05-04T18:29:53.657Z\",\n \"language\": \"en\",\n \"metadata\": {},\n \"description\": \"Test workspace.\",\n \"workspace_id\": \"85930ecd-7081-4c78-820d-0cfa0f5ccdd2\"\n },\n {\n \"name\": \"test_workspace\",\n \"created\": \"2017-05-04T18:33:50.416Z\",\n \"updated\": \"2017-05-04T18:33:59.867Z\",\n \"language\": \"en\",\n \"metadata\": {},\n \"description\": \"Test workspace.\",\n \"workspace_id\": \"6af48bf2-3f7b-49f5-983b-d680a9769b72\"\n },\n {\n \"name\": \"test_workspace\",\n \"created\": \"2017-05-04T19:38:24.485Z\",\n \"updated\": \"2017-05-04T19:38:25.924Z\",\n \"language\": \"en\",\n \"metadata\": {},\n \"description\": \"Updated test workspace.\",\n \"workspace_id\": \"69ba1a3f-70ef-45c8-a960-ad9e5a0e956a\"\n },\n {\n \"name\": \"test_workspace\",\n \"created\": \"2017-05-04T19:39:49.051Z\",\n \"updated\": \"2017-05-04T19:39:50.524Z\",\n \"language\": \"en\",\n \"metadata\": {},\n \"description\": \"Updated test workspace.\",\n \"workspace_id\": \"0e0acca5-34a2-441f-8423-53d2074c0801\"\n },\n {\n \"name\": \"\",\n \"created\": \"2017-04-21T16:29:28.652Z\",\n \"updated\": \"2017-04-21T18:25:56.762Z\",\n \"language\": \"en\",\n \"metadata\": null,\n \"description\": null,\n \"workspace_id\": \"a09fc044-af0a-478e-8a76-f869dc34c88a\"\n }\n ],\n \"pagination\": {\n \"refresh_url\": \"/v1/workspaces?include_count=none&version=2017-04-21\"\n }\n}\n" ], [ "response = conversation.update_workspace(workspace_id=workspace_id,\n description='An example workspace for ordering pizza.')\nprint(json.dumps(response, indent=2))", "{\n \"name\": \"example_workspace\",\n \"created\": \"2017-05-04T19:44:35.869Z\",\n \"updated\": \"2017-05-04T19:44:43.174Z\",\n \"language\": \"en\",\n \"metadata\": null,\n \"description\": \"An example workspace for ordering 
pizza.\",\n \"workspace_id\": \"e326ef7b-bde6-4c6c-9573-aedd9e5a70e8\"\n}\n" ] ], [ [ "### Intents", "_____no_output_____" ] ], [ [ "response = conversation.create_intent(workspace_id=workspace_id,\n intent='test_intent',\n description='Test intent.')\nprint(json.dumps(response, indent=2))", "{\n \"intent\": \"test_intent\",\n \"created\": \"2017-05-04T19:44:43.723Z\",\n \"updated\": \"2017-05-04T19:44:43.723Z\",\n \"description\": \"Test intent.\"\n}\n" ], [ "response = conversation.delete_intent(workspace_id=workspace_id,\n intent='test_intent')\nprint(json.dumps(response, indent=2))", "{}\n" ], [ "response = conversation.get_intent(workspace_id=workspace_id,\n intent='order_pizza',\n export=True)\nprint(json.dumps(response, indent=2))", "{\n \"intent\": \"order_pizza\",\n \"created\": \"2017-05-04T19:44:36.371Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\",\n \"examples\": [\n {\n \"text\": \"Can I order a pizza?\",\n \"created\": \"2017-05-04T19:44:36.915Z\",\n \"updated\": \"2017-05-04T19:44:36.915Z\"\n },\n {\n \"text\": \"I want to order a pizza.\",\n \"created\": \"2017-05-04T19:44:37.393Z\",\n \"updated\": \"2017-05-04T19:44:37.393Z\"\n },\n {\n \"text\": \"pizza order\",\n \"created\": \"2017-05-04T19:44:37.932Z\",\n \"updated\": \"2017-05-04T19:44:37.932Z\"\n },\n {\n \"text\": \"pizza to go\",\n \"created\": \"2017-05-04T19:44:38.404Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\"\n }\n ],\n \"description\": \"A pizza order.\"\n}\n" ], [ "response = conversation.list_intents(workspace_id=workspace_id,\n export=True)\nprint(json.dumps(response, indent=2))", "{\n \"intents\": [\n {\n \"intent\": \"order_pizza\",\n \"created\": \"2017-05-04T19:44:36.371Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\",\n \"examples\": [\n {\n \"text\": \"Can I order a pizza?\",\n \"created\": \"2017-05-04T19:44:36.915Z\",\n \"updated\": \"2017-05-04T19:44:36.915Z\"\n },\n {\n \"text\": \"I want to order a pizza.\",\n \"created\": \"2017-05-04T19:44:37.393Z\",\n \"updated\": \"2017-05-04T19:44:37.393Z\"\n },\n {\n \"text\": \"pizza order\",\n \"created\": \"2017-05-04T19:44:37.932Z\",\n \"updated\": \"2017-05-04T19:44:37.932Z\"\n },\n {\n \"text\": \"pizza to go\",\n \"created\": \"2017-05-04T19:44:38.404Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\"\n }\n ],\n \"description\": \"A pizza order.\"\n }\n ],\n \"pagination\": {\n \"refresh_url\": \"/v1/workspaces/e326ef7b-bde6-4c6c-9573-aedd9e5a70e8/intents?include_count=none&version=2017-04-21&export=true\"\n }\n}\n" ], [ "response = conversation.update_intent(workspace_id=workspace_id,\n intent='order_pizza',\n new_intent='order_pizza',\n new_description='Order a pizza.')\nprint(json.dumps(response, indent=2))", "{\n \"intent\": \"order_pizza\",\n \"created\": \"2017-05-04T19:44:36.371Z\",\n \"updated\": \"2017-05-04T19:44:45.616Z\",\n \"description\": \"Order a pizza.\"\n}\n" ] ], [ [ "### Examples", "_____no_output_____" ] ], [ [ "response = conversation.create_example(workspace_id=workspace_id,\n intent='order_pizza',\n text='Gimme a pizza with pepperoni')\nprint(json.dumps(response, indent=2))", "{\n \"text\": \"Gimme a pizza with pepperoni\",\n \"created\": \"2017-05-04T19:44:46.237Z\",\n \"updated\": \"2017-05-04T19:44:46.237Z\"\n}\n" ], [ "response = conversation.delete_example(workspace_id=workspace_id,\n intent='order_pizza',\n text='Gimme a pizza with pepperoni')\nprint(json.dumps(response, indent=2))", "{}\n" ], [ "response = conversation.get_example(workspace_id=workspace_id,\n intent='order_pizza',\n text='I want to order a 
pizza.')\nprint(json.dumps(response, indent=2))", "{\n \"text\": \"I want to order a pizza.\",\n \"created\": \"2017-05-04T19:44:37.393Z\",\n \"updated\": \"2017-05-04T19:44:37.393Z\"\n}\n" ], [ "response = conversation.list_examples(workspace_id=workspace_id,\n intent='order_pizza')\nprint(json.dumps(response, indent=2))", "{\n \"examples\": [\n {\n \"text\": \"Can I order a pizza?\",\n \"created\": \"2017-05-04T19:44:36.915Z\",\n \"updated\": \"2017-05-04T19:44:36.915Z\"\n },\n {\n \"text\": \"I want to order a pizza.\",\n \"created\": \"2017-05-04T19:44:37.393Z\",\n \"updated\": \"2017-05-04T19:44:37.393Z\"\n },\n {\n \"text\": \"pizza order\",\n \"created\": \"2017-05-04T19:44:37.932Z\",\n \"updated\": \"2017-05-04T19:44:37.932Z\"\n },\n {\n \"text\": \"pizza to go\",\n \"created\": \"2017-05-04T19:44:38.404Z\",\n \"updated\": \"2017-05-04T19:44:38.404Z\"\n }\n ],\n \"pagination\": {\n \"refresh_url\": \"/v1/workspaces/e326ef7b-bde6-4c6c-9573-aedd9e5a70e8/intents/order_pizza/examples?include_count=none&version=2017-04-21\"\n }\n}\n" ], [ "response = conversation.update_example(workspace_id=workspace_id,\n intent='order_pizza',\n text='I want to order a pizza.',\n new_text='I want to order a pizza with pepperoni.')\nprint(json.dumps(response, indent=2))", "{\n \"text\": \"I want to order a pizza with pepperoni.\",\n \"created\": \"2017-05-04T19:44:37.393Z\",\n \"updated\": \"2017-05-04T19:44:48.409Z\"\n}\n" ] ], [ [ "### Counterexamples", "_____no_output_____" ] ], [ [ "response = conversation.create_counterexample(workspace_id=workspace_id,\n text='I want financial advice today.')\nprint(json.dumps(response, indent=2))", "{\n \"text\": \"I want financial advice today.\",\n \"created\": \"2017-05-04T19:44:48.903Z\",\n \"updated\": \"2017-05-04T19:44:48.903Z\"\n}\n" ], [ "response = conversation.get_counterexample(workspace_id=workspace_id,\n text='I want financial advice today.')\nprint(json.dumps(response, indent=2))", "{\n \"text\": \"I want financial advice today.\",\n \"created\": \"2017-05-04T19:44:48.903Z\",\n \"updated\": \"2017-05-04T19:44:48.903Z\"\n}\n" ], [ "response = conversation.list_counterexamples(workspace_id=workspace_id)\nprint(json.dumps(response, indent=2))", "{\n \"counterexamples\": [\n {\n \"text\": \"I want financial advice today.\",\n \"created\": \"2017-05-04T19:44:48.903Z\",\n \"updated\": \"2017-05-04T19:44:48.903Z\"\n }\n ],\n \"pagination\": {\n \"refresh_url\": \"/v1/workspaces/e326ef7b-bde6-4c6c-9573-aedd9e5a70e8/counterexamples?include_count=none&version=2017-04-21\"\n }\n}\n" ], [ "response = conversation.update_counterexample(workspace_id=workspace_id,\n text='I want financial advice today.',\n new_text='I want financial advice for tomorrow.')\nprint(json.dumps(response, indent=2))", "{\n \"text\": \"I want financial advice for tomorrow.\",\n \"created\": \"2017-05-04T19:44:48.903Z\",\n \"updated\": \"2017-05-04T19:44:50.444Z\"\n}\n" ], [ "response = conversation.delete_counterexample(workspace_id=workspace_id,\n text='I want financial advice for tomorrow.')\nprint(json.dumps(response, indent=2))", "{}\n" ] ], [ [ "## Entities", "_____no_output_____" ] ], [ [ "values = [{\"value\": \"juice\"}]", "_____no_output_____" ], [ "response = conversation.create_entity(workspace_id=workspace_id,\n entity='test_entity',\n description='A test entity.',\n values=values)\nprint(json.dumps(response, indent=2))", "{\n \"entity\": \"test_entity\",\n \"created\": \"2017-05-04T19:44:51.395Z\",\n \"updated\": \"2017-05-04T19:44:51.395Z\",\n \"metadata\": null,\n 
\"description\": \"A test entity.\"\n}\n" ], [ "response = conversation.get_entity(workspace_id=workspace_id,\n entity='test_entity',\n export=True)\nprint(json.dumps(response, indent=2))", "{\n \"entity\": \"test_entity\",\n \"values\": [\n {\n \"value\": \"juice\",\n \"created\": \"2017-05-04T19:44:51.395Z\",\n \"updated\": \"2017-05-04T19:44:51.395Z\",\n \"metadata\": null\n }\n ],\n \"created\": \"2017-05-04T19:44:51.395Z\",\n \"updated\": \"2017-05-04T19:44:51.395Z\",\n \"metadata\": null,\n \"description\": \"A test entity.\"\n}\n" ], [ "response = conversation.list_entities(workspace_id=workspace_id)\nprint(json.dumps(response, indent=2))", "{\n \"entities\": [\n {\n \"entity\": \"test_entity\",\n \"created\": \"2017-05-04T19:44:51.395Z\",\n \"updated\": \"2017-05-04T19:44:51.395Z\",\n \"metadata\": null,\n \"description\": \"A test entity.\"\n }\n ],\n \"pagination\": {\n \"refresh_url\": \"/v1/workspaces/e326ef7b-bde6-4c6c-9573-aedd9e5a70e8/entities?include_count=none&version=2017-04-21&export=none\"\n }\n}\n" ], [ "response = conversation.update_entity(workspace_id=workspace_id,\n entity='test_entity',\n new_description='An updated test entity.')\nprint(json.dumps(response, indent=2))", "{\n \"entity\": \"test_entity\",\n \"created\": \"2017-05-04T19:44:51.395Z\",\n \"updated\": \"2017-05-04T19:44:53.110Z\",\n \"metadata\": null,\n \"description\": \"An updated test entity.\"\n}\n" ], [ "response = conversation.delete_entity(workspace_id=workspace_id,\n entity='test_entity')\nprint(json.dumps(response, indent=2))", "{}\n" ] ], [ [ "## Synonyms", "_____no_output_____" ] ], [ [ "values = [{\"value\": \"orange juice\"}]\nconversation.create_entity(workspace_id, 'beverage', values=values)", "_____no_output_____" ], [ "response = conversation.create_synonym(workspace_id, 'beverage', 'orange juice', 'oj')\nprint(json.dumps(response, indent=2))", "{\n \"created\": \"2017-05-04T19:44:54.613Z\",\n \"synonym\": \"oj\",\n \"updated\": \"2017-05-04T19:44:54.613Z\"\n}\n" ], [ "response = conversation.get_synonym(workspace_id, 'beverage', 'orange juice', 'oj')\nprint(json.dumps(response, indent=2))", "{\n \"created\": \"2017-05-04T19:44:54.613Z\",\n \"synonym\": \"oj\",\n \"updated\": \"2017-05-04T19:44:54.613Z\"\n}\n" ], [ "response = conversation.list_synonyms(workspace_id, 'beverage', 'orange juice')\nprint(json.dumps(response, indent=2))", "{\n \"synonyms\": [\n {\n \"created\": \"2017-05-04T19:44:54.613Z\",\n \"synonym\": \"oj\",\n \"updated\": \"2017-05-04T19:44:54.613Z\"\n }\n ],\n \"pagination\": {\n \"refresh_url\": \"/v1/workspaces/e326ef7b-bde6-4c6c-9573-aedd9e5a70e8/entities/beverage/values/orange%20juice/synonyms?include_count=none&version=2017-04-21\"\n }\n}\n" ], [ "response = conversation.update_synonym(workspace_id, 'beverage', 'orange juice', 'oj', 'OJ')\nprint(json.dumps(response, indent=2))", "{\n \"created\": \"2017-05-04T19:44:54.613Z\",\n \"synonym\": \"OJ\",\n \"updated\": \"2017-05-04T19:44:56.139Z\"\n}\n" ], [ "response = conversation.delete_synonym(workspace_id, 'beverage', 'orange juice', 'OJ')\nprint(json.dumps(response, indent=2))", "{}\n" ], [ "conversation.delete_entity(workspace_id, 'beverage')", "_____no_output_____" ] ], [ [ "## Values", "_____no_output_____" ] ], [ [ "conversation.create_entity(workspace_id, 'test_entity')", "_____no_output_____" ], [ "response = conversation.create_value(workspace_id, 'test_entity', 'test')\nprint(json.dumps(response, indent=2))", "{\n \"value\": \"test\",\n \"created\": \"2017-05-04T19:44:58.202Z\",\n \"updated\": 
\"2017-05-04T19:44:58.202Z\",\n \"metadata\": null\n}\n" ], [ "response = conversation.get_value(workspace_id, 'test_entity', 'test')\nprint(json.dumps(response, indent=2))", "{\n \"value\": \"test\",\n \"created\": \"2017-05-04T19:44:58.202Z\",\n \"updated\": \"2017-05-04T19:44:58.202Z\",\n \"metadata\": null\n}\n" ], [ "response = conversation.list_values(workspace_id, 'test_entity')\nprint(json.dumps(response, indent=2))", "{\n \"values\": [\n {\n \"value\": \"test\",\n \"created\": \"2017-05-04T19:44:58.202Z\",\n \"updated\": \"2017-05-04T19:44:58.202Z\",\n \"metadata\": null\n }\n ],\n \"pagination\": {\n \"refresh_url\": \"/v1/workspaces/e326ef7b-bde6-4c6c-9573-aedd9e5a70e8/entities/test_entity/values?include_count=none&version=2017-04-21&export=none\"\n }\n}\n" ], [ "response = conversation.update_value(workspace_id, 'test_entity', 'test', 'example')\nprint(json.dumps(response, indent=2))", "{\n \"value\": \"example\",\n \"created\": \"2017-05-04T19:44:58.202Z\",\n \"updated\": \"2017-05-04T19:44:59.661Z\",\n \"metadata\": null\n}\n" ], [ "response = conversation.delete_value(workspace_id, 'test_entity', 'example')\nprint(json.dumps(response, indent=2))", "{}\n" ], [ "conversation.delete_entity(workspace_id, 'test_entity')", "_____no_output_____" ] ], [ [ "## Logs", "_____no_output_____" ] ], [ [ "response = conversation.list_logs(workspace_id=workspace_id)\nprint(json.dumps(response, indent=2))", "{\n \"logs\": [\n {\n \"request\": {\n \"input\": {\n \"text\": \"Can I order a pizza?\"\n }\n },\n \"response\": {\n \"intents\": [],\n \"entities\": [],\n \"input\": {\n \"text\": \"Can I order a pizza?\"\n },\n \"output\": {\n \"log_messages\": [\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node matched for the input at a root level.\"\n },\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node condition matched to true in the last dialog round - context.nodes_visited is empty. Falling back to the root node in the next round.\"\n }\n ],\n \"text\": []\n },\n \"context\": {\n \"conversation_id\": \"2fa01d07-f0cb-4b35-b158-d4448016bb5b\",\n \"system\": {\n \"dialog_stack\": [\n {\n \"dialog_node\": \"root\"\n }\n ],\n \"dialog_turn_counter\": 1,\n \"dialog_request_counter\": 1\n }\n }\n },\n \"request_timestamp\": \"2017-05-04T19:44:39.535Z\",\n \"response_timestamp\": \"2017-05-04T19:44:39.583Z\",\n \"log_id\": \"df29092c-f530-4a5f-b443-83df3532ab79\"\n },\n {\n \"request\": {\n \"input\": {\n \"text\": \"medium\"\n },\n \"context\": {\n \"conversation_id\": \"2fa01d07-f0cb-4b35-b158-d4448016bb5b\",\n \"system\": {\n \"dialog_stack\": [\n {\n \"dialog_node_s\": \"root\"\n }\n ],\n \"dialog_turn_counter\": 1,\n \"dialog_request_counter\": 1\n }\n }\n },\n \"response\": {\n \"intents\": [],\n \"entities\": [],\n \"input\": {\n \"text\": \"medium\"\n },\n \"output\": {\n \"log_messages\": [\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node matched for the input at a root level.\"\n },\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node condition matched to true in the last dialog round - context.nodes_visited is empty. 
Falling back to the root node in the next round.\"\n }\n ],\n \"text\": []\n },\n \"context\": {\n \"conversation_id\": \"2fa01d07-f0cb-4b35-b158-d4448016bb5b\",\n \"system\": {\n \"dialog_stack\": [\n {\n \"dialog_node\": \"root\"\n }\n ],\n \"dialog_turn_counter\": 2,\n \"dialog_request_counter\": 2,\n \"branch_exited_reason\": \"fallback\"\n }\n }\n },\n \"request_timestamp\": \"2017-05-04T19:44:40.097Z\",\n \"response_timestamp\": \"2017-05-04T19:44:40.120Z\",\n \"log_id\": \"7e811f99-2ef1-4a54-9b98-5bfc1307c8df\"\n },\n {\n \"request\": {\n \"input\": {\n \"text\": \"Can I order a pizza?\"\n }\n },\n \"response\": {\n \"intents\": [],\n \"entities\": [],\n \"input\": {\n \"text\": \"Can I order a pizza?\"\n },\n \"output\": {\n \"log_messages\": [\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node matched for the input at a root level.\"\n },\n {\n \"level\": \"warn\",\n \"msg\": \"No dialog node condition matched to true in the last dialog round - context.nodes_visited is empty. Falling back to the root node in the next round.\"\n }\n ],\n \"text\": []\n },\n \"context\": {\n \"conversation_id\": \"19de6478-695a-4440-8d76-eaa82b22ea69\",\n \"system\": {\n \"dialog_stack\": [\n {\n \"dialog_node\": \"root\"\n }\n ],\n \"dialog_turn_counter\": 1,\n \"dialog_request_counter\": 1\n }\n }\n },\n \"request_timestamp\": \"2017-05-04T19:44:40.634Z\",\n \"response_timestamp\": \"2017-05-04T19:44:40.683Z\",\n \"log_id\": \"28a2014c-27bb-4e6a-950d-83024824a7ba\"\n }\n ],\n \"pagination\": {}\n}\n" ] ], [ [ "## Cleanup (Delete Pizza Chatbot)", "_____no_output_____" ], [ "Let's cleanup by deleting the pizza chatbot, since it is no longer needed.", "_____no_output_____" ] ], [ [ "# clean-up by deleting the workspace\nconversation.delete_workspace(workspace_id=workspace_id)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e725171b60372c2f2974366b50249340c5faff5e
76,320
ipynb
Jupyter Notebook
test.ipynb
CheerfulUser/tesscvs
6a48527b3ca5f2cb6af44157621ecc74f125b447
[ "MIT" ]
1
2021-08-13T10:17:42.000Z
2021-08-13T10:17:42.000Z
test.ipynb
CheerfulUser/tesscvs
6a48527b3ca5f2cb6af44157621ecc74f125b447
[ "MIT" ]
null
null
null
test.ipynb
CheerfulUser/tesscvs
6a48527b3ca5f2cb6af44157621ecc74f125b447
[ "MIT" ]
null
null
null
112.070485
24,088
0.735181
[ [ [ "import tessreduce as tr\nimport numpy as np\nimport pandas as pd \nimport matplotlib.pyplot as plt\nimport lightkurve as lk\nfrom astropy.coordinates import SkyCoord\nfrom astropy import units as u\n\n\n#where we're going we dont need warnings!!\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\ncvs = pd.read_csv('./data/cataclysmic_variables.csv')\n\n# don't want to deal with the crowded Tuc, Pav, or Sgr zones for now\nind = (cvs['GCVS'].values == 'Tuc ') | (cvs['GCVS'].values == 'Pav ') | (cvs['GCVS'].values == 'Sgr ')\ncvs = cvs.iloc[~ind]", "_____no_output_____" ], [ "cvs['GCVS'].values", "_____no_output_____" ], [ "import matplotlib\nmatplotlib.use('Agg')\nimport tessreduce as tr\nimport numpy as np\nimport pandas as pd \nimport matplotlib.pyplot as plt\nimport lightkurve as lk\nfrom astropy.coordinates import SkyCoord\nfrom astropy import units as u\n\nimport os\ndirname = os.path.dirname('/Users/rridden/Documents/work/code/tess/tesscvs/')\n\n#where we're going we dont need warnings!!\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nfile = os.path.join(dirname,'/data/cataclysmic_variables.csv')\ncvs = pd.read_csv('./data/cataclysmic_variables.csv')\n\n# don't want to deal with the crowded Tuc, Pav, or Sgr zones for now\nind = (cvs['GCVS'].values == 'Tuc ') | (cvs['GCVS'].values == 'Pav ') | (cvs['GCVS'].values == 'Sgr ')\ncvs = cvs.iloc[~ind]\n\n\nfor j in range(len(cvs)):\n cv = cvs.iloc[j]\n print('NAME: ',cv['Names'])\n ra = cv['RAJ2000']\n dec = cv['DEJ2000']\n\n obs = tr.spacetime_lookup(ra=ra, dec=dec)\n lcs = []\n t1 = []\n t2 = []\n sectors = []\n for ob in obs:\n try:\n tt = tr.tessreduce(obs_list=ob,reduce=True)\n tt.to_flux()\n lcs += [tt.lc]\n sectors += [tt.tpf.sector]\n except:\n print('Failed for ', ob)\n\n\n name = cv['Names']\n print('MAKE FIGURE')\n plt.figure(figsize=(6.5,8))\n plt.subplot(311)\n plt.title(name)\n for i in range(len(lcs)):\n plt.plot(lcs[i][0],lcs[i][1],label='S ' + str(sectors[i]))\n plt.legend()\n plt.ylabel('mJy')\n\n plt.subplot(312)\n plt.title('trend method 1')\n for i in range(len(lcs)):\n plt.plot(t1[i][0],t1[i][1])\n plt.ylabel('mJy')\n\n plt.subplot(313)\n plt.title('trend method 2')\n for i in range(len(lcs)):\n #plt.fill_between(lcs[i][0],lcs[i][1]-trends2[i]-err[i],lcs[i][1]-trends2[i]+err[i],alpha=.5)\n plt.plot(t2[i][0],t2[i][1])\n plt.ylabel('mJy')\n plt.xlabel('MJD')\n plt.tight_layout()\n\n savename = name.replace('/',' ').replace(' ','_')\n plt.savefig('./figs/{}.pdf'.format(savename))\n\n # save to cvs\n print('SAVE TO CSV')\n mjd = lcs[0][0].copy()\n flux = lcs[0][1].copy()\n e = lcs[0][2].copy()\n s = np.ones(len(lcs[0][0])) * sectors[0]\n for i in range(len(lcs)-1):\n i += 1\n mjd = np.append(mjd,lcs[i][0])\n flux = np.append(flux,lcs[i][1])\n e = np.append(e,lcs[i][2])\n\n ss = np.ones(len(lcs[i][0])) * sectors[i]\n s = np.append(s,ss)\n df = pd.DataFrame(columns=['mjd','flux','err','trend1','trend2','sector'])\n df['mjd'] = mjd\n df['flux'] = flux\n df['err'] = e\n df['sector'] = s\n \n df.to_csv('./lcs/{}.csv'.format(savename),index=False)\n\n print('finished {}'.format(name))", "NAME: 1RXS J0001-6707\n!!! 
WARNING no MJD time specified, using default of 59000\n| Sector | Covers | Time difference |\n| | | (days) |\n|----------+----------+--------------------|\n| 1 | False | -648 |\n| 27 | False | 34 |\n| 28 | False | 60 |\ngetting TPF from TESScut\nmade reference\nmade source mask\ncalculating background\nbackground subtracted\ncalculating centroids\n!!Re-running for difference image!!\nshifting images\nremade mask\nbackground\nBackground correlation correction\nField calibration\nTarget is below -30 dec, calibrating to SkyMapper photometry.\ngetting TPF from TESScut\nmade reference\nmade source mask\ncalculating background\nbackground subtracted\ncalculating centroids\n!!Re-running for difference image!!\nshifting images\nremade mask\nbackground\nBackground correlation correction\nField calibration\nTarget is below -30 dec, calibrating to SkyMapper photometry.\ncould not cluster\ngetting TPF from TESScut\nmade reference\nmade source mask\ncalculating background\nbackground subtracted\ncalculating centroids\n!!Re-running for difference image!!\nshifting images\nremade mask\nbackground\nBackground correlation correction\nField calibration\nTarget is below -30 dec, calibrating to SkyMapper photometry.\nMAKE FIGURE\nSAVE TO CSV\n" ], [ "len(lcs)", "_____no_output_____" ], [ "for j in range(len(cvs)):\n cv = cvs.iloc[j]\n\n ra = cv['RAJ2000']\n dec = cv['DEJ2000']\n\n c = SkyCoord(ra=float(ra)*u.degree, dec=float(dec) *u.degree, frame='icrs')\n\n tess = lk.search_tesscut(c,sector=None)\n\n if len(tess) > 0:\n lcs = []\n zps = []\n err = []\n sectors = []\n trends1 = []\n trends2 = []\n\n if len(tess) > 1:\n tpfs = []\n for t in tess:\n tpf = t.download(cutout_size=90)\n aper_b18 = np.zeros(tpf.shape[1:], dtype=bool)\n aper_b18[44:48, 44:47] = True\n res = tr.Quick_reduce(tpf,aper=aper_b18)\n lcs += [res['lc']]\n err += [res['err']]\n zps += [res['zp']]\n sectors += [tpf.sector]\n trends1 += [tr.Remove_stellar_variability(lcs[-1],err[-1],variable=True)]\n trends2 += [tr.Remove_stellar_variability(lcs[-1],err[-1],variable=False)]\n \n\n name = cv['Names']\n plt.figure(figsize=(6.5,8))\n plt.subplot(311)\n plt.title(name)\n for i in range(len(lcs)):\n plt.plot(lcs[i][0],lcs[i][1],label='S ' + str(sectors[i]))\n plt.legend()\n\n plt.subplot(312)\n plt.title('trend method 1')\n for i in range(len(lcs)):\n plt.fill_between(lcs[i][0],lcs[i][1]-trends1[i]-err[i],lcs[i][1]-trends1[i]+err[i],alpha=.5)\n plt.plot(lcs[i][0],lcs[i][1]-trends1[i])\n plt.subplot(313)\n plt.title('trend method 2')\n for i in range(len(lcs)):\n plt.fill_between(lcs[i][0],lcs[i][1]-trends2[i]-err[i],lcs[i][1]-trends2[i]+err[i],alpha=.5)\n plt.plot(lcs[i][0],lcs[i][1]-trends2[i])\n savename = name.replace('/',' ').replace(' ','_')\n plt.savefig('./figs/{}.pdf'.format(savename))\n\n # save to cvs\n mjd = lcs[0][0].copy()\n flux = lcs[0][1].copy()\n e = err[0].copy()\n t1 = trends1[0].copy()\n t2 = trends2[0].copy()\n z = np.ones(len(lcs[0][0])) * zps[0]\n s = np.ones(len(lcs[0][0])) * sectors[0]\n for i in range(len(lcs)-1):\n i += 1\n mjd = np.append(mjd,lcs[i][0])\n flux = np.append(flux,lcs[i][1])\n e = np.append(e,err[i])\n t1 = np.append(t1,trends1[i])\n t2 = np.append(t2,trends2[i])\n\n zz = np.ones(len(lcs[i][0])) * zps[i]\n ss = np.ones(len(lcs[i][0])) * sectors[i]\n z = np.append(z,zz)\n s = np.append(s,ss)\n df = pd.DataFrame(columns=['mjd','flux','err','trend1','trend2','zp','sector'])\n df['mjd'] = mjd\n df['flux'] = flux\n df['err'] = e\n df['trend1'] = t1\n df['trend2'] = t2\n df['zp'] = z\n df['sector'] = s\n\n 
df.to_csv('./lcs/{}.csv'.format(savename),index=False)\n\n print('finished {}'.format(name))\n", "made reference\nmade source mask\ncalculating background\nbackground subtracted\ncalculating centroids\nimages shifted\nTarget is too far south with Dec = -67.1286944 for PS1 photometry. Can not calibrate at this time.\nmade light curve\n2\nmade reference\nmade source mask\ncalculating background\nbackground subtracted\ncalculating centroids\nimages shifted\nTarget is too far south with Dec = -67.1286944 for PS1 photometry. Can not calibrate at this time.\nmade light curve\n2\n3\nmade reference\nmade source mask\ncalculating background\nbackground subtracted\ncalculating centroids\nimages shifted\nTarget is too far south with Dec = -67.1286944 for PS1 photometry. Can not calibrate at this time.\nmade light curve\n20\n93\nfinished 1RXS J0001-6707\n" ], [ "len(trends2[1])", "_____no_output_____" ], [ "plt.tight_layout()", "_____no_output_____" ], [ "len(err[2])", "_____no_output_____" ], [ "a = tr.Remove_stellar_variability(lcs[1],err[1],variable=True)", "2\n" ], [ "len(a)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e72520150ec26ea907b1e8a3c5d534fa31bf7f79
742,975
ipynb
Jupyter Notebook
PS7/problem_set_7.ipynb
Oscarmason20/Exercises
0cea4e596cda6624d907e12844bb9e3c2c4c16cf
[ "MIT" ]
4
2019-02-28T07:45:15.000Z
2019-06-27T19:42:01.000Z
PS7/problem_set_7.ipynb
Oscarmason20/Exercises
0cea4e596cda6624d907e12844bb9e3c2c4c16cf
[ "MIT" ]
null
null
null
PS7/problem_set_7.ipynb
Oscarmason20/Exercises
0cea4e596cda6624d907e12844bb9e3c2c4c16cf
[ "MIT" ]
19
2019-01-09T15:32:14.000Z
2020-01-13T10:55:09.000Z
450.834345
77,992
0.939466
[ [ [ "# Problem set 7: Solving the consumer problem with income risk", "_____no_output_____" ] ], [ [ "import numpy as np\nimport scipy as sp\nfrom scipy import linalg\nfrom scipy import optimize\nfrom scipy import interpolate\nimport sympy as sm\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')\nfrom matplotlib import cm\nfrom mpl_toolkits.mplot3d import Axes3D", "_____no_output_____" ] ], [ [ "# Tasks", "_____no_output_____" ] ], [ [ "sm.init_printing(use_unicode=True)", "_____no_output_____" ] ], [ [ "## Optimization problem I", "_____no_output_____" ], [ "Consider the function", "_____no_output_____" ], [ "$$ \nf(\\boldsymbol{x}) = f(x_1,x_2) = (x_1^2 - x_1x_2 + x_2^2)^2\n$$", "_____no_output_____" ], [ "Define it in **sympy** by:", "_____no_output_____" ] ], [ [ "x1 = sm.symbols('x_1')\nx2 = sm.symbols('x_2')\nf = (x1**2 - x1*x2 + x2**2)**2\nf", "_____no_output_____" ] ], [ [ "The **Jacobian** is", "_____no_output_____" ] ], [ [ "f1 = sm.diff(f,x1)\nf2 = sm.diff(f,x2)\nsm.Matrix([f1,f2])", "_____no_output_____" ] ], [ [ "The **Hessian** is", "_____no_output_____" ] ], [ [ "f11 = sm.diff(f,x1,x1)\nf12 = sm.diff(f,x1,x2)\nf21 = sm.diff(f,x2,x1)\nf22 = sm.diff(f,x2,x2)\nsm.Matrix([[f11,f12],[f21,f22]])", "_____no_output_____" ] ], [ [ "**Question A:** Create a 3D plot and a contour plot of $f(x_1,x_2)$ such as those in the answer below.", "_____no_output_____" ] ], [ [ "_f = sm.lambdify((x1,x2),f)\n\n# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "# a. grids\nx1_vec = np.linspace(-2,2,500)\nx2_vec = np.linspace(-2,2,500)\nx1_grid,x2_grid = np.meshgrid(x1_vec,x2_vec,indexing='ij')\nf_grid = _f(x1_grid,x2_grid)\n\n# b. main\nfig = plt.figure()\nax = fig.add_subplot(1,1,1,projection='3d')\ncs = ax.plot_surface(x1_grid,x2_grid,f_grid,cmap=cm.jet)\n\n# c. add labels\nax.set_xlabel('$x_1$')\nax.set_ylabel('$x_2$')\nax.set_zlabel('$f$')\n\n# d. invert xaxis\nax.invert_xaxis()\n\n# e. add colorbar\nfig.colorbar(cs);", "_____no_output_____" ], [ "fig = plt.figure()\nax = fig.add_subplot(1,1,1)\nlevels = np.sort([j*10**(-i) for i in [-1,0,1,2,3,4] for j in [0.5,1,1.5]])\ncs = ax.contour(x1_grid,x2_grid,f_grid,levels=levels,cmap=cm.jet)\nfig.colorbar(cs);", "_____no_output_____" ] ], [ [ "**Question B:** Construct python functions for the jacobian and the hessian.", "_____no_output_____" ] ], [ [ "f_python = lambda x: _f(x[0],x[1])\n\n# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "_f1 = sm.lambdify((x1,x2),f1)\n_f2 = sm.lambdify((x1,x2),f2)\n_f11 = sm.lambdify((x1,x2),f11)\n_f12 = sm.lambdify((x1,x2),f12)\n_f21 = sm.lambdify((x1,x2),f21)\n_f22 = sm.lambdify((x1,x2),f22)\n\ndef f_jac(x):\n return np.array([_f1(x[0],x[1]),_f2(x[0],x[1])])\n\ndef f_hess(x):\n row1 = [_f11(x[0],x[1]),_f12(x[0],x[1])]\n row2 = [_f21(x[0],x[1]),_f22(x[0],x[1])]\n return np.array([row1,row2])\n", "_____no_output_____" ] ], [ [ "**Question C:** Minimize $f(x_1,x_2)$ using respectively\n\n1. 
Nelder-Mead,\n2. BFGS without analytical jacobian,\n3. BFGS with analytical jacobian, and\n4. Newton-CG with analytical jacobian and hessian\n\nCompare the results and discuss which optimizer you prefer.\n\n**Optional:** If you wish, you can use the functions defined in the hidden cells below to also track how the optimizers converge to the solution.", "_____no_output_____" ] ], [ [ "def collect(x):\n    \n    # globals used to keep track across iterations\n    global evals # set evals = 0 before calling optimizer\n    global x0\n    global x1s\n    global x2s\n    global fs\n    \n    # a. initialize list\n    if evals == 0:\n        x1s = [x0[0]] \n        x2s = [x0[1]]\n        fs = [f_python(x0)]\n        \n    # b. append trial values\n    x1s.append(x[0])\n    x2s.append(x[1])\n    fs.append(f_python(x))\n    \n    # c. increment number of evaluations\n    evals += 1", "_____no_output_____" ], [ "def contour():\n    \n    global evals\n    global x1s\n    global x2s\n    global fs\n    \n    # a. contour plot\n    fig = plt.figure(figsize=(10,4))\n    ax = fig.add_subplot(1,2,1)\n    levels = np.sort([j*10**(-i) for i in [-1,0,1,2,3,4] for j in [0.5,1,1.5]])\n    cs = ax.contour(x1_grid,x2_grid,f_grid,levels=levels,cmap=cm.jet)\n    fig.colorbar(cs)\n    ax.plot(x1s,x2s,'-o',ms=4,color='black')\n    ax.set_xlabel('$x_1$')\n    ax.set_ylabel('$x_2$')\n    \n    # b. function value\n    ax = fig.add_subplot(1,2,2)\n    ax.plot(np.arange(evals+1),fs,'-o',ms=4,color='black')\n    ax.set_xlabel('iteration')\n    ax.set_ylabel('function value')", "_____no_output_____" ], [ "x0 = [-2,-1] # suggested initial guess\n\n# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "print('Nelder-Mead:')\nevals = 0\nresult = optimize.minimize(f_python,x0,method='Nelder-Mead',callback=collect,options={'disp':True})\ncontour()", "Nelder-Mead:\nOptimization terminated successfully.\n         Current function value: 0.000000\n         Iterations: 43\n         Function evaluations: 83\n" ], [ "print('BFGS without analytical gradient:')\n\nevals = 0\nresult = optimize.minimize(f_python,x0,method='BFGS',callback=collect,options={'disp':True})\ncontour()", "BFGS without analytical gradient:\nOptimization terminated successfully.\n         Current function value: 0.000000\n         Iterations: 26\n         Function evaluations: 112\n         Gradient evaluations: 28\n" ], [ "print('BFGS with analytical gradient:')\n\nevals = 0\nresult = optimize.minimize(f_python,x0,jac=f_jac,method='BFGS',callback=collect,options={'disp':True})\ncontour()", "BFGS with analytical gradient:\nOptimization terminated successfully.\n         Current function value: 0.000000\n         Iterations: 26\n         Function evaluations: 28\n         Gradient evaluations: 28\n" ], [ "print('Newton-CG with analytical gradient and hessian:')\n\nevals = 0\nresult = optimize.minimize(f_python,x0,jac=f_jac,hess=f_hess,method='Newton-CG',callback=collect,options={'disp':True})\ncontour()", "Newton-CG with analytical gradient and hessian:\nOptimization terminated successfully.\n         Current function value: 0.000000\n         Iterations: 15\n         Function evaluations: 16\n         Gradient evaluations: 30\n         Hessian evaluations: 15\n" ] ], [ [ "## Optimization problem II", "_____no_output_____", "Consider the function", "_____no_output_____", "$$\nf(x_1,x_2) = (4-2.1x_1^2 + \\frac{x_1^4}{3})x_1^2 + x_1x_2 + (4x_2^2 - 4)x_2^2\n$$", "_____no_output_____", "Define it in **sympy** by:", "_____no_output_____" ] ], [ [ "x1 = sm.symbols('x_1')\nx2 = sm.symbols('x_2')\nf = (4-2.1*x1**2 + (x1**4)/3)*x1**2 + x1*x2 + (4*x2**2 - 4)*x2**2\n_f = sm.lambdify((x1,x2),f)\nf", "_____no_output_____" ] ], [ [ "Create **3D plot**:", "_____no_output_____" ] ], [
[ "# a. grids\nx1_vec = np.linspace(-2,2,500)\nx2_vec = np.linspace(-1,1,500)\nx1_grid,x2_grid = np.meshgrid(x1_vec,x2_vec,indexing='ij')\nf_grid = _f(x1_grid,x2_grid)\n\n# b. main\nfig = plt.figure()\nax = fig.add_subplot(1,1,1,projection='3d')\ncs = ax.plot_surface(x1_grid,x2_grid,f_grid,cmap=cm.jet)\n\n# c. add labels\nax.set_xlabel('$x_1$')\nax.set_ylabel('$x_2$')\nax.set_zlabel('$f$')\n\n# d. invert xaxis\nax.invert_xaxis()\n\n# e. remove background\nax.xaxis.pane.fill = False\nax.yaxis.pane.fill = False\nax.zaxis.pane.fill = False\n\n# f. add colorbar\nfig.colorbar(cs);", "_____no_output_____" ] ], [ [ "**Question A:** Find the minimum of the function starting from each of the suggested initial values below. Print the first 20 solutions, and all solutions aftwards, which is the best yet to be found. Save the solutions and associated function values in `xs` and `fs`.", "_____no_output_____" ] ], [ [ "# a. python function for f\nf_python = lambda x: _f(x[0],x[1])\n\n# b. initial guesses\nnp.random.seed(1986)\nK = 1000\nx0s = np.empty((K,2))\nx0s[:,0] = -2 + 4*np.random.uniform(size=K)\nx0s[:,1] = -1 + 2*np.random.uniform(size=K)\n\n# c. solutions and associated values\nxs = np.empty((K,2))\nfs = np.empty(K)\n\n# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "fopt = np.inf\nxopt = np.nan\nfor i,x0 in enumerate(x0s):\n \n # a. optimize\n result = optimize.minimize(f_python,x0,method='BFGS')\n xs[i,:] = result.x\n fs[i] = result.fun\n \n # b. print first 20 or if better than seen yet\n if i < 20 or fs[i] < fopt: # plot 20 first or if improving\n if fs[i] < fopt:\n xopt = xs[i,:]\n fopt = fs[i]\n print(f'{i:4d}: x0 = ({x0[0]:6.2f},{x0[0]:6.2f})',end='')\n print(f' -> converged at ({xs[i][0]:6.2f},{xs[i][1]:6.2f}) with f = {fs[i]:.12f}')\n \n# best solution\nprint(f'\\nbest solution:\\n x = ({xopt[0]:6.2f},{xopt[1]:6.2f}) -> f = {fopt:.12f}')", " 0: x0 = ( 0.28, 0.28) -> converged at ( 0.09, -0.71) with f = -1.031628453485\n 1: x0 = ( -1.69, -1.69) -> converged at ( -1.70, 0.80) with f = -0.215463824384\n 2: x0 = ( 0.43, 0.43) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n 3: x0 = ( 1.59, 1.59) -> converged at ( 1.70, -0.80) with f = -0.215463824384\n 4: x0 = ( 0.18, 0.18) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n 5: x0 = ( 0.81, 0.81) -> converged at ( -0.09, 0.71) with f = -1.031628453490\n 6: x0 = ( -0.46, -0.46) -> converged at ( -0.09, 0.71) with f = -1.031628453487\n 7: x0 = ( 0.61, 0.61) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n 8: x0 = ( 0.76, 0.76) -> converged at ( -0.09, 0.71) with f = -1.031628453484\n 9: x0 = ( 0.87, 0.87) -> converged at ( -0.09, 0.71) with f = -1.031628453490\n 10: x0 = ( 0.76, 0.76) -> converged at ( -0.09, 0.71) with f = -1.031628453489\n 11: x0 = ( 1.23, 1.23) -> converged at ( 1.70, -0.80) with f = -0.215463824384\n 12: x0 = ( -0.87, -0.87) -> converged at ( 0.09, -0.71) with f = -1.031628453486\n 13: x0 = ( 1.03, 1.03) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n 14: x0 = ( -0.77, -0.77) -> converged at ( -0.09, 0.71) with f = -1.031628453490\n 15: x0 = ( -0.25, -0.25) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n 16: x0 = ( 0.21, 0.21) -> converged at ( 0.09, -0.71) with f = -1.031628453488\n 17: x0 = ( -0.27, -0.27) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n 18: x0 = ( 0.31, 0.31) -> converged at ( -0.09, 0.71) with f = -1.031628453490\n 19: x0 = ( 1.57, 1.57) -> converged at ( 1.61, 0.57) with f = 2.104250310311\n 
24: x0 = ( -0.14, -0.14) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n 27: x0 = ( -0.13, -0.13) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n 155: x0 = ( 0.60, 0.60) -> converged at ( 0.09, -0.71) with f = -1.031628453490\n\nbest solution:\n x = ( 0.09, -0.71) -> f = -1.031628453490\n" ] ], [ [ "**Question B:** Create a 3D scatter plot of where the optimizer converges, and color the dots by the associated function values.", "_____no_output_____" ] ], [ [ "# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "# a. main\nfig = plt.figure()\nax = fig.add_subplot(1,1,1,projection='3d')\ncs = ax.scatter(xs[:,0],xs[:,1],fs,c=fs); \n\n# b. add labels\nax.set_xlabel('$x_1$')\nax.set_ylabel('$x_2$')\nax.set_zlabel('$f$')\n\n# c. invert xaxis\nax.invert_xaxis()\n\n# d. colorbar\nfig.colorbar(cs);", "_____no_output_____" ] ], [ [ "**Question C:** Plot the function values at the solutions as a function of the starting values.", "_____no_output_____" ] ], [ [ "# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "# a. main\nfig = plt.figure()\nax = fig.add_subplot(1,1,1,projection='3d')\ncs = ax.scatter(x0s[:,0],x0s[:,1],fs,c=fs); \n\n# b. add labels\nax.set_xlabel('$x_1$')\nax.set_ylabel('$x_2$')\nax.set_zlabel('$f$')\n\n# c. invert xaxis\nax.invert_xaxis()\n\n# d. colorbar\nfig.colorbar(cs);", "_____no_output_____" ], [ "sm.init_printing(pretty_printing=False)", "_____no_output_____" ] ], [ [ "# Problem: Solve the consumer problem with income risk I", "_____no_output_____" ], [ "Define the following **variables** and **parameters**:\n\n* $m_t$ is cash-on-hand in period $t$\n* $c_t$ is consumption in period $t$\n* $y_t$ is income in period $t$\n* $\\Delta \\in (0,1)$ is income risk\n* $r$ is the interest rate\n* $\\beta > 0$, $\\rho > 1$, $\\nu > 0 $, $\\kappa > 0$, $\\xi > 0$ are utility parameters", "_____no_output_____" ], [ "In the **second period** the household solves:\n\n$$\n\\begin{aligned}\nv_{2}(m_{2}) &= \\max_{c_{2}}\\frac{c_{2}^{1-\\rho}}{1-\\rho}+\\nu\\frac{(m_{2}-c_{2}+\\kappa)^{1-\\rho}}{1-\\rho} \\\\\n \\text{s.t.} \\\\\nc_{2} & \\in [0,m_{2}]\n\\end{aligned}\n$$", "_____no_output_____" ], [ "In the **first period** the household solves:\n\n$$\n\\begin{aligned}\nv_{1}(m_{1}) & = \n\\max_{c_{1}}\\frac{c_{1}^{1-\\rho}}{1-\\rho}+\\beta\\mathbb{E}_{1}\\left[v_2(m_2)\\right] \\\\\n\\text{s.t.} \\\\\nm_2 &= (1+r)(m_{1}-c_{1})+y_{2} \\\\\ny_{2} &= \\begin{cases}\n1-\\Delta & \\text{with prob. }0.5\\\\\n1+\\Delta & \\text{with prob. }0.5 \n\\end{cases}\\\\\nc_{1} & \\in [0,m_{1}]\\\\\n\\end{aligned}\n$$", "_____no_output_____" ], [ "The **basic functions** are:", "_____no_output_____" ] ], [ [ "def utility(c,rho):\n return c**(1-rho)/(1-rho)\n\ndef bequest(m,c,nu,kappa,rho):\n return nu*(m-c+kappa)**(1-rho)/(1-rho)\n\ndef v2(c2,m2,rho,nu,kappa):\n return utility(c2,rho) + bequest(m2,c2,nu,kappa,rho)\n\ndef v1(c1,m1,rho,beta,r,Delta,v2_interp):\n \n # a. v2 value, if low income\n m2_low = (1+r)*(m1-c1) + 1-Delta\n v2_low = v2_interp([m2_low])[0]\n \n # b. v2 value, if high income\n m2_high = (1+r)*(m1-c1) + 1+Delta\n v2_high = v2_interp([m2_high])[0]\n \n # c. expected v2 value\n v2 = 0.5*v2_low + 0.5*v2_high\n \n # d. total value\n return utility(c1,rho) + beta*v2", "_____no_output_____" ] ], [ [ "The **solution functions** are:", "_____no_output_____" ] ], [ [ "def solve_period_2(rho,nu,kappa,Delta):\n\n # a. 
grids\n    m2_vec = np.linspace(1e-8,5,500)\n    v2_vec = np.empty(500)\n    c2_vec = np.empty(500)\n\n    # b. solve for each m2 in grid\n    for i,m2 in enumerate(m2_vec):\n\n        # i. objective\n        obj = lambda c2: -v2(c2,m2,rho,nu,kappa)\n\n        # ii. initial value (consume half)\n        x0 = m2/2\n\n        # iii. optimizer\n        result = optimize.minimize_scalar(obj,x0,method='bounded',bounds=[1e-8,m2])\n\n        # iv. save\n        v2_vec[i] = -result.fun\n        c2_vec[i] = result.x\n     \n    return m2_vec,v2_vec,c2_vec\n\ndef solve_period_1(rho,beta,r,Delta,v1,v2_interp):\n\n    # a. grids\n    m1_vec = np.linspace(1e-8,4,100)\n    v1_vec = np.empty(100)\n    c1_vec = np.empty(100)\n    \n    # b. solve for each m1 in grid\n    for i,m1 in enumerate(m1_vec):\n        \n        # i. objective\n        obj = lambda c1: -v1(c1,m1,rho,beta,r,Delta,v2_interp)\n        \n        # ii. initial guess (consume half)\n        x0 = m1*1/2\n        \n        # iii. optimize\n        result = optimize.minimize_scalar(obj,x0,method='bounded',bounds=[1e-8,m1])\n        \n        # iv. save\n        v1_vec[i] = -result.fun\n        c1_vec[i] = result.x\n     \n    return m1_vec,v1_vec,c1_vec", "_____no_output_____" ] ], [ [ "**Question A:** Find optimal consumption in the first period as a function of cash-on-hand, and plot it.", "_____no_output_____" ] ], [ [ "rho = 8\nkappa = 0.5\nnu = 0.1\nr = 0.04\nbeta = 0.94\nDelta = 0.5\n\n# b. solve\n# write your code here\n\n# c. plot\n# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "# b. solve\ndef solve(rho,beta,r,Delta,nu,kappa,v1):\n    \n    # a. solve period 2\n    m2_vec,v2_vec,c2_vec = solve_period_2(rho,nu,kappa,Delta)\n    \n    # b. construct interpolator\n    v2_interp = interpolate.RegularGridInterpolator((m2_vec,), v2_vec,\n                                                    bounds_error=False,fill_value=None)\n    \n    # c. solve period 1\n    m1_vec,v1_vec,c1_vec = solve_period_1(rho,beta,r,Delta,v1,v2_interp)\n    \n    return m1_vec,c1_vec\n\nm1_vec,c1_vec = solve(rho,beta,r,Delta,nu,kappa,v1)\n\n# c. plot\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nax.plot(m1_vec,c1_vec)\nax.set_xlabel('$m_1$')\nax.set_ylabel('$c_1$')\nax.set_title('consumption function in period 1')\nax.set_xlim([0,4])\nax.set_ylim([0,2.5]);", "_____no_output_____" ] ], [ [ "**Question B:** Find optimal consumption in the first period as a function of cash-on-hand, and plot it, assuming that\n\n$$ \ny_{2} = \\begin{cases}\n1-\\sqrt{\\Delta} & \\text{with prob. }0.1\\\\\n1-\\Delta & \\text{with prob. }0.4\\\\\n1+\\Delta & \\text{with prob. }0.4\\\\ \n1+\\sqrt{\\Delta} & \\text{with prob. }0.1\n\\end{cases}\n$$\n\nwhich adds some low-probability tail events but does not change mean income. Give an interpretation of the change in the consumption function.", "_____no_output_____" ] ], [ [ "# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "def v1_alt(c1,m1,rho,beta,r,Delta,v2_interp):\n    \n    # a. expected v2 value\n    Ra = (1+r)*(m1-c1)\n    v2 = 0\n    y2s = [1-np.sqrt(Delta),1-Delta,1+Delta,1+np.sqrt(Delta)]\n    probs = [0.1,0.4,0.4,0.1]\n    for y2,prob in zip(y2s,probs):\n        m2 = Ra + y2\n        v2 += prob*v2_interp([m2])[0]\n    \n    # b. 
total value\n    return utility(c1,rho) + beta*v2\n\nm1_vec_alt,c1_vec_alt = solve(rho,beta,r,Delta,nu,kappa,v1_alt)\n\n# plot\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nax.plot(m1_vec,c1_vec,label='original')\nax.plot(m1_vec_alt,c1_vec_alt,label='new')\nax.legend(loc='upper left')\nax.set_xlabel('$m_1$')\nax.set_ylabel('$c_1$')\nax.set_title('consumption function in period 1')\nax.set_xlim([0,4])\nax.set_ylim([0,2.5]);", "_____no_output_____" ] ], [ [ "# Problem: Solve the consumer problem with income risk II", "_____no_output_____", "Define the following **variables** and **parameters**:\n\n* $m_t$ is cash-on-hand in period $t$\n* $c_t$ is non-durable consumption in period $t$\n* $d_t$ is durable consumption in period $t$ (only adjustable in period 1)\n* $y_t$ is income in period $t$\n* $\\Delta \\in (0,1)$ is income risk\n* $r$ is the interest rate\n* $\\beta > 0$, $\\rho > 1$, $\\alpha \\in (0,1)$, $\\nu > 0 $, $\\kappa > 0$, $\\xi > 0$ are utility parameters", "_____no_output_____", "In the **second period** the household solves:\n\n$$\n\\begin{aligned}\nv_{2}(m_{2},d_{2}) &= \\max_{c_{2}}\\frac{c_{2}^{1-\\rho}}{1-\\rho}+\\alpha\\frac{d_{2}^{1-\\rho}}{1-\\rho}+\\nu\\frac{(m_{2}+d_{2}-c_{2}+\\kappa)^{1-\\rho}}{1-\\rho} \\\\\n\\text{s.t.} \\\\\nc_{2} & \\in [0,m_{2}]\n\\end{aligned}\n$$", "_____no_output_____", "In the **first period** the household solves:\n\n$$\n\\begin{aligned}\nv_{1}(m_{1}) &= \\max_{c_{1},d_{1}}\\frac{c_{1}^{1-\\rho}}{1-\\rho}+\\alpha\\frac{d_{1}^{1-\\rho}}{1-\\rho}+\\beta\\mathbb{E}_{1}\\left[v_2(m_2,d_2)\\right] \\\\\n\\text{s.t.} \\\\\nm_2 &= (1+r)(m_{1}-c_{1}-d_{1})+y_{2} \\\\\ny_{2} &= \\begin{cases}\n1-\\Delta & \\text{with prob. }0.5\\\\\n1+\\Delta & \\text{with prob. }0.5 \n\\end{cases}\\\\\nc_{1}+d_{1} & \\in [0,m_{1}]\\\\\n\\end{aligned}\n$$", "_____no_output_____", "Choose **parameters**:", "_____no_output_____" ] ], [ [ "rho = 2\nalpha = 0.1\nkappa = 0.5\nnu = 0.1\nr = 0.04\nbeta = 0.94\nDelta = 0.5\n\n# b. solve\n# write your code here\n\n# c. plot\n# write your code here", "_____no_output_____" ] ], [ [ "The **basic functions** are:", "_____no_output_____" ] ], [ [ "def utility(c,d,alpha,rho):\n    return c**(1-rho)/(1-rho) + alpha*d**(1-rho)/(1-rho)\n\ndef bequest(m,c,d,nu,kappa,rho):\n    return nu*(m+d-c+kappa)**(1-rho)/(1-rho)\n\ndef v2(c2,d2,m2,alpha,rho,nu,kappa):\n    return utility(c2,d2,alpha,rho) + bequest(m2,c2,d2,nu,kappa,rho)\n\ndef v1(c1,d1,m1,alpha,rho,beta,r,Delta,v2_interp):\n    \n    # a. v2 value, if low income\n    m2_low = (1+r)*(m1-c1-d1) + 1-Delta\n    v2_low = v2_interp([m2_low,d1])[0]\n    \n    # b. v2 value, if high income\n    m2_high = (1+r)*(m1-c1-d1) + 1+Delta\n    v2_high = v2_interp([m2_high,d1])[0]\n    \n    # c. expected v2 value\n    v2 = 0.5*v2_low + 0.5*v2_high\n    \n    # d. total value\n    return utility(c1,d1,alpha,rho) + beta*v2", "_____no_output_____" ] ], [ [ "The **solution function for period 2** is:", "_____no_output_____" ] ], [ [ "def solve_period_2(alpha,rho,nu,kappa,Delta):\n\n    # a. grids\n    m2_vec = np.linspace(1e-8,5,200)\n    d2_vec = np.linspace(1e-8,5,200)\n    v2_grid = np.empty((200,200))\n    c2_grid = np.empty((200,200))\n\n    # b. solve for each m2 in grid\n    for i,m2 in enumerate(m2_vec):\n        for j,d2 in enumerate(d2_vec):\n\n            # i. objective\n            obj = lambda c2: -v2(c2,d2,m2,alpha,rho,nu,kappa)\n\n            # ii. initial value (consume half)\n            x0 = m2/2\n\n            # iii. optimizer\n            result = optimize.minimize_scalar(obj,x0,method='bounded',bounds=[1e-8,m2])\n\n            # iv. 
save\n            v2_grid[i,j] = -result.fun\n            c2_grid[i,j] = result.x\n     \n    return m2_vec,d2_vec,v2_grid,c2_grid", "_____no_output_____" ] ], [ [ "**Question A:** Solve for consumption in period 2 and plot the consumption function.", "_____no_output_____" ] ], [ [ "# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "# a. solve\nm2_vec,d2_vec,v2_grid,c2_grid = solve_period_2(alpha,rho,nu,kappa,Delta)\n\n# b. grids\nm2_grid,d2_grid = np.meshgrid(m2_vec,d2_vec,indexing='ij')\n\n# c. main\nfig = plt.figure()\nax = fig.add_subplot(1,1,1,projection='3d')\ncs = ax.plot_surface(m2_grid,d2_grid,c2_grid,cmap=cm.jet)\n\n# d. add labels\nax.set_xlabel('$m_2$')\nax.set_ylabel('$d_2$')\nax.set_zlabel('$c_2$')\n\n# e. invert xaxis\nax.invert_xaxis()\n\n# f. add colorbar\nfig.colorbar(cs);", "_____no_output_____" ] ], [ [ "**Question B:** Find optimal consumption and choices of durables in the first period as a function of cash-on-hand and plot it.", "_____no_output_____" ] ], [ [ "# write your code here", "_____no_output_____" ] ], [ [ "**Answer:**", "_____no_output_____" ] ], [ [ "# a. define solve function\ndef solve_period_1(alpha,rho,beta,r,Delta,v1,v2_interp):\n\n    # a. grids\n    m1_vec = np.linspace(1e-4,4,100)\n    v1_vec = np.empty(100)\n    c1_vec = np.empty(100)\n    d1_vec = np.empty(100)\n    \n    # b. solve for each m1 in grid\n    for i,m1 in enumerate(m1_vec):\n        \n        # i. objective\n        obj = lambda x: -v1(x[0],x[1],m1,alpha,rho,beta,r,Delta,v2_interp)\n        \n        # ii. initial guess\n        x0 = [m1*1/3,m1*1/3]\n        \n        # iii. bounds and constraints\n        bound = (1e-8,m1-1e-8)\n        bounds = (bound, bound)\n        ineq_con = {'type': 'ineq', 'fun': lambda x: m1-x[0]-x[1]} \n        \n        # iv. optimize\n        result = optimize.minimize(obj,x0, method='SLSQP',\n                                   bounds=bounds,\n                                   constraints=[ineq_con])\n        \n        #result = optimize.minimize(obj,x0, method='Nelder-Mead')\n        \n        # v. save\n        v1_vec[i] = -result.fun\n        c1_vec[i] = result.x[0]\n        d1_vec[i] = result.x[1]\n     \n    return m1_vec,v1_vec,c1_vec,d1_vec\n\n# b. construct interpolator\nv2_interp = interpolate.RegularGridInterpolator((m2_vec,d2_vec), v2_grid,\n                                                bounds_error=False,fill_value=None)\n    \n# c. solve period 1\nm1_vec,v1_vec,c1_vec,d1_vec = solve_period_1(alpha,rho,beta,r,Delta,v1,v2_interp)\n\n# d. plot\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nax.plot(m1_vec,c1_vec,label='non-durable consumption')\nax.plot(m1_vec,d1_vec,label='durable consumption')\nax.legend(loc='upper left')\nax.set_xlabel('$m_1$')\nax.set_xlim([0,4])\nax.set_ylim([0,2.5]);", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7254af5b83d4d93c666ddf1823d32c91446c052
26,966
ipynb
Jupyter Notebook
jwst_validation_notebooks/cube_build/jwst_cube_build_miri_test/test_cube_build_spec3.ipynb
york-stsci/jwst_validation_notebooks
1a244c927a027f89dcb01d146964fef71b5ea029
[ "BSD-3-Clause" ]
4
2019-02-28T21:21:20.000Z
2022-01-31T04:24:12.000Z
jwst_validation_notebooks/cube_build/jwst_cube_build_miri_test/test_cube_build_spec3.ipynb
york-stsci/jwst_validation_notebooks
1a244c927a027f89dcb01d146964fef71b5ea029
[ "BSD-3-Clause" ]
58
2020-02-17T14:54:30.000Z
2022-03-10T14:53:00.000Z
jwst_validation_notebooks/cube_build/jwst_cube_build_miri_test/test_cube_build_spec3.ipynb
york-stsci/jwst_validation_notebooks
1a244c927a027f89dcb01d146964fef71b5ea029
[ "BSD-3-Clause" ]
20
2019-03-11T17:24:03.000Z
2022-01-07T20:57:13.000Z
35.203655
655
0.564859
[ [ [ "<a id=\"title_ID\"></a>\n# JWST Pipeline Validation Testing Notebook: Cube Build with MIRISIM Data\n\n<span style=\"color:red\"> **Instruments Affected**</span>: MIRI\n\nTested with simulated MIRI MRS data.", "_____no_output_____" ], [ "#### Author: Isha Nayak", "_____no_output_____" ], [ "This notebook is the second one that tests the cube build step of Spec3 of the JWST pipeline. The first notebook (https://github.com/spacetelescope/jwst_validation_notebooks/blob/master/jwst_validation_notebooks/cube_build/jwst_cube_build_miri_test/testing_point_source_flux_conservation_modeshep_all.ipynb) looks at data that bypasses MIRISim.", "_____no_output_____" ], [ "In this notebook we use MIRISim data created with a 4 point dither and SLOW mode to see if a flat source with flux of 0.01 Jy is still flat after being processed by Detector 1, Spec 2, and Spec 3 steps of the pipeline. We skip outlier detection so there are outliers that do not get flagged, resulting in noise spikes in the spectrum. ", "_____no_output_____" ], [ "In the Detector 1 step we turn off reference pixels because reference pixels are not accounted for in a consistent manner. In the Spec 2 step we turn off straylight because it can sometimes cause problems, and we skip extract 1D and cube build and rely on Spec 3 for those steps instead. In Spec 3 we turn off master background because it does not work well for point sources and the extract 1D step does background subtraction, we turn off mrs imatch because it takes too much computational time, and we turn off outlier detection since that will be a separate notebook that tests the functionality of noise spikes and cosmic rays getting detected.", "_____no_output_____" ], [ "In the first cube build notebook we inject a point source onto the MRS detector. We find that if we input a flat point source of 0.01 Jy, the standalone cube build function in Spec 3 will output a flat point source of 0.01 Jy within 2%. In each channel the spectrum is flat, but there is a mismatch of up to 2% within bands: i.e. the spectra is flat in Channel 1A and is flat is Channel 1B, but there is a small offset between Channel 1A and 1B. ", "_____no_output_____" ], [ "In this notebook the critera for passing in Channel/Band 1A through 3C is the median value in those channels has to be within 20% of the input flux of 0.01 Jy. Flux conservations is not perfect with mirisim data and the pipeline adds factors/corrections that cannot actually be tested until we get real data. Therefore the passing criteria in Channel/Band 1A through 3C is that the flux has to be conserved within 20% when using mirisim data and processing it through the pipeline, which is different from the 2% flux conservation criteria when we can bypass mirisim and test cube build.", "_____no_output_____" ], [ "The critera for passing in 4A through 4C is the median value has to be within 160%. 
Channel 4 flux calibration is incorrect in mirisim, and channel 4 data processed through the pipeline is also incorrect.", "_____no_output_____", "#### Update CRDS Context", "_____no_output_____" ] ], [ [ "#Update the context\n%env CRDS_CONTEXT=jwst_0719.pmap", "_____no_output_____" ] ], [ [ "#### Import necessary packages", "_____no_output_____" ] ], [ [ "# Box download imports\nfrom astropy.utils.data import download_file\nfrom pathlib import Path\nfrom shutil import move\nfrom os.path import splitext", "_____no_output_____" ], [ "#JWST functions\nimport jwst\nfrom jwst.pipeline import Detector1Pipeline\nfrom jwst.pipeline import Spec2Pipeline\nfrom jwst.pipeline import Spec3Pipeline", "_____no_output_____" ], [ "#Mathematics and plotting functions\nfrom astropy.io import fits\nimport matplotlib.pyplot as plt\nfrom astropy.utils.data import get_pkg_data_filename\nimport numpy as np", "_____no_output_____" ], [ "#Check JWST version\nprint(jwst.__version__ )", "_____no_output_____" ] ], [ [ "#### Setup Box Files", "_____no_output_____" ] ], [ [ "def get_box_files(file_list):\n    for box_url,file_name in file_list:\n        if 'https' not in box_url:\n            box_url = 'https://stsci.box.com/shared/static/' + box_url\n        downloaded_file = download_file(box_url)\n        if Path(file_name).suffix == '':\n            ext = splitext(box_url)[1]\n            file_name += ext\n        move(downloaded_file, file_name)", "_____no_output_____" ], [ "#****\n#\n# Set this variable to False to not use the temporary directory\n#\n#****\nuse_tempdir = True\n# Create a temporary directory to hold notebook output, and change the working directory to that directory.\nfrom tempfile import TemporaryDirectory\nimport os\nimport shutil\nif use_tempdir:\n    data_dir = TemporaryDirectory()\n    # If you have files that are in the notebook's directory, but that the notebook will need to use while\n    # running, copy them into the temporary directory here.\n    #\n    # files = ['name_of_file']\n    # for file_name in files:\n    #     shutil.copy(file_name, os.path.join(data_dir.name, file_name))\n    # Save original directory\n    orig_dir = os.getcwd()\n    # Move to new directory\n    os.chdir(data_dir.name)\n# For info, print out where the script is running\nprint(\"Running in {}\".format(os.getcwd()))", "_____no_output_____" ] ], [ [ "#### Setup Box Urls", "_____no_output_____" ] ], [ [ "seq1_file_urls = ['https://stsci.box.com/shared/static/bfdwxoi5d1bigpcrm8oax4gs7zvrx7jl.fits',\n                  'https://stsci.box.com/shared/static/prkl3tr4tvi19f069kc0mdzfyrq9zqq9.fits',\n                  'https://stsci.box.com/shared/static/tt81e5rp4325qn6qjndzr1tygwcd8mi3.fits',\n                  'https://stsci.box.com/shared/static/oji8v81g5fvqnz2mplz7vrrxzf09j17i.fits',\n                  'https://stsci.box.com/shared/static/qorfegleqsp47wk0c895xl0zzjlsc9za.fits',\n                  'https://stsci.box.com/shared/static/55pcsp5867p7a5dg5qx8yazqkuamjwdb.fits']\n\nseq1_file_names = ['det_image_seq1_MIRIFULONG_34LONGexp1.fits',\n                   'det_image_seq1_MIRIFULONG_34MEDIUMexp1.fits',\n                   'det_image_seq1_MIRIFULONG_34SHORTexp1.fits',\n                   'det_image_seq1_MIRIFUSHORT_12LONGexp1.fits',\n                   'det_image_seq1_MIRIFUSHORT_12MEDIUMexp1.fits',\n                   'det_image_seq1_MIRIFUSHORT_12SHORTexp1.fits']\n\nbox_download_list_seq1 = [(url,name) for url,name in zip(seq1_file_urls,seq1_file_names)]\n\nget_box_files(box_download_list_seq1)", "_____no_output_____" ], [ "seq2_file_urls = ['https://stsci.box.com/shared/static/l0338f4vx7lbt6nacmu5e3qn0t63f0hg.fits',\n                  'https://stsci.box.com/shared/static/zbeijqyendw0q3du7ttks4s3a88wulng.fits',\n                  'https://stsci.box.com/shared/static/426lhx21n77hyaur9zzi8fclqj6brlnq.fits',\n                  
'https://stsci.box.com/shared/static/3h2dbj7xrmz0sqtqurmxqzoys48ccguf.fits',\n 'https://stsci.box.com/shared/static/954nenk3p81wes4uo0izvydmuq23pswi.fits',\n 'https://stsci.box.com/shared/static/b20bea8ymgk6gp4bj7i307dgwgyp6bj8.fits']\n\nseq2_file_names = ['det_image_seq2_MIRIFULONG_34LONGexp1.fits',\n 'det_image_seq2_MIRIFULONG_34MEDIUMexp1.fits',\n 'det_image_seq2_MIRIFULONG_34SHORTexp1.fits',\n 'det_image_seq2_MIRIFUSHORT_12LONGexp1.fits',\n 'det_image_seq2_MIRIFUSHORT_12MEDIUMexp1.fits',\n 'det_image_seq2_MIRIFUSHORT_12SHORTexp1.fits']\n\nbox_download_list_seq2 = [(url,name) for url,name in zip(seq2_file_urls,seq2_file_names)]\n\nget_box_files(box_download_list_seq2)", "_____no_output_____" ], [ "seq3_file_urls = ['https://stsci.box.com/shared/static/jtaxk9nka3eiln5s583csvoujvkoim0h.fits',\n 'https://stsci.box.com/shared/static/mqvcswxmdermtj2keu979p35u9f4yten.fits',\n 'https://stsci.box.com/shared/static/nrxoryzu2vy264w5pr11ejeqktxicl7u.fits',\n 'https://stsci.box.com/shared/static/0zdaj8qxj5a2vovtejj15ba12gq4coio.fits',\n 'https://stsci.box.com/shared/static/x7rquwb5o9ncxsm550etnpig68bhqlxc.fits',\n 'https://stsci.box.com/shared/static/1vr3h1rmy7n7e9jq5rum13g496anqn39.fits']\n\nseq3_file_names = ['det_image_seq3_MIRIFULONG_34LONGexp1.fits',\n 'det_image_seq3_MIRIFULONG_34MEDIUMexp1.fits',\n 'det_image_seq3_MIRIFULONG_34SHORTexp1.fits',\n 'det_image_seq3_MIRIFUSHORT_12LONGexp1.fits',\n 'det_image_seq3_MIRIFUSHORT_12MEDIUMexp1.fits',\n 'det_image_seq3_MIRIFUSHORT_12SHORTexp1.fits']\n\nbox_download_list_seq3 = [(url,name) for url,name in zip(seq3_file_urls,seq3_file_names)]\n\nget_box_files(box_download_list_seq3)", "_____no_output_____" ], [ "seq4_file_urls = ['https://stsci.box.com/shared/static/mj13u4iqlqp8altu47wief0pa4nie05z.fits',\n 'https://stsci.box.com/shared/static/9gca440lueo3yl5qwi4h72w1z4imkw07.fits',\n 'https://stsci.box.com/shared/static/4jp6trj5vcu957v7mgrk3x0xswspjsyp.fits',\n 'https://stsci.box.com/shared/static/o0sxswa1fcykbll7ynqobc6xj7pinr72.fits',\n 'https://stsci.box.com/shared/static/c8udg4depbxoha6m7vbntmq1j2vcqljj.fits',\n 'https://stsci.box.com/shared/static/sgp67c1gotmp4kyc4okzjkxpdxcvv2g8.fits',\n 'https://stsci.box.com/shared/static/csgnwqoos19wt9tkavpb0ml75p2j6m0q.json']\n\nseq4_file_names = ['det_image_seq4_MIRIFULONG_34LONGexp1.fits',\n 'det_image_seq4_MIRIFULONG_34MEDIUMexp1.fits',\n 'det_image_seq4_MIRIFULONG_34SHORTexp1.fits',\n 'det_image_seq4_MIRIFUSHORT_12LONGexp1.fits',\n 'det_image_seq4_MIRIFUSHORT_12MEDIUMexp1.fits',\n 'det_image_seq4_MIRIFUSHORT_12SHORTexp1.fits',\n 'spec3_updated.json']\n\nbox_download_list_seq4 = [(url,name) for url,name in zip(seq4_file_urls,seq4_file_names)]\n\nget_box_files(box_download_list_seq4)", "_____no_output_____" ] ], [ [ "#### Setup Loops", "_____no_output_____" ] ], [ [ "#Band Loop 1\nband_loop1 = ['12SHORT', '12MEDIUM', '12LONG']\n\n#Band Loop 1\nband_loop2 = ['34SHORT', '34MEDIUM', '34LONG']\n\n#Dither Loop\ndit_loop = ['seq1', 'seq2', 'seq3', 'seq4']\n\n#Channels\nchan_type1=['ch1-short', 'ch1-medium', 'ch1-long', 'ch2-short', 'ch2-medium', 'ch2-long', 'ch3-short', 'ch3-medium', 'ch3-long', 'ch4-short', 'ch4-medium', 'ch4-long']\nchan_type2=['1a', '1b', '1c', '2a', '2b', '2c', '3a', '3b', '3c', '4a', '4b', '4c']\n\n#Colors\ncolor_list=['black', 'red', 'blue', 'black', 'red', 'blue', 'black', 'red', 'blue', 'black', 'red', 'blue']", "_____no_output_____" ] ], [ [ "#### Setup Detector 1 File Names", "_____no_output_____" ] ], [ [ "name1 = [' '] * 12\ncount=0\nfor j in range(0, 3): \n for i in range(0, 4):\n 
name1[count] = 'det_image_' + dit_loop[i] + '_MIRIFUSHORT_' + band_loop1[j] + 'exp1.fits'\n        count=count+1\n\nname2 = [' '] * 12\ncount=0\nfor j in range(0, 3): \n    for i in range(0, 4):\n        name2[count] = 'det_image_' + dit_loop[i] + '_MIRIFULONG_' + band_loop2[j] + 'exp1.fits'\n        count=count+1\n        \nname_det1=np.concatenate((name1,name2))", "_____no_output_____" ] ], [ [ "#### Run Detector 1 and Skip Reference Pixel", "_____no_output_____" ] ], [ [ "for i in range(0,24):\n    result_det1 = Detector1Pipeline.call(name_det1[i], save_results = True, steps = {\"refpix\" : {\"skip\" : True}})", "_____no_output_____" ] ], [ [ "#### Setup Spec 2 File Names", "_____no_output_____" ] ], [ [ "name3 = [' '] * 12\ncount=0\nfor j in range(0, 3): \n    for i in range(0, 4):\n        name3[count] = 'det_image_' + dit_loop[i] + '_MIRIFUSHORT_' + band_loop1[j] + 'exp1_rate.fits'\n        count=count+1\n\nname4 = [' '] * 12\ncount=0\nfor j in range(0, 3): \n    for i in range(0, 4):\n        name4[count] = 'det_image_' + dit_loop[i] + '_MIRIFULONG_' + band_loop2[j] + 'exp1_rate.fits'\n        count=count+1\n        \nname_spec2=np.concatenate((name3,name4))", "_____no_output_____" ] ], [ [ "#### Run Spec 2 and Skip Straylight, Extract1D, and Cube build", "_____no_output_____" ] ], [ [ "for i in range(0,24):\n    result_spec2 = Spec2Pipeline.call(name_spec2[i], save_results = True,steps = {\"straylight\" : {\"skip\" : True}, \"extract_1d\" : {\"skip\" : True}, \"cube_build\" : {\"skip\" : True}})", "_____no_output_____" ] ], [ [ "#### Setup Spec 3 File Names", "_____no_output_____" ] ], [ [ "name5 = [' '] * 12\ncount=0\nfor j in range(0, 3): \n    for i in range(0, 4):\n        name5[count] = 'det_image_' + dit_loop[i] + '_MIRIFUSHORT_' + band_loop1[j] + 'exp1_cal.fits'\n        count=count+1\n\nname6 = [' '] * 12\ncount=0\nfor j in range(0, 3): \n    for i in range(0, 4):\n        name6[count] = 'det_image_' + dit_loop[i] + '_MIRIFULONG_' + band_loop2[j] + 'exp1_cal.fits'\n        count=count+1\n        \nname_spec3=np.concatenate((name5,name6))", "_____no_output_____" ] ], [ [ "#### Change SRCTYPE from EXTENDED to POINT", "_____no_output_____" ] ], [ [ "for i in range(0, 24):\n    hdu=fits.open(name_spec3[i])\n    hdu['SCI'].header['SRCTYPE']='POINT'\n    hdu.writeto(name_spec3[i],overwrite=True)\n    hdu.close()", "_____no_output_____" ] ], [ [ "#### Run Spec 3 and Skip Master Background, MRS imatch, and Outlier Detection", "_____no_output_____", "As an example, this step is called with a different method than Detector1 or Spec2. 
You use the 'run' option instead of the 'call' option to use Spec3.", "_____no_output_____" ] ], [ [ "results_spec3 = Spec3Pipeline()\nresults_spec3.master_background.skip=True\nresults_spec3.mrs_imatch.skip=True\nresults_spec3.outlier_detection.skip=True\nresults_spec3.save_results=True\nresults_spec3.run(\"spec3_updated.json\")", "_____no_output_____" ] ], [ [ "#### Make figures from extract 1D output which uses EXTENDED as the default source type.", "_____no_output_____" ] ], [ [ "# Open the FITS file\nfor i in range(0, 12):\n    globals()['hdul_' + chan_type2[i]] = fits.open('spec3_results_' + chan_type1[i] + '_x1d.fits')", "_____no_output_____" ], [ "# Find length of each channel\nfor i in range(0, 12):\n    test = globals()['hdul_' + chan_type2[i]]\n    globals()['length_' + chan_type2[i]] = len(test[1].data)", "_____no_output_____" ], [ "# Make empty array for wavelength and flux\nfor i in range(0, 12):\n    globals()['wavelength_' + chan_type2[i]] = [0.0] * globals()['length_' + chan_type2[i]]\n    globals()['flux_' + chan_type2[i]] = [0.0] * globals()['length_' + chan_type2[i]]", "_____no_output_____" ], [ "# Read wavelength and flux\nfor j in range(0, 12):\n    for i in range(0, globals()['length_' + chan_type2[j]]):\n        globals()['wavelength_' + chan_type2[j]][i] = globals()['hdul_' + chan_type2[j]][1].data[i][0]\n        globals()['flux_' + chan_type2[j]][i] = globals()['hdul_' + chan_type2[j]][1].data[i][1]", "_____no_output_____" ], [ "# Make plot\nfor i in range(0, 12):\n    plt.plot(globals()['wavelength_' + chan_type2[i]], globals()['flux_' + chan_type2[i]], '-', color = color_list[i])\n\nplt.xlim(4,30)\nplt.ylim(0,0.15)", "_____no_output_____" ] ], [ [ "#### Make figures from summing up the flux in each channel of the 3D cube.", "_____no_output_____" ] ], [ [ "# Make figure and keep track of median value in each band\nmedian_s3d = [0.0] * 12\n\nfor j in range(0, 12):\n    image3, header3 = fits.getdata('spec3_results_' + chan_type1[j] + '_s3d.fits', header = True)\n    \n    num_x3 = header3[\"NAXIS2\"]\n    num_y3 = header3[\"NAXIS1\"]\n    num_chan3 = header3[\"NAXIS3\"]\n    start_wavelength3 = header3[\"CRVAL3\"]\n    step_wavelength3 = header3[\"CDELT3\"]\n    pix_size3 = header3[\"CDELT1\"]\n    \n    a3 = [0.0] * num_chan3\n    for i in range(0, num_chan3):\n        for m in range(0, num_x3):\n            for n in range(0, num_y3):\n                a3[i] = image3[i, m, n] + a3[i]\n\n    d3 = [0.0] * num_chan3\n    d3[0] = start_wavelength3\n    for i in range(1, num_chan3):\n        d3[i] = d3[i-1] + step_wavelength3\n    \n    for i in range(0, num_chan3):\n        a3[i] = (a3[i] * (pix_size3 * 3600) * (pix_size3 * 3600) * (10**6)) / (4.25 * 10**10)\n    \n    median_s3d[j]=np.median(a3)\n    \n    plt.plot(d3, a3, '-', color = color_list[j], lw = 1) \n    plt.xlim(4, 30)\n    plt.ylim(0, 0.15)", "_____no_output_____" ] ], [ [ "#### Define Pass/Fail Criteria For S3D", "_____no_output_____" ] ], [ [ "# Check median is off by less than 20% of expected flux in channel/band 1a through 3c\nfor i in range(0, 9):\n    if (100 * ((median_s3d[i]) - (0.01)) / (0.01)) < 20:\n        a = (100 * ((median_s3d[i]) - (0.01)) / (0.01))\n        print(chan_type2[i] + ': pass' + ',' + np.str(np.round(a)) + '%')\n    else:\n        a = (100 * ((median_s3d[i]) - (0.01)) / (0.01))\n        print(chan_type2[i] + ': fail' + ',' + np.str(np.round(a)) + '%')", "_____no_output_____" ], [ "# Check median is off by less than 160% of expected flux in channel/band 4a through 4c\nfor i in range(9, 12):\n    if (100 * ((median_s3d[i]) - (0.01)) / (0.01)) < 160:\n        a = (100 * ((median_s3d[i]) - (0.01)) / (0.01))\n        print(chan_type2[i] + ': pass' + ',' + np.str(np.round(a)) + 
'%')\n    else:\n        a = (100 * ((median_s3d[i]) - (0.01)) / (0.01))\n        print(chan_type2[i] + ': fail' + ',' + np.str(np.round(a)) + '%')", "_____no_output_____" ] ], [ [ "#### Summary", "_____no_output_____", "We plot below the pass/fail criteria. In Channel/Band 1a through 3c, the criteria is that the median output after the Spec 3 step of the pipeline is within 20% of the 0.01 Jy flat flux that was input. The output from the pipeline is off by 9%-16% from the expected flux.", "_____no_output_____", "The flux output from channel 4a through 4c is off by 36% to 153% from the input flux. Therefore we set the pass/fail criteria in channel 4 such that the output should be within 160% of the input. Both the data made by MIRISim and its processing through the pipeline contribute to this large discrepancy in this particular channel. MIRISim simulations in Channel 4 do not represent the data we expect to get in flight.", "_____no_output_____" ] ], [ [ "# Band 1A through 3C\nx_limit = [4.89, 18.04]\ny_limit = [0.01, 0.01]\n\n#Set 20% Pass/Fail Criteria\ny_20_upper = [0.01 + (0.01 * (0.2)), 0.01 + (0.01 * (0.2))]\ny_20_lower = [0.01 - (0.01 * (0.2)), 0.01 - (0.01 * (0.2))]\n\nplt.plot(x_limit, y_limit, '--', color='black', lw=1, label = 'expected')\nplt.plot(x_limit, y_20_upper, '--', color='cyan', lw=1, label = '20% limit')\nplt.plot(x_limit, y_20_lower, '--', color='cyan', lw=1,)\nplt.plot(globals()['wavelength_' + chan_type2[0]][0], (median_s3d[0]), 'o', color = color_list[0], label = '1A, 2A, 3A')\nplt.plot(globals()['wavelength_' + chan_type2[1]][0], (median_s3d[1]), 'o', color = color_list[1], label = '1B, 2B, 3B')\nplt.plot(globals()['wavelength_' + chan_type2[2]][0], (median_s3d[2]), 'o', color = color_list[2], label = '1C, 2C, 3C')\n\nfor i in range(0, 9):\n    plt.plot(globals()['wavelength_' + chan_type2[i]][0], (median_s3d[i]), 'o', color = color_list[i])\n\nplt.xlim(4, 30)\nplt.legend()", "_____no_output_____" ], [ "# Band 4A through 4C\nx_limit = [17.66, 28.31]\ny_limit = [0.01, 0.01]\n\n#Set 160% Pass/Fail Criteria\ny_160_upper = [0.01 + (0.01 * (1.6)), 0.01 + (0.01 * (1.6))]\ny_160_lower = [0.01 - (0.01 * (1.6)), 0.01 - (0.01 * (1.6))]\n\nplt.plot(x_limit, y_limit, '--', color='black', lw=1, label = 'expected')\nplt.plot(x_limit, y_160_upper, '--', color='cyan', lw=1, label = '160% limit')\nplt.plot(x_limit, y_160_lower, '--', color='cyan', lw=1)\nplt.plot(globals()['wavelength_' + chan_type2[9]][0], (median_s3d[9]), 'o', color = color_list[9], label = '4A')\nplt.plot(globals()['wavelength_' + chan_type2[10]][0], (median_s3d[10]), 'o', color = color_list[10], label = '4B')\nplt.plot(globals()['wavelength_' + chan_type2[11]][0], (median_s3d[11]), 'o', color = color_list[11], label = '4C')\n\nfor i in range(9, 12):\n    plt.plot(globals()['wavelength_' + chan_type2[i]][0], (median_s3d[i]), 'o', color = color_list[i])\n    \nplt.xlim(4, 30)\nplt.legend()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ] ]
e725554902364ea85be613154b4e25cdd0f0c504
18,037
ipynb
Jupyter Notebook
notebooks/D1_L5_Pandas/12-Performance-Eval-and-Query.ipynb
benleedy/DSBC2019
1da7f49b0e7d088395940b6408c7a6e8e1708bf5
[ "MIT" ]
null
null
null
notebooks/D1_L5_Pandas/12-Performance-Eval-and-Query.ipynb
benleedy/DSBC2019
1da7f49b0e7d088395940b6408c7a6e8e1708bf5
[ "MIT" ]
5
2020-01-28T23:05:25.000Z
2022-02-10T00:22:11.000Z
notebooks/D1_L5_Pandas/12-Performance-Eval-and-Query.ipynb
highrain2/myTestProjs
edc3c8bd041110ce79fe2a6dd134dc068205ca52
[ "MIT" ]
null
null
null
29.233387
282
0.579864
[ [ [ "# High-Performance Pandas: eval() and query()", "_____no_output_____" ], [ "As we've already seen in previous sections, the power of the PyData stack is built upon the ability of NumPy and Pandas to push basic operations into C via an intuitive syntax: examples are vectorized/broadcasted operations in NumPy, and grouping-type operations in Pandas.\nWhile these abstractions are efficient and effective for many common use cases, they often rely on the creation of temporary intermediate objects, which can cause undue overhead in computational time and memory use.\n\nAs of version 0.13 (released January 2014), Pandas includes some experimental tools that allow you to directly access C-speed operations without costly allocation of intermediate arrays.\nThese are the ``eval()`` and ``query()`` functions, which rely on the [Numexpr](https://github.com/pydata/numexpr) package.\nIn this notebook we will walk through their use and give some rules-of-thumb about when you might think about using them.", "_____no_output_____" ], [ "## Motivating ``query()`` and ``eval()``: Compound Expressions\n\nWe've seen previously that NumPy and Pandas support fast vectorized operations; for example, when adding the elements of two arrays:", "_____no_output_____" ] ], [ [ "import numpy as np\nrng = np.random.RandomState(42)\nx = rng.rand(1000000)\ny = rng.rand(1000000)\n%timeit x + y", "_____no_output_____" ] ], [ [ "As discussed in *Computation on NumPy Arrays: Universal Functions*, this is much faster than doing the addition via a Python loop or comprehension:", "_____no_output_____" ] ], [ [ "%timeit np.fromiter((xi + yi for xi, yi in zip(x, y)), dtype=x.dtype, count=len(x))", "_____no_output_____" ] ], [ [ "But this abstraction can become less efficient when computing compound expressions.\nFor example, consider the following expression:", "_____no_output_____" ] ], [ [ "mask = (x > 0.5) & (y < 0.5)", "_____no_output_____" ] ], [ [ "Because NumPy evaluates each subexpression, this is roughly equivalent to the following:", "_____no_output_____" ] ], [ [ "tmp1 = (x > 0.5)\ntmp2 = (y < 0.5)\nmask = tmp1 & tmp2", "_____no_output_____" ] ], [ [ "In other words, *every intermediate step is explicitly allocated in memory*. 
If the ``x`` and ``y`` arrays are very large, this can lead to significant memory and computational overhead.\nThe Numexpr library gives you the ability to compute this type of compound expression element by element, without the need to allocate full intermediate arrays.\nThe [Numexpr documentation](https://github.com/pydata/numexpr) has more details, but for the time being it is sufficient to say that the library accepts a *string* giving the NumPy-style expression you'd like to compute:", "_____no_output_____" ] ], [ [ "import numexpr\nmask_numexpr = numexpr.evaluate('(x > 0.5) & (y < 0.5)')\nnp.allclose(mask, mask_numexpr)", "_____no_output_____" ] ], [ [ "The benefit here is that Numexpr evaluates the expression in a way that does not use full-sized temporary arrays, and thus can be much more efficient than NumPy, especially for large arrays.\nThe Pandas ``eval()`` and ``query()`` tools that we will discuss here are conceptually similar, and depend on the Numexpr package.", "_____no_output_____" ], [ "## ``pandas.eval()`` for Efficient Operations\n\nThe ``eval()`` function in Pandas uses string expressions to efficiently compute operations using ``DataFrame``s.\nFor example, consider the following ``DataFrame``s:", "_____no_output_____" ] ], [ [ "import pandas as pd\nnrows, ncols = 100000, 100\nrng = np.random.RandomState(42)\ndf1, df2, df3, df4 = (pd.DataFrame(rng.rand(nrows, ncols))\n for i in range(4))", "_____no_output_____" ] ], [ [ "To compute the sum of all four ``DataFrame``s using the typical Pandas approach, we can just write the sum:", "_____no_output_____" ] ], [ [ "%timeit df1 + df2 + df3 + df4", "_____no_output_____" ] ], [ [ "The same result can be computed via ``pd.eval`` by constructing the expression as a string:", "_____no_output_____" ] ], [ [ "%timeit pd.eval('df1 + df2 + df3 + df4')", "_____no_output_____" ] ], [ [ "The ``eval()`` version of this expression is about 50% faster (and uses much less memory), while giving the same result:", "_____no_output_____" ] ], [ [ "np.allclose(df1 + df2 + df3 + df4,\n pd.eval('df1 + df2 + df3 + df4'))", "_____no_output_____" ] ], [ [ "### Operations supported by ``pd.eval()``\n\nAs of Pandas v0.16, ``pd.eval()`` supports a wide range of operations.\nTo demonstrate these, we'll use the following integer ``DataFrame``s:", "_____no_output_____" ] ], [ [ "df1, df2, df3, df4, df5 = (pd.DataFrame(rng.randint(0, 1000, (100, 3)))\n for i in range(5))", "_____no_output_____" ] ], [ [ "#### Arithmetic operators\n``pd.eval()`` supports all arithmetic operators. 
For example:", "_____no_output_____" ] ], [ [ "result1 = -df1 * df2 / (df3 + df4) - df5\nresult2 = pd.eval('-df1 * df2 / (df3 + df4) - df5')\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "#### Comparison operators\n``pd.eval()`` supports all comparison operators, including chained expressions:", "_____no_output_____" ] ], [ [ "result1 = (df1 < df2) & (df2 <= df3) & (df3 != df4)\nresult2 = pd.eval('df1 < df2 <= df3 != df4')\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "#### Bitwise operators\n``pd.eval()`` supports the ``&`` and ``|`` bitwise operators:", "_____no_output_____" ] ], [ [ "result1 = (df1 < 0.5) & (df2 < 0.5) | (df3 < df4)\nresult2 = pd.eval('(df1 < 0.5) & (df2 < 0.5) | (df3 < df4)')\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "In addition, it supports the use of the literal ``and`` and ``or`` in Boolean expressions:", "_____no_output_____" ] ], [ [ "result3 = pd.eval('(df1 < 0.5) and (df2 < 0.5) or (df3 < df4)')\nnp.allclose(result1, result3)", "_____no_output_____" ] ], [ [ "#### Object attributes and indices\n\n``pd.eval()`` supports access to object attributes via the ``obj.attr`` syntax, and indexes via the ``obj[index]`` syntax:", "_____no_output_____" ] ], [ [ "result1 = df2.T[0] + df3.iloc[1]\nresult2 = pd.eval('df2.T[0] + df3.iloc[1]')\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "#### Other operations\nOther operations such as function calls, conditional statements, loops, and other more involved constructs are currently *not* implemented in ``pd.eval()``.\nIf you'd like to execute these more complicated types of expressions, you can use the Numexpr library itself.", "_____no_output_____" ], [ "## ``DataFrame.eval()`` for Column-Wise Operations\n\nJust as Pandas has a top-level ``pd.eval()`` function, ``DataFrame``s have an ``eval()`` method that works in similar ways.\nThe benefit of the ``eval()`` method is that columns can be referred to *by name*.\nWe'll use this labeled array as an example:", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(rng.rand(1000, 3), columns=['A', 'B', 'C'])\ndf.head()", "_____no_output_____" ] ], [ [ "Using ``pd.eval()`` as above, we can compute expressions with the three columns like this:", "_____no_output_____" ] ], [ [ "result1 = (df['A'] + df['B']) / (df['C'] - 1)\nresult2 = pd.eval(\"(df.A + df.B) / (df.C - 1)\")\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "The ``DataFrame.eval()`` method allows much more succinct evaluation of expressions with the columns:", "_____no_output_____" ] ], [ [ "result3 = df.eval('(A + B) / (C - 1)')\nnp.allclose(result1, result3)", "_____no_output_____" ] ], [ [ "Notice here that we treat *column names as variables* within the evaluated expression, and the result is what we would wish.", "_____no_output_____" ], [ "### Assignment in DataFrame.eval()\n\nIn addition to the options just discussed, ``DataFrame.eval()`` also allows assignment to any column.\nLet's use the ``DataFrame`` from before, which has columns ``'A'``, ``'B'``, and ``'C'``:", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ] ], [ [ "We can use ``df.eval()`` to create a new column ``'D'`` and assign to it a value computed from the other columns:", "_____no_output_____" ] ], [ [ "df.eval('D = (A + B) / C', inplace=True)\ndf.head()", "_____no_output_____" ] ], [ [ "In the same way, any existing column can be modified:", "_____no_output_____" ] ], [ [ "df.eval('D = (A - B) / C', inplace=True)\ndf.head()", 
"_____no_output_____" ] ], [ [ "### Local variables in DataFrame.eval()\n\nThe ``DataFrame.eval()`` method supports an additional syntax that lets it work with local Python variables.\nConsider the following:", "_____no_output_____" ] ], [ [ "column_mean = df.mean(1)\nresult1 = df['A'] + column_mean\nresult2 = df.eval('A + @column_mean')\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "The ``@`` character here marks a *variable name* rather than a *column name*, and lets you efficiently evaluate expressions involving the two \"namespaces\": the namespace of columns, and the namespace of Python objects.\nNotice that this ``@`` character is only supported by the ``DataFrame.eval()`` *method*, not by the ``pandas.eval()`` *function*, because the ``pandas.eval()`` function only has access to the one (Python) namespace.", "_____no_output_____" ], [ "## DataFrame.query() Method\n\nThe ``DataFrame`` has another method based on evaluated strings, called the ``query()`` method.\nConsider the following:", "_____no_output_____" ] ], [ [ "result1 = df[(df.A < 0.5) & (df.B < 0.5)]\nresult2 = pd.eval('df[(df.A < 0.5) & (df.B < 0.5)]')\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "As with the example used in our discussion of ``DataFrame.eval()``, this is an expression involving columns of the ``DataFrame``.\nIt cannot be expressed using the ``DataFrame.eval()`` syntax, however!\nInstead, for this type of filtering operation, you can use the ``query()`` method:", "_____no_output_____" ] ], [ [ "result2 = df.query('A < 0.5 and B < 0.5')\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "In addition to being a more efficient computation, compared to the masking expression this is much easier to read and understand.\nNote that the ``query()`` method also accepts the ``@`` flag to mark local variables:", "_____no_output_____" ] ], [ [ "Cmean = df['C'].mean()\nresult1 = df[(df.A < Cmean) & (df.B < Cmean)]\nresult2 = df.query('A < @Cmean and B < @Cmean')\nnp.allclose(result1, result2)", "_____no_output_____" ] ], [ [ "## Performance: When to Use These Functions\n\nWhen considering whether to use these functions, there are two considerations: *computation time* and *memory use*.\nMemory use is the most predictable aspect. 
As already mentioned, every compound expression involving NumPy arrays or Pandas ``DataFrame``s will result in implicit creation of temporary arrays:\nFor example, this:", "_____no_output_____" ] ], [ [ "x = df[(df.A < 0.5) & (df.B < 0.5)]", "_____no_output_____" ] ], [ [ "Is roughly equivalent to this:", "_____no_output_____" ] ], [ [ "tmp1 = df.A < 0.5\ntmp2 = df.B < 0.5\ntmp3 = tmp1 & tmp2\nx = df[tmp3]", "_____no_output_____" ] ], [ [ "If the size of the temporary ``DataFrame``s is significant compared to your available system memory (typically several gigabytes) then it's a good idea to use an ``eval()`` or ``query()`` expression.\nYou can check the approximate size of your array in bytes using this:", "_____no_output_____" ] ], [ [ "df.values.nbytes", "_____no_output_____" ] ], [ [ "On the performance side, ``eval()`` can be faster even when you are not maxing-out your system memory.\nThe issue is how your temporary ``DataFrame``s compare to the size of the L1 or L2 CPU cache on your system (typically a few megabytes in 2016); if they are much bigger, then ``eval()`` can avoid some potentially slow movement of values between the different memory caches.\nIn practice, I find that the difference in computation time between the traditional methods and the ``eval``/``query`` method is usually not significant–if anything, the traditional method is faster for smaller arrays!\nThe benefit of ``eval``/``query`` is mainly in the saved memory, and the sometimes cleaner syntax they offer.\n\nWe've covered most of the details of ``eval()`` and ``query()`` here; for more information on these, you can refer to the Pandas documentation.\nIn particular, different parsers and engines can be specified for running these queries; for details on this, see the discussion within the [\"Enhancing Performance\" section](http://pandas.pydata.org/pandas-docs/dev/enhancingperf.html).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7256ba9f500659e41bd8a2f3bee877fd75c5847
11,831
ipynb
Jupyter Notebook
SEGMENT AND DECTECT CHARACTER/Predict_1_9.ipynb
truongkyle/LUAN_VAN_PYTHON
7c7fcfb9e390e4f1f8fef06b775be4ad7f0db4ad
[ "CNRI-Python" ]
null
null
null
SEGMENT AND DECTECT CHARACTER/Predict_1_9.ipynb
truongkyle/LUAN_VAN_PYTHON
7c7fcfb9e390e4f1f8fef06b775be4ad7f0db4ad
[ "CNRI-Python" ]
null
null
null
SEGMENT AND DECTECT CHARACTER/Predict_1_9.ipynb
truongkyle/LUAN_VAN_PYTHON
7c7fcfb9e390e4f1f8fef06b775be4ad7f0db4ad
[ "CNRI-Python" ]
null
null
null
50.776824
4,036
0.744231
[ [ [ "from keras.models import load_model\n", "Using TensorFlow backend.\n" ], [ "import numpy as np\nimport csv\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport cv2\nfrom PIL import Image as PIL_Image\nfrom PIL import ImageEnhance\nfrom PIL import ImageOps", "_____no_output_____" ], [ "import datetime", "_____no_output_____" ], [ "model = load_model('../CNN_1_9_1CNN.hdf5')", "_____no_output_____" ], [ "def general(img):\n #Image enhancement \n contr = ImageEnhance.Contrast(img) \n img = contr.enhance(5) # The enhancement values (contrast and brightness) \n bright = ImageEnhance.Brightness(img) # depends on backgroud, external lights etc \n img = bright.enhance(2.0)\n img = ImageOps.expand(img,border=80,fill='white') \n threshold = 120 \n img = img.point(lambda p: p > threshold and 255) \n \n #Find bounding box \n inverted = ImageOps.invert(img) \n box = inverted.getbbox() \n img_new = img.crop(box) \n width, height = img_new.size \n ratio = min((28./height), (28./width)) \n background = PIL_Image.new('RGB', (28,28), (255,255,255)) \n \n if(height == width): \n img_new = img_new.resize((28,28)) \n elif(height>width): \n img_new = img_new.resize((int(width*ratio),28)) \n background.paste(img_new, (int((28-img_new.size[0])/2),int((28-img_new.size[1])/2))) \n else: \n img_new = img_new.resize((28, int(height*ratio))) \n background.paste(img_new, (int((28-img_new.size[0])/2),int((28-img_new.size[1])/2))) \n \n # background = background.convert(\"L\")\n img_data=np.asarray(background)\n img_data = img_data[:,:,0] \n img_data = 255 -img_data\n \n return img_data", "_____no_output_____" ], [ "orig_img_path = './c61.png'\nimg = PIL_Image.open(orig_img_path).convert(\"L\")\nimg", "_____no_output_____" ], [ "img_1 = general(img)\nimg_1 = 255 -img_1\n\nprint(img_1.shape)\nplt.imshow(img_1, cmap='gray')\nplt.show()\nimg_1=img_1.reshape(1,28,28,1)\nprint(img_1.shape)", "(28, 28)\n" ], [ "a = datetime.datetime.now()\npredict = model.predict(img_1)\nb = datetime.datetime.now()\nc = b -a\nprint(str(c.total_seconds()))", "0.003478\n" ], [ "print(np.argmax(predict))\nprint(predict)", "6\n[[8.1788057e-05 2.7376764e-05 1.5893633e-06 1.6395157e-05 6.1521394e-05\n 2.8133759e-04 9.9945229e-01 8.6021091e-06 6.2560524e-05 6.5903350e-06]]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7256f0b88ca19a85414767d5900789521b4ea9d
16,441
ipynb
Jupyter Notebook
Pseudocode.ipynb
oktaysadak/Pseudocode
7f4f1d8b8bdb53cf0ad8cf3e6db37d3f8aacfc44
[ "MIT" ]
null
null
null
Pseudocode.ipynb
oktaysadak/Pseudocode
7f4f1d8b8bdb53cf0ad8cf3e6db37d3f8aacfc44
[ "MIT" ]
null
null
null
Pseudocode.ipynb
oktaysadak/Pseudocode
7f4f1d8b8bdb53cf0ad8cf3e6db37d3f8aacfc44
[ "MIT" ]
null
null
null
31.256654
451
0.573505
[ [ [ "## What is Pseudo-code?", "_____no_output_____" ], [ "Pseudo-code is an informal coding practice to help programmers plan algorithms and train programmatic thinking. Pseudo-code is not tied to any specific programming language such as Python, JavaScript, or C#. Instead, it uses human language for describing the actions needed to build the algorithm to solve a specific problem. \nBy using pseudo-code, you can plan every step of your program without worrying about syntax. Its greatest benefit is to allow you to discover the vulnerabilities and opportunities of your programmatic logic and help you improve it before implementation.", "_____no_output_____" ], [ "It is essentially breaking down the problem into simpler steps and in simpler words. We usually don't use \"technical\" language for pseudocoding but rather a conversational, everyday expressions.", "_____no_output_____" ], [ "### Do's & Don'ts:", "_____no_output_____" ], [ "#### Do's:", "_____no_output_____" ], [ "- Use control structure\n- Use proper naming conventions\n- Indentations and white spaces are the key\n- Keep it simple\n- Keep it concise", "_____no_output_____" ], [ "#### Don'ts:", "_____no_output_____" ], [ "- Don't make your pseudo-code abstract\n- Don't be too generalized", "_____no_output_____" ], [ "### Some Examples", "_____no_output_____" ], [ "**Write a program that asks the user for a temperature in Fahrenheit and prints out the same temperature in Celsius.** ", "_____no_output_____" ] ], [ [ "# This is a sample pseudo code", "_____no_output_____" ] ], [ [ "x = Get user input\ny = Convert x to Celsius\nOutput message displaying Celsius temperature", "_____no_output_____" ] ], [ [ "# Here is the pseudocode ", "_____no_output_____" ], [ "# 1. ask a user to input a temperature in Fahrenheit\n# For this, use the input command for user input. Note when we use the input function, data type is string by default\nx=input('Please input a temperature in Fahrenheit. ') \n \n# 2. Use formula to convert degrees in Celsius to Fahrenheit\n\ny=(float(x)-32)*5/9 # Formula to convert degree to Fahrenheit \n\nprint('Your temperature in Celsius is:', y)", "Your temperature in Celsius is: 37.77777777777778\n" ] ], [ [ "Now let's see how we could pseudocode the process in opposite dirrection using the thinking process we have already for the degree to Fahreheit: \n\n**Write a program that converts from Fahrenheit to Celsius or from Celsius to Fahrenheit, depending on the user's choice.**", "_____no_output_____" ] ], [ [ "# This is a pseudo code\n\n# 1. give a user option to choose if they want to convert from degrees to Fahrenheit or opposite\n # thinking backwards, we need to expain to user first that they have options and describe them so the user knows exactly what input to provide to get the desired output\n\n# 2. 
depending on user's input, execute a specific code block (code block 1: Fahrenheit to Celsius or code block 2: elsius to Fahrenheit)", "_____no_output_____" ] ], [ [ "x = input \"Press 1 to convert from Fahrenheit to Celsius or Press 2 to convert from Celsius to Fahrenheit.\"\ny = input ask what number?\nz = choice\n\nif z = 1 (1 is pressed) \n do f to c conversion # Note we use the correct identation in the pseudocode as well\n print output\nelif z = 2 (2 is pressed)\n do c to f conversion\n print output\nelse\n print \"Please enter either 1 or 2\"", "_____no_output_____" ] ], [ [ "# Here is the code ", "_____no_output_____" ], [ "x = int(input(\"Press 1 to convert from Fahrenheit to Celsius or Press 2 to convert from Celsius to Fahrenheit.\"))\ny = input(\"What is the number you want to convert? \")\nz = float(x)\n\nif z == 1:\n y=(float(y)-32)*5/9\n print(y)\nelif z == 2:\n y=float(y)*9/5+32\n print(y)\nelse:\n print(\"Please enter either 1 or 2\")", "37.77777777777778\n" ] ], [ [ "**Making the code more robust**", "_____no_output_____" ], [ "What else can be added? \nRight now if we debug our code, after first wrong input (let's say '3') the code will print an error message and stop. \nIn order to succeed with the request and improve the user experience we need to keep on asking for the value until the correct one is received. Besides that we should prevent compiling errors. \nThe **float()** function can receive only numerical values. What if user enters string value? They will receive Python error message which is useless for him. \nWe need to prevent such bad user experiences.", "_____no_output_____" ] ], [ [ "# This is a pseudo code\n\n# 1. Before the start of execution any of the code blocks, we should handle potential crashes of our algorithm by setting up a proper communocation with user - with very clear \"error\" messages. This is usually called \"handling edge cases\" - when user could break our algorithm with bas inputs. We should never assume what will be user's behavior - some users are \"malicious\" and they tend to do whatever it takes to break the algorithm.\n\n# edge case 1: users should be allowed just to input 1 or 2 - in any other case send a message to users notifying them about their options\n\n# edge case 2: when inputting a value for the temperature to be converted, make sure users input only numbers as the only valid input - in any other case send a message to users notifying them about their options", "_____no_output_____" ] ], [ [ "flag1 = entered value is wrong \n# We are setting the default value to wrong/false. We will change the value to correct/true if the input value is correct \nwhile flag1 is wrong\n x = input \"Press 1 to convert from Fahrenheit to Celsius or Press 2 to convert from Celsius to Fahrenheit. \"\n if x is not 1 or 2\n print error message\n else\n z = choice\n flag1 = entered value is correct\n\nflag2 = entered number is not a number\nwhile flag2 is wrong\n y = input ask what number?\n if y is not number\n print error message\n else\n convert y into float number\n flag2 = entered value is correct\n\nif z = 1 (1 is pressed)\n do f to c conversion\n print output\nelse (2 is pressed)\n do c to f conversion\n print output", "_____no_output_____" ] ], [ [ "# Here is the code ", "_____no_output_____" ], [ "flag1=False\nwhile flag1==False:\n x = input(\"Press 1 to convert from Fahrenheit to Celsius or Press 2 to convert from Celsius to Fahrenheit. 
\")\n if x not in ['1','2']:\n print('ERROR: Please enter either 1 or 2')\n else:\n z = int(x)\n flag1=True\n\nflag2=False\nwhile flag2==False:\n y = input(\"What is the number you want to convert?\")\n if y.replace(\".\", \"\", 1).lstrip('+-').isdigit()==False:\n print('ERROR: Please enter a number')\n else:\n y=float(y)\n flag2=True\n\nif z == 1:\n y=(y-32)*5/9\n print(y)\nelse:\n y=y*9/5+32\n print(y)", "ERROR: Please enter either 1 or 2\n210.2\n" ] ], [ [ "### Exercise - Writing Pseudocode", "_____no_output_____" ], [ "### Duel Of Sorcerers\n\n\n\n\nYou are witnessing an epic battle between two powerful sorcerers: Gandalf and Saruman. Each sorcerer has 10 spells of different powers in their mind and they are going to throw them one after the other. The winner of the duel will be the one who wins more of those clashes between spells. Spells are represented as a list of 10 integers whose value equals the power of the spell.\n\n```py\ngandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]\nsaruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]\n```\n\nFor example:\n\n- The first clash is won by Saruman: 10 against 23\n- The second clash is won by Saruman: 11 against 66\n etc.\n\nYou will create two variables, one for each sorcerer, where the total of number of clashes won by each sorcerer will be stored. Depending on which variable is greater at the end of the duel, you will show one of the following three results on the screen:\n\n```\nGandalf wins\nSaruman wins\nTie\n```\n\nWrite a pseudocode to solve this problem and then code it!", "_____no_output_____" ] ], [ [ "Pseudocode:\n\n- Create variables and assign spell power lists to them\n- Create variables to store number of wins by Gandalf and by Saruman. Assign 0 to each variable that stores the wins.\n- Execute the spell clashes. Iterate on the length of any one list:\n - Compare consecutive powers\n - If Gandalf wins update victory for Gandalf (variable that stores number of wins should go up by 1)\n - else if Saruman wins update the victory for Saruman (the same as above, just this time for Saruman)\n - else it is a tie\n- If Gandalf's wins are greater than Saruman's - print Gandalf wins\n- Else if Saruman wins are greater than Gandalf's - print Saruman wins\n- Else print there is a tie", "_____no_output_____" ], [ "\n# Assign spell power lists to variables\n\ngandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]\nsaruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]\n\n# Assign 0 to each variable that stores the victories\ntotal_gandalf_wins = 0\ntotal_saruman_wins = 0\n\n# Execution of spell clashes\nfor i in range(0, len(gandalf)):\n if (gandalf[i] > saruman[i]):\n total_gandalf_wins += 1\n elif (saruman[i] > gandalf[i]):\n total_saruman_wins += 1\n\n# We check who has won, do not forget the possibility of a draw.\n# Print the result based on the winner.\nif (total_gandalf_wins > total_saruman_wins):\n print (\"Gandalf wins!\")\nelif (total_saruman_wins > total_gandalf_wins):\n print (\"Saruman wins!\")\nelse:\n print (\"No winners, it is a tie.\")\n", "Gandalf wins!\n" ] ], [ [ "1. You are witnessing an epic battle between two powerful sorcerers: Gandalf and Saruman. Each sorcerer has 10 spells of variable power in their mind and they are going to throw them one after the other. The winner of the duel will be the one who wins more of those clashes between spells. 
Spells are represented as a list of 10 integers whose value equals the power of the spell.\n\ngandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]\nsaruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]\n\nFor example:\nThe first clash is won by Saruman: 10 against 23, wins 23\nThe second clash is won by Saruman: 11 against 66, wins 66\netc.\n\nYou will create two variables, one for each sorcerer, where the sum of clashes won will be stored. Depending on which variable is greater at the end of the duel, you will show one of the following three results on the screen:\nGandalf wins\nSaruman wins\nTie\n\nWrite a pseudocode to solve this problem and then code it!", "_____no_output_____" ] ], [ [ "gandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]\nsaruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]\n\ntotal_gandalf_wins = 0\ntotal_saruman_wins = 0\n\nfor i in range(0, len(gandalf)):\n    if (gandalf[i] > saruman[i]):\n        total_gandalf_wins += 1\n    elif (saruman[i] > gandalf[i]):\n        total_saruman_wins += 1\n\nif (total_gandalf_wins > total_saruman_wins):\n    print (\"Gandalf wins!\")\nelif (total_saruman_wins > total_gandalf_wins):\n    print (\"Saruman wins!\")\nelse:\n    print (\"No winners, it is a tie.\")\n", "Gandalf wins!\n" ] ] ]
[ "markdown", "code", "raw", "code", "markdown", "code", "raw", "code", "markdown", "code", "raw", "code", "markdown", "code", "raw", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "raw" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "raw" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "raw" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "raw" ], [ "code" ] ]
e725809013cfcce21d1ad301bde9acc1cad330c1
867
ipynb
Jupyter Notebook
Untitled2.ipynb
jungminshan/drosophila
8efccfdaaac1404811eac2d81a90f5f42b1d24c1
[ "MIT" ]
null
null
null
Untitled2.ipynb
jungminshan/drosophila
8efccfdaaac1404811eac2d81a90f5f42b1d24c1
[ "MIT" ]
null
null
null
Untitled2.ipynb
jungminshan/drosophila
8efccfdaaac1404811eac2d81a90f5f42b1d24c1
[ "MIT" ]
null
null
null
18.847826
37
0.557093
[]
[]
[]
e72581f5e7fecbb33e7b5e1d5d861afffff97f65
426,523
ipynb
Jupyter Notebook
source/notebooks/Examples/.ipynb_checkpoints/Camera check-checkpoint.ipynb
eellak/gsoc2019-diyrobot
2121e696a534e01b0700abc31303ed00691724b4
[ "MIT" ]
19
2019-05-09T13:40:02.000Z
2022-03-25T17:39:51.000Z
source/notebooks/Examples/Camera check.ipynb
eellak-gsoc2021/diyrobot
966dcfa971b5d259c138fa13ba97488ecc62b52c
[ "MIT" ]
1
2020-01-26T17:40:33.000Z
2020-04-16T08:14:34.000Z
source/notebooks/Examples/Camera check.ipynb
eellak-gsoc2021/diyrobot
966dcfa971b5d259c138fa13ba97488ecc62b52c
[ "MIT" ]
6
2020-01-16T09:28:48.000Z
2022-02-19T21:06:19.000Z
4,352.27551
424,672
0.96222
[ [ [ "# Camera check", "_____no_output_____" ] ], [ [ "from proteas_lib import vision", "_____no_output_____" ], [ "import time", "_____no_output_____" ], [ "canvas = vision.show_image(jupyter=True)", "_____no_output_____" ] ], [ [ "Jupyter Notebook performance on OpenCV image preview is poor, use the frame directly for proccesing with othes classes eg. face detection. The show_image() class should be used only for tests. ", "_____no_output_____" ] ], [ [ "cam_1 = vision.camera(camera=0)\nfor i in range(1,10):\n frame = cam_1.take_frame()\n canvas.preview(frame)\ncam_1.stop()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e725963fcb4f29738bcc7332980919d978e90b61
23,470
ipynb
Jupyter Notebook
JupyterNotebooks/Lessons/Lesson 5.ipynb
afrank2002/CMPT-221L-621-21F
7dd778ebcad44e33202a420dc75331026aa067b0
[ "MIT" ]
null
null
null
JupyterNotebooks/Lessons/Lesson 5.ipynb
afrank2002/CMPT-221L-621-21F
7dd778ebcad44e33202a420dc75331026aa067b0
[ "MIT" ]
null
null
null
JupyterNotebooks/Lessons/Lesson 5.ipynb
afrank2002/CMPT-221L-621-21F
7dd778ebcad44e33202a420dc75331026aa067b0
[ "MIT" ]
null
null
null
64.125683
599
0.649084
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e72598fa54aaa7aa6e7f67a4ba597f5485e3669e
27,154
ipynb
Jupyter Notebook
examples/user_guide/8_Geography.ipynb
tomwhite/datashader
569520dc47493591ebdd058ab552e305420ac527
[ "BSD-3-Clause" ]
null
null
null
examples/user_guide/8_Geography.ipynb
tomwhite/datashader
569520dc47493591ebdd058ab552e305420ac527
[ "BSD-3-Clause" ]
null
null
null
examples/user_guide/8_Geography.ipynb
tomwhite/datashader
569520dc47493591ebdd058ab552e305420ac527
[ "BSD-3-Clause" ]
null
null
null
37.146375
674
0.605657
[ [ [ "Datashader is a general-purpose tool for rasterizing (and re-rasterizing) data of many different types. To make it easier to apply this general functionality to the particular domain of geoscience, Datashader provides a few geospatial-specific utilities as well:\n\n* [Project points](#Project-points)\n* [Generate terrain](#Generate-terrain)\n* [Hillshade](#Hillshade)\n* [Slope](#Slope)\n* [Aspect](#Aspect)\n* [Bump map](#Bump-map)\n* [NDVI](#NDVI)\n* [Mean](#Mean)\n* [Proximity](#Proximity)\n* [Viewshed](#Viewshed)\n* [Zonal Statistics](#Zonal-Statistics)\n\nThis notebook explains each of these topics in turn. See also [GeoViews](http://geoviews.org), which is designed to work with Datashader to provide a large range of additional geospatial functionality.", "_____no_output_____" ], [ "## Project points\n\nYou can use [GeoViews](http://geoviews.org) or the underlying [pyproj/proj.4](https://pyproj4.github.io/pyproj) libraries to perform arbitrary projections to and from a large number of different coordinate reference systems. However, for the common case of wanting to view data with latitude and longitude coordinates on top of a Web Mercator tile source such as Google Maps or OpenStreetMap, Datashader provides a simple self-contained utility `lnglat_to_meters(longitude, latitude)` to project your data once, before visualization. For instance, if you have a dataframe with some latitude and longitude points stretching from San Diego, California to Bangor, Maine:", "_____no_output_____" ] ], [ [ "import numpy as np, pandas as pd\nfrom datashader.utils import lnglat_to_meters\n\nSan_Diego = 32.715, -117.1625\nBangor = 44.8, -68.8\nn = 20\n\ndf = pd.DataFrame(dict(longitude = np.linspace(San_Diego[1], Bangor[1], n),\n latitude = np.linspace(San_Diego[0], Bangor[0], n)))", "_____no_output_____" ] ], [ [ "Then you can create new columns (or overwrite old ones) with the projected points in meters from the origin (Web Mercator coordinates):", "_____no_output_____" ] ], [ [ "df.loc[:, 'x'], df.loc[:, 'y'] = lnglat_to_meters(df.longitude,df.latitude)\ndf.tail()", "_____no_output_____" ] ], [ [ "The new x and y coordinates aren't very useful for humans to read, but they can now be overlaid directly onto web map sources, which are labeled with latitude and longitude appropriately by Bokeh (via HoloViews) but are actually in Web Mercator coordinates internally:", "_____no_output_____" ] ], [ [ "import holoviews as hv\nfrom holoviews.operation.datashader import datashade, spread\nfrom holoviews.element import tiles\nhv.extension('bokeh')\n\npts = spread(datashade(hv.Points(df, ['x', 'y']), cmap=\"white\", width=300, height=100), px=3)\n\ntiles.EsriImagery() * pts", "_____no_output_____" ] ], [ [ "If you are using GeoViews, you can get the same effect by calling [gv.operation.project](http://geoviews.org/user_guide/Projections.html#Explicitly-projecting). With GeoViews, you can also declare your object to be in lon,lat coordinates natively (`from cartopy import crs ; gv.Points(df, ['longitude', 'latitude'], crs=crs.PlateCarree())`) and let GeoViews then reproject the points as needed, but dynamic reprojection will be much slower for interactive use than projecting them in bulk ahead of time.", "_____no_output_____" ], [ "## Generate Terrain Data\n\nThe rest of the geo-related functions focus on raster data (or rasterized data, after a previous Datashader step that returns an Xarray object). 
To demonstrate using these raster-based functions, let's generate some fake terrain as an elevation raster:", "_____no_output_____" ] ], [ [ "import numpy as np, datashader as ds, datashader.geo as dsgeo\nfrom datashader.transfer_functions import shade, stack\nfrom datashader.colors import Elevation\n\nW = 800\nH = 600\n\ncvs = ds.Canvas(plot_width=W, plot_height=H, x_range=(-20e6, 20e6), y_range=(-20e6, 20e6))\nterrain = dsgeo.generate_terrain(cvs)\n\nshade(terrain, cmap=['black', 'white'], how='linear')", "_____no_output_____" ] ], [ [ "The grayscale value above shows the elevation linearly in intensity (with the large black areas indicating low elevation), but it will look more like a landscape if we map the lowest values to colors representing water, and the highest to colors representing mountaintops:", "_____no_output_____" ] ], [ [ "shade(terrain, cmap=Elevation, how='linear')", "_____no_output_____" ] ], [ [ "## Hillshade\n\n[Hillshade](https://en.wikipedia.org/wiki/Terrain_cartography) is a technique used to visualize terrain as shaded relief, illuminating it with a hypothetical light source. The illumination value for each cell is determined by its orientation to the light source, which is based on slope and aspect.", "_____no_output_____" ] ], [ [ "illuminated = dsgeo.hillshade(terrain)\n\nshade(illuminated, cmap=['gray', 'white'], alpha=255, how='linear')", "_____no_output_____" ] ], [ [ "You can combine hillshading with elevation colormapping to convey differences in terrain with elevation:", "_____no_output_____" ] ], [ [ "stack(shade(illuminated, cmap=['gray', 'white'], alpha=255, how='linear'),\n      shade(terrain, cmap=Elevation, alpha=128, how='linear'))", "_____no_output_____" ] ], [ [ "## Slope\n[Slope](https://en.wikipedia.org/wiki/Slope) is the inclination of a surface. \nIn geography, *slope* is the amount of change in elevation of a terrain relative to its surroundings.\n\nDatashader's slope function returns slope in degrees. 
Below we highlight areas at risk for avalanche by looking at [slopes around 38 degrees](http://wenatcheeoutdoors.org/2016/04/07/avalanche-abcs-for-snowshoers/).", "_____no_output_____" ] ], [ [ "risky = dsgeo.slope(terrain)\nrisky.data = np.where(np.logical_and(risky.data > 25, risky.data < 50), 1, np.nan)\n\nstack(shade(terrain, cmap=['black', 'white'], how='linear'),\n shade(illuminated, cmap=['black', 'white'], how='linear', alpha=128),\n shade(risky, cmap='red', how='linear', alpha=200))", "_____no_output_____" ] ], [ [ "## Aspect\n\n[Aspect](https://en.wikipedia.org/wiki/Aspect_%28geography%29) is the orientation of slope, measured clockwise in degrees from 0 to 360, where 0 is north-facing, 90 is east-facing, 180 is south-facing, and 270 is west-facing.\n\nBelow, we look to find slopes that face close to North.", "_____no_output_____" ] ], [ [ "north_faces = dsgeo.aspect(terrain)\nnorth_faces.data = np.where(np.logical_or(north_faces.data > 350 ,\n north_faces.data < 10), 1, np.nan)\n\nstack(shade(terrain, cmap=['black', 'white'], how='linear'),\n shade(illuminated, cmap=['black', 'white'], how='linear', alpha=128),\n shade(north_faces, cmap=['aqua'], how='linear', alpha=100))", "_____no_output_____" ] ], [ [ "## NDVI\n\nThe [Normalized Difference Vegetation Index](https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index) (NDVI) is a metric designed to detect regions with vegetation by measuring the difference between near-infrared (NIR) light (which vegetation reflects) and red light (which vegetation absorbs).\n\nThe NDVI ranges over [-1,+1], where `-1` means more \"Red\" radiation while `+1` means more \"NIR\" radiation. NDVI values close to +1.0 suggest areas dense with active green foliage, while strongly negative values suggest cloud cover or snow, and values near zero suggest open water, urban areas, or bare soil. \n\nFor our synthetic example here, we don't have access to NIR measurements, but we can approximate the results for demonstration purposes by using the green and blue channels of a colormapped image, as those represent a difference in wavelength similar to NIR vs. Red.", "_____no_output_____" ] ], [ [ "import xarray as xr\n\nrgba = stack(shade(terrain, cmap=Elevation, how='linear')).to_pil()\nr,g,b,a = [xr.DataArray(np.flipud(np.asarray(rgba.getchannel(c))))/255.0 \n for c in ['R','G','B','A']]\n\nndvi = dsgeo.ndvi(nir_agg=g, red_agg=b)\nshade(ndvi, cmap=['purple','black','green'], how='linear')", "_____no_output_____" ] ], [ [ "## Bump\n\nBump mapping is a cartographic technique that can be used to create the appearance of trees or other land features, which is useful when synthesizing human-interpretable images from source data like land use classifications.\n\n`dsgeo.bump` will produce a bump aggregate for adding detail to the terrain.\n\nIn this example, we will pretend the bumps are trees, and shade them with green. 
We'll also use the elevation data to modulate whether there are trees and if so how tall they are.\n\n- First, we'll define a custom `height` function to return tree heights suitable for the given elevation range\n- `dsgeo.bump` accepts a function with only a single argument (`locations`), so we will use `functools.partial` to provide values for the other arguments.\n- Bump mapping isn't normally a performance bottleneck, but if you want, you can speed it up by using Numba on your custom `height` function (`from datashader.utils import ngjit`, then put `@ngjit` above `def heights(...)`).", "_____no_output_____" ] ], [ [ "from functools import partial\n\ndef heights(locations, src, src_range, height=20):\n num_bumps = locations.shape[0]\n out = np.zeros(num_bumps, dtype=np.uint16)\n for r in range(0, num_bumps):\n loc = locations[r]\n x = loc[0]\n y = loc[1]\n val = src[y, x]\n if val >= src_range[0] and val < src_range[1]:\n out[r] = height\n return out\n\nT = 300000 # Number of trees to add per call\nsrc = terrain.data\n%time trees = dsgeo.bump(W, H, count=T, height_func=partial(heights, src=src, src_range=(1000, 1300), height=5))\ntrees += dsgeo.bump(W, H, count=T//2, height_func=partial(heights, src=src, src_range=(1300, 1700), height=20))\ntrees += dsgeo.bump(W, H, count=T//3, height_func=partial(heights, src=src, src_range=(1700, 2000), height=5))\n\ntree_colorize = trees.copy()\ntree_colorize.data[tree_colorize.data == 0] = np.nan\nhillshade = dsgeo.hillshade(terrain + trees)\n\nstack(shade(terrain, cmap=['black', 'white'], how='linear'),\n shade(hillshade, cmap=['black', 'white'], how='linear', alpha=128),\n shade(tree_colorize, cmap='limegreen', how='linear'))", "_____no_output_____" ] ], [ [ "## Mean\nThe `datashader.mean` function will smooth a given aggregate by using a 3x3 mean convolution filter. Optional parameters include `passes`, which is used to run the mean filter multiple times, and also `excludes` which are values that will not be modified by the mean filter.\n\nWe can use `mean` to add a coastal vignette to give out terrain scene a bit more character. Notice the water below now has a nice coastal gradient which adds some realism to our scene.", "_____no_output_____" ] ], [ [ "LAND_CONSTANT = 50.0\n\nwater = terrain.copy()\nwater.data = np.where(water.data > 0, LAND_CONSTANT, 0)\nwater = dsgeo.mean(water, passes=50, excludes=[LAND_CONSTANT])\nwater.data[water.data == LAND_CONSTANT] = np.nan\n\nstack(shade(terrain, cmap=['black', 'white'], how='linear'),\n shade(water, cmap=['aqua', 'white']))", "_____no_output_____" ] ], [ [ "## Full scene\n\nWe've now seen several of datashader's `geo` helper functions for working with elevation rasters.\n\nLet's make a full archipelago scene by stacking `terrain`, `water`, `hillshade`, and `tree_colorize` together into one output image: ", "_____no_output_____" ] ], [ [ "stack(shade(terrain, cmap=Elevation, how='linear'),\n shade(water, cmap=['aqua','white']),\n shade(dsgeo.hillshade(terrain + trees), cmap=['black', 'white'], how='linear', alpha=128),\n shade(tree_colorize, cmap='limegreen', how='linear'))", "_____no_output_____" ] ], [ [ "## Proximity\n\nThe `datashader.spatial.proximity` function operates on a given aggregate to produce a new distance aggregate based on target values and a distance metric. 
The values in the new aggregate will be the distance (according to the given metric) between each array cell (pixel) and the nearest target value in the source aggregate.\n\nA powerful feature of `proximity` is that you can target specific values in the aggregate for distance calculation, while others are ignored. Play with the `target_values` parameter below and see the difference of using `target_values=[1,2,3]` vs. `target_values=[2]` vs. `target_values=[3]`", "_____no_output_____" ], [ " ##### Load data and create `ds.Canvas`", "_____no_output_____" ] ], [ [ "from datashader.spatial import proximity\nfrom datashader.transfer_functions import dynspread\nfrom datashader.transfer_functions import set_background\n\ndf = pd.DataFrame({\n 'x': [-13, -11, -5,4, 9, 11, 18, 6],\n 'y': [-13, -5, 0, 10, 7, 2, 5, -5]\n})\n\ncvs = ds.Canvas(plot_width=W, plot_height=H,\n x_range=(-20, 20), y_range=(-20, 20))", "_____no_output_____" ] ], [ [ " ##### Create Proximity Aggregate\n \n - Use `Canvas.points` to create an `xarray.DataArray`\n - Calculate proximity to nearest non-nan / non-zero elements using `datashader.spatial.proximity`", "_____no_output_____" ] ], [ [ "points_agg = cvs.points(df, x='x', y='y')\npoints_shaded = dynspread(shade(points_agg, cmap=['salmon', 'salmon']),\n threshold=1,\n max_px=5)\nset_background(points_shaded, 'black')", "_____no_output_____" ] ], [ [ "##### Create proximity grid for all non-zero values", "_____no_output_____" ] ], [ [ "proximity_agg = proximity(points_agg)\n\nstack(shade(proximity_agg, cmap=['darkturquoise', 'black'], how='linear'),\n points_shaded)", "_____no_output_____" ], [ "line_agg = cvs.line(df, x='x', y='y')\nline_shaded = dynspread(shade(line_agg, cmap=['salmon', 'salmon']),\n threshold=1,\n max_px=2)\nset_background(line_shaded, 'black')", "_____no_output_____" ], [ "line_proximity = proximity(line_agg)\nstack(shade(line_proximity, cmap=['darkturquoise', 'black'], how='linear'),\n line_shaded)", "_____no_output_____" ] ], [ [ "##### Transform Proximity DataArray\nLike the other Datashader spatial tools, the result of `proximity` is an `xarray.DataArray` with a large API of potential transformations.\n\nBelow is an example of using `DataArray.where()` to apply a minimum distance and maximum distance.", "_____no_output_____" ] ], [ [ "where_clause = (line_proximity > 1) & (line_proximity < 1.1)\nproximity_shaded = shade(line_proximity.where(where_clause), cmap=['darkturquoise', 'darkturquoise'])\nproximity_shaded = set_background(proximity_shaded, 'black')\nstack(proximity_shaded, line_shaded)", "_____no_output_____" ] ], [ [ "## Viewshed\n\nThe `datashader.spatial.viewshed` function operates on a given aggregate to calculate the viewshed (the visible cells in the raster) for the given viewpoint (observer) location. \n\nThe visibility model is the following: Two cells are visible to each other if the line of sight that connects their centers does not intersect the terrain. 
If the line of sight does not pass through the cell center, elevation is determined using bilinear interpolation.", "_____no_output_____" ], [ "##### Simple Viewshed Example\n\n- The example below creates a datashader aggregate from a 2d normal distribution.\n- To calculate the viewshed, we need an observer location.\n- This location is indicated by the orange point in the upper-left of the plot.", "_____no_output_____" ] ], [ [ "from datashader.spatial import proximity\nfrom datashader.spatial import viewshed\n\nfrom datashader.transfer_functions import dynspread\nfrom datashader.transfer_functions import set_background\n\nOBSERVER_X = -12.5\nOBSERVER_Y = 10\n\nH = 400\nW = 400\n\ncanvas = ds.Canvas(plot_width=W, plot_height=H,\n x_range=(-20, 20), y_range=(-20, 20))\n\nnormal_df = pd.DataFrame({\n 'x': np.random.normal(.5, 1, 10000000),\n 'y': np.random.normal(.5, 1, 10000000)\n})\nnormal_agg = canvas.points(normal_df, 'x', 'y')\nnormal_agg.values = normal_agg.values.astype(\"float64\")\nnormal_shaded = shade(normal_agg)\n\nobserver_df = pd.DataFrame({'x': [OBSERVER_X], 'y': [OBSERVER_Y]})\nobserver_agg = canvas.points(observer_df, 'x', 'y')\nobserver_shaded = dynspread(shade(observer_agg, cmap=['orange']),\n threshold=1, max_px=4)\n\nnormal_illuminated = dsgeo.hillshade(normal_agg)\nnormal_illuminated_shaded = shade(normal_illuminated, cmap=['black', 'white'], \n alpha=128, how='linear')\n\nstack(normal_illuminated_shaded, observer_shaded)", "_____no_output_____" ] ], [ [ "##### Calculate viewshed using the observer location", "_____no_output_____" ] ], [ [ "# Will take some time to run...\n%time view = viewshed(normal_agg, x=OBSERVER_X, y=OBSERVER_Y)\n\nview_shaded = shade(view, cmap=['white', 'red'], alpha=128, how='linear')\n\nstack(normal_illuminated_shaded, observer_shaded, view_shaded) ", "_____no_output_____" ] ], [ [ "##### Viewshed on Terrain\n\n- Let's take the example above and apply it to our terrain aggregate.\n- Notice the use of the `observer_elev` argument, which is the height of the observer above the terrain.", "_____no_output_____" ] ], [ [ "from datashader.spatial import viewshed\n\nW = 600\nH = 400\n\ncvs = ds.Canvas(plot_width=W, plot_height=H, x_range=(-20e6, 20e6), y_range=(-20e6, 20e6))\nterrain = dsgeo.generate_terrain(cvs)\nterrain_shaded = shade(terrain, cmap=Elevation, alpha=128, how='linear')\n\nilluminated = dsgeo.hillshade(terrain)\n\nOBSERVER_X = 0.0\nOBSERVER_Y = 0.0\n\nobserver_df = pd.DataFrame({'x': [OBSERVER_X],'y': [OBSERVER_Y]})\nobserver_agg = cvs.points(observer_df, 'x', 'y')\nobserver_shaded = dynspread(shade(observer_agg, cmap=['orange']),\n threshold=1, max_px=4)\n\nstack(shade(illuminated, cmap=['black', 'white'], alpha=128, how='linear'),\n terrain_shaded,\n observer_shaded)", "_____no_output_____" ], [ "%time view = viewshed(terrain, x=OBSERVER_X, y=OBSERVER_Y, observer_elev=100)\n\nview_shaded = shade(view, cmap='fuchsia', how='linear')\nstack(shade(illuminated, cmap=['black', 'white'], alpha=128, how='linear'),\n terrain_shaded,\n view_shaded,\n observer_shaded)", "_____no_output_____" ] ], [ [ "The fuchsia areas are those visible to an observer of the given height at the indicated orange location.", "_____no_output_____" ], [ "## Zonal Statistics\n\nZonal statistics allows for calculating summary statistics for specific areas or *zones* within a datashader aggregate. Zones are defined by creating an integer aggregate where the cell values are zone_ids. 
The output of zonal statistics is a Pandas dataframe containing summary statistics for each zone based on a value raster.\n\nImagine the following scenario:\n- You are a hiker on a six-day trip.\n- The path for each day is defined by a line segment.\n- You wish to calculate the max and min elevations for each hiking segment as a Pandas dataframe based on an elevation dataset.", "_____no_output_____" ] ], [ [ "from datashader.colors import Set1\n\nW = 800\nH = 600\n\ncvs = ds.Canvas(plot_width=W, plot_height=H, x_range=(-20, 20), y_range=(-20, 20))\n\nterrain = dsgeo.generate_terrain(cvs)\nterrain_shaded = shade(terrain, cmap=Elevation, alpha=128, how='linear')\n\nilluminated = dsgeo.hillshade(terrain)\nilluminated_shaded = shade(illuminated, cmap=['gray', 'white'], alpha=255, how='linear')\n\nzone_df = pd.DataFrame({\n   'x': [-11, -5, 4, 12, 14, 18, 19],\n   'y': [-5, 4, 10, 13, 13, 13, 10],\n   'trail_segment_id': [11, 12, 13, 14, 15, 16, 17]\n})\n\nzones_agg = cvs.line(zone_df, 'x', 'y', ds.sum('trail_segment_id'))\nzones_agg.values = np.nan_to_num(zones_agg.values, copy=False).astype(int)\nzones_shaded = dynspread(shade(zones_agg, cmap=Set1), max_px=5)\n\nstack(illuminated_shaded, terrain_shaded, zones_shaded)", "_____no_output_____" ], [ "from datashader.spatial import zonal_stats\n\nzonal_stats(zones_agg, terrain)", "_____no_output_____" ] ], [ [ "##### Calculate custom stats for each zone", "_____no_output_____" ] ], [ [ "stats = dict(elevation_change=lambda zone: zone.max() - zone.min(),\n             elevation_min=np.min,\n             elevation_max=np.max)\n\nzonal_stats(zones_agg, terrain, stats)", "_____no_output_____" ] ], [ [ "Here the zones are defined by line segments, but they can be any spatial pattern, and in particular can be any region computable as a Datashader aggregate.\n\n\n### References\n- Burrough, P. A., and McDonell, R. A., 1998. Principles of Geographical Information Systems (Oxford University Press, New York), p. 406.\n- Making Maps with Noise Functions: https://www.redblobgames.com/maps/terrain-from-noise/\n- How Aspect Works: http://desktop.arcgis.com/en/arcmap/10.3/tools/spatial-analyst-toolbox/how-aspect-works.htm#ESRI_SECTION1_4198691F8852475A9F4BC71246579FAA", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7259e7a4c17277e0a963c9d186f104f2f725eae
299,962
ipynb
Jupyter Notebook
project-bikesharing/Your_first_neural_network.ipynb
fireis/deep-learning-v2-pytorch
a3a5f114add66ca19ebb41e22bbba89b28be2527
[ "MIT" ]
null
null
null
project-bikesharing/Your_first_neural_network.ipynb
fireis/deep-learning-v2-pytorch
a3a5f114add66ca19ebb41e22bbba89b28be2527
[ "MIT" ]
null
null
null
project-bikesharing/Your_first_neural_network.ipynb
fireis/deep-learning-v2-pytorch
a3a5f114add66ca19ebb41e22bbba89b28be2527
[ "MIT" ]
null
null
null
317.420106
159,232
0.910072
[ [ [ "# Your first neural network\n\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.\n\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Load and prepare the data\n\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "_____no_output_____" ] ], [ [ "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)", "_____no_output_____" ], [ "rides.head()", "_____no_output_____" ] ], [ [ "## Checking out the data\n\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.\n\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "_____no_output_____" ] ], [ [ "rides[:24*10].plot(x='dteday', y='cnt')", "_____no_output_____" ] ], [ [ "### Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.", "_____no_output_____" ] ], [ [ "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "_____no_output_____" ] ], [ [ "### Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. 
That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\n\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "_____no_output_____" ] ], [ [ "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "_____no_output_____" ] ], [ [ "### Splitting the data into training, testing, and validation sets\n\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.", "_____no_output_____" ] ], [ [ "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "_____no_output_____" ] ], [ [ "We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "_____no_output_____" ] ], [ [ "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "_____no_output_____" ] ], [ [ "## Time to build the network\n\nBelow you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n\n<img src=\"assets/neural_network.png\" width=300px>\n\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.\n\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We also use the weights to propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.\n\n> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. 
Set `self.activation_function` in `__init__` to your sigmoid function.\n2. Implement the forward pass in the `train` method.\n3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.\n4. Implement the forward pass in the `run` method.\n ", "_____no_output_____" ] ], [ [ "#############\n# In the my_answers.py file, fill out the TODO sections as specified\n#############\n\nfrom my_answers import NeuralNetwork", "_____no_output_____" ], [ "def MSE(y, Y):\n return np.mean((y-Y)**2)", "_____no_output_____" ] ], [ [ "## Unit tests\n\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.", "_____no_output_____" ] ], [ [ "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", ".....\n----------------------------------------------------------------------\nRan 5 tests in 0.013s\n\nOK\n" ] ], [ [ "## Training the network\n\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\n\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. 
This ends up training the network more efficiently. You'll learn more about SGD later.\n\n### Choose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.\n\n### Choose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n\n### Choose the number of hidden nodes\nIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. \n\nTry a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.", "_____no_output_____" ] ], [ [ "import sys\n\n####################\n### Set the hyperparameters in your my_answers.py file ###\n####################\n\nfrom my_answers import iterations, learning_rate, hidden_nodes, output_nodes\n\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)", "Progress: 100.0% ... Training loss: 0.062 ... 
Validation loss: 0.138" ], [ "plt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "_____no_output_____" ] ], [ [ "## Check out your predictions\n\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:10: DeprecationWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated\n # Remove the CWD from sys.path while we load stuff.\n" ] ], [ [ "## OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).\n \nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\n> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\n#### Your answer below", "_____no_output_____" ], [ "## Submitting:\nOpen up the 'jwt' file in the first-neural-network directory (which also contains this notebook) for submission instructions", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7259f7c197936ed09e86400bad07d739296f325
5,264
ipynb
Jupyter Notebook
notebooks/liftover-ribotricer-index-hg38-panTro3.ipynb
saketkc/re-ribo-smk
c9326cbafdfa060e22e9af692d9146c37f5035ba
[ "BSD-2-Clause" ]
1
2019-09-11T17:09:48.000Z
2019-09-11T17:09:48.000Z
notebooks/liftover-ribotricer-index-hg38-panTro3.ipynb
saketkc/re-ribo-smk
c9326cbafdfa060e22e9af692d9146c37f5035ba
[ "BSD-2-Clause" ]
null
null
null
notebooks/liftover-ribotricer-index-hg38-panTro3.ipynb
saketkc/re-ribo-smk
c9326cbafdfa060e22e9af692d9146c37f5035ba
[ "BSD-2-Clause" ]
null
null
null
28.454054
245
0.584726
[ [ [ "# Goal", "_____no_output_____" ], [ "We want to see how many ORFs in hg38 when lifted over to panTro3\nremain a potential ORF. The strategy is simple, lift hg38\nribotricer index (after converting it to bed) to panTro3.\nThis lifted over bed file after accounting for half interval\nbased coordinates in the bed can be matched up to the ribotricer panTro3\nindex", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport os\nimport re", "_____no_output_____" ], [ "hg38_ribotricer_index = '/home/cmb-06/as/skchoudh/genomes/hg38/ribotricer_v96_annotation_longest_candidate_orfs.tsv'\npantro3_ribotricer_index = '/home/cmb-06/as/skchoudh/genomes/panTro3/ribotricer_v96_annotation_longest_candidate_orfs.tsv'\n#mmul8_ribotricer_index = '/home/cmb-06/as/skchoudh/genomes/Mmul8/ribotricer_v96_annotation_longest_candidate_orfs.tsv'\n", "_____no_output_____" ], [ "hg38_index = pd.read_csv(hg38_ribotricer_index, sep='\\t')\npantro3_index = pd.read_csv(pantro3_ribotricer_index, sep='\\t')", "/home/cmb-06/as/skchoudh/software_frozen/anaconda37/envs/riboraptor/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3049: DtypeWarning: Columns (7) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n" ], [ "def parse_ribotricer_index(row):\n chrom = row.chrom\n strand = row.strand\n name = row.ORF_ID\n coordinates = row.coordinate.split(',')\n starts, ends = [], []\n for coordinate in coordinates:\n start, end = coordinate.split('-')\n start = int(start)\n end = int(end)\n # 0-based start\n start = start-1\n starts.append(start)\n ends.append(end)\n bedline = ''\n for start, end in zip(starts, ends):\n score = end-start\n bedline += '{}\\t{}\\t{}\\t{}\\t{}\\t{}\\n'.format(chrom, start, end, name, score, strand)\n return bedline", "_____no_output_____" ], [ "bedline = ''\nfor idx, row in hg38_index.iterrows():\n bedline+=parse_ribotricer_index(row)\n#hg38_index.head()#[['chrom', 'start', 'end', 'ORF_ID', 'length', 'strand']].head()", "_____no_output_____" ], [ "with open('/home/cmb-06/as/skchoudh/genomes/hg38/ribotricer_v96_annotation_longest_candidate_orfs.bed', 'w') as fh:\n fh.write(bedline)", "_____no_output_____" ], [ "!liftOver /home/cmb-06/as/skchoudh/genomes/hg38/ribotricer_v96_annotation_longest_candidate_orfs.bed \\\n/home/cmb-06/as/skchoudh/liftOver_chains/GRCh38ToPanTro3.over.chain \\\n/home/cmb-06/as/skchoudh/genomes/hg38/ribotricer_v96_annotation_longest_candidate_orfs.GRCh38ToPanTro3.bed \\\n/home/cmb-06/as/skchoudh/genomes/hg38/ribotricer_v96_annotation_longest_candidate_orfs.GRCh38ToPanTro3.unmapped.bed", "Reading liftover chains\nMapping coordinates\n" ], [ "bedline = ''\nfor idx, row in pantro3_index.iterrows():\n bedline+=parse_ribotricer_index(row)\n#hg38_index.head()#[['chrom', 'start', 'end', 'ORF_ID', 'length', 'strand']].head()", "_____no_output_____" ], [ "with open('/home/cmb-06/as/skchoudh/genomes/panTro3/ribotricer_v96_annotation_longest_candidate_orfs.bed', 'w') as fh:\n fh.write(bedline)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7259fb76a2a2a14b4c39d7272f0472ddc3e0fa8
4,296
ipynb
Jupyter Notebook
00_core_solns.ipynb
amykingapple26/nbdev_demo
526ac2288b3f4a84e881adc3f257cee5783cf569
[ "Apache-2.0" ]
null
null
null
00_core_solns.ipynb
amykingapple26/nbdev_demo
526ac2288b3f4a84e881adc3f257cee5783cf569
[ "Apache-2.0" ]
null
null
null
00_core_solns.ipynb
amykingapple26/nbdev_demo
526ac2288b3f4a84e881adc3f257cee5783cf569
[ "Apache-2.0" ]
null
null
null
22.492147
494
0.574721
[ [ [ "# 00_core_solns\n> Demo notebok for exploring nbdev\n\nIn this notebook, we explore the functionality of nbdev. If you're working on Google Colab, make sure you've already cloned (or copied) the repository onto your Google Drive. Note that the following line should be added somewhere in the notebook. It isn't explicitly coded here because nbdev will read this notebook and try to create a module (`core`) when there is already a `core` module from the previous notebook. We implement this in markdown here to avoid possible complexities.", "_____no_output_____" ], [ "`#default_exp core`", "_____no_output_____" ] ], [ [ "# all_no_test", "_____no_output_____" ] ], [ [ "## Compatibility steps for Google Colab\nThese steps are required to access these repo files (your code, built modules, etc) while editing your notebook of interest on Google Colab.", "_____no_output_____" ] ], [ [ "!pip install nbdev", "_____no_output_____" ], [ "from google.colab import drive\ndrive.mount('/content/drive')", "_____no_output_____" ], [ "%cd drive/MyDrive/nbdev_playground", "_____no_output_____" ] ], [ [ "## First time setup\nThe first time you start using the repository, you'll need to run the following commands to get everything working. We'll discuss what they are further in a moment.", "_____no_output_____" ] ], [ [ "!nbdev_install_git_hooks", "_____no_output_____" ], [ "!nbdev_clean_nbs", "_____no_output_____" ], [ "!nbdev_build_lib", "_____no_output_____" ], [ "!nbdev_build_docs", "_____no_output_____" ] ], [ [ "# Core functionality\nThe purpose of this demo notebook is to read from files and process them. Let's say for instance that we have a function that for some reason needs to append `.pdf` to a string that has been input. Note that we again make the markdown here for the export command so that nbdev does not try to export the following function.", "_____no_output_____" ], [ "`#export`", "_____no_output_____" ] ], [ [ "#create function that will append a file ending\ndef append_file_ending(in_str):\n return in_str + '.pdf'", "_____no_output_____" ] ], [ [ "Now, we perform a unit test on the function we've written...", "_____no_output_____" ] ], [ [ "#perform unit test\ntest_str = 'ThisNewFile'\nprint(append_file_ending(test_str))", "_____no_output_____" ] ], [ [ "# Building modules and cleaning notebooks", "_____no_output_____" ] ], [ [ "#clean metadata from notebooks\n!nbdev_clean_nbs", "_____no_output_____" ], [ "#build modules\n!nbdev_build_lib", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e725a30aa6a7e77a90ca7d5ad9ee0768decd74e3
256,837
ipynb
Jupyter Notebook
Analyses Functions WIP/.ipynb_checkpoints/Mood Analyses-checkpoint.ipynb
kjlafoll/kjlafoll-gen
37c488c23d3b7157590fae15d442c7b8463ed911
[ "MIT" ]
null
null
null
Analyses Functions WIP/.ipynb_checkpoints/Mood Analyses-checkpoint.ipynb
kjlafoll/kjlafoll-gen
37c488c23d3b7157590fae15d442c7b8463ed911
[ "MIT" ]
null
null
null
Analyses Functions WIP/.ipynb_checkpoints/Mood Analyses-checkpoint.ipynb
kjlafoll/kjlafoll-gen
37c488c23d3b7157590fae15d442c7b8463ed911
[ "MIT" ]
1
2018-07-26T00:12:30.000Z
2018-07-26T00:12:30.000Z
122.947343
92,736
0.782259
[ [ [ "import os\nimport pandas as pd\nimport numpy as np\nimport math\nimport statistics\nimport pathlib\nfrom more_itertools import unique_everseen\nimport seaborn\nimport matplotlib as mp\nfrom scipy import stats\nimport statsmodels.formula.api as smf\nimport statsmodels.api as sm\nfrom scipy.stats.stats import pearsonr", "//anaconda/envs/py35/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n return f(*args, **kwds)\n" ], [ "root = pathlib.Path(os.path.join(os.path.join(os.path.expanduser('~')), 'Desktop') + \"/SearchTool_Matches/\")\ncortdf = pd.read_excel(open(root / 'Cortisol/Cortisol_Analysis.xlsx',\"rb\"), index_col=False)\ncortdf['Log Mean (µg/dL)'] = [math.log(y,10) for y in cortdf['Mean (µg/dL)']]\ncortsubslist = list(unique_everseen(cortdf['Record ID'].astype(int)))\ncortsubslist.sort()\ndf = pd.read_csv(root / 'VAS/VAS-Analysis.csv', delimiter=',', index_col=False)\n\nvassubslist = [int(y[9:13]) for y in df['record_id']]", "_____no_output_____" ], [ "df['record_id'] = df['record_id'].str[9:].astype(int)\nvasalertlist = []; vassadlist = []; vastenselist = []; vaseffortlist = []; vashappylist = []; vaswearylist = []; vascalmlist = []; vassleepylist = []; vasglobalvigorlist = []; vasglobalaffectlist = []\nvasalertcortlist = []; vassadcortlist = []; vastensecortlist = []; vaseffortcortlist = []; vashappycortlist = []; vaswearycortlist = []; vascalmcortlist = []; vassleepycortlist = []; vasglobalvigorcortlist = []; vasglobalaffectcortlist = []\nvasalertmeanlist = []; vassadmeanlist = []; vastensemeanlist = []; vaseffortmeanlist = []; vaseffortmeanlist = []; vashappymeanlist = []; vaswearymeanlist = []; vascalmmeanlist = []; vassleepymeanlist = []; vasglobalvigormeanlist = []; vasglobalaffectmeanlist = []\nvasalertcortmeanlist = []; vassadcortmeanlist = []; vastensecortmeanlist = []; vaseffortcortmeanlist = []; vaseffortcortmeanlist = []; vashappycortmeanlist = []; vaswearycortmeanlist = []; vascalmcortmeanlist = []; vassleepycortmeanlist = []; vasglobalvigorcortmeanlist = []; vasglobalaffectcortmeanlist = []\nvashappycortstdlist = []\nmoodcortlist = []\n\nfor x in cortsubslist:\n moodcortlist.append(cortdf[cortdf['Record ID']==x].reset_index()['Mood Condition'][0])\n subrow = df[df['record_id'] == x].reset_index()\n if len(subrow.index) > 0:\n vasalertcortlist.append([x, [subrow['vas_alert_1'][0],subrow['vas_alert_2'][0],subrow['vas_mood_boost_alert'][0],subrow['vas_mood_boost_alert_2'][0],subrow['vas_mood_boost_alert_3'][0],subrow['vas_mood_boost_alert_4'][0],subrow['vas_mood_boost_alert_5'][0],subrow['vas_mood_boost_alert_6'][0],subrow['vas_mood_boost_alert_7'][0],subrow['vas_alert_3'][0]]])\n vassadcortlist.append([x, [subrow['vas_sad_1'][0],subrow['vas_sad_2'][0],subrow['vas_mood_boost_sad'][0],subrow['vas_mood_boost_sad_2'][0],subrow['vas_mood_boost_sad_3'][0],subrow['vas_mood_boost_sad_4'][0],subrow['vas_mood_boost_sad_5'][0],subrow['vas_mood_boost_sad_6'][0],subrow['vas_mood_boost_sad_7'][0],subrow['vas_sad_3'][0]]])\n vastensecortlist.append([x, [subrow['vas_tense_1'][0],subrow['vas_tense_2'][0],subrow['vas_mood_boost_tense'][0],subrow['vas_mood_boost_tense_2'][0],subrow['vas_mood_boost_tense_3'][0],subrow['vas_mood_boost_tense_4'][0],subrow['vas_mood_boost_tense_5'][0],subrow['vas_mood_boost_tense_6'][0],subrow['vas_mood_boost_tense_7'][0],subrow['vas_tense_3'][0]]])\n vaseffortcortlist.append([x, 
[subrow['vas_effort_1'][0],subrow['vas_effort_2'][0],subrow['vas_mood_boost_effort'][0],subrow['vas_mood_boost_effort_2'][0],subrow['vas_mood_boost_effort_3'][0],subrow['vas_mood_boost_effort_4'][0],subrow['vas_mood_boost_effort_5'][0],subrow['vas_mood_boost_effort_6'][0],subrow['vas_mood_boost_effort_7'][0],subrow['vas_effort_3'][0]]])\n vashappycortlist.append([x, [subrow['vas_happy_1'][0],subrow['vas_happy_2'][0],subrow['vas_mood_boost_happy'][0],subrow['vas_mood_boost_happy_2'][0],subrow['vas_mood_boost_happy_3'][0],subrow['vas_mood_boost_happy_4'][0],subrow['vas_mood_boost_happy_5'][0],subrow['vas_mood_boost_happy_6'][0],subrow['vas_mood_boost_happy_7'][0],subrow['vas_happy_3'][0]]])\n vaswearycortlist.append([x, [subrow['vas_weary_1'][0],subrow['vas_weary_2'][0],subrow['vas_mood_boost_weary'][0],subrow['vas_mood_boost_weary_2'][0],subrow['vas_mood_boost_weary_3'][0],subrow['vas_mood_boost_weary_4'][0],subrow['vas_mood_boost_weary_5'][0],subrow['vas_mood_boost_weary_6'][0],subrow['vas_mood_boost_weary_7'][0],subrow['vas_weary_3'][0]]])\n vascalmcortlist.append([x, [subrow['vas_calm_1'][0],subrow['vas_calm_2'][0],subrow['vas_mood_boost_calm'][0],subrow['vas_mood_boost_calm_2'][0],subrow['vas_mood_boost_calm_3'][0],subrow['vas_mood_boost_calm_4'][0],subrow['vas_mood_boost_calm_5'][0],subrow['vas_mood_boost_calm_6'][0],subrow['vas_mood_boost_calm_7'][0],subrow['vas_calm_3'][0]]])\n vassleepycortlist.append([x, [subrow['vas_sleepy_1'][0],subrow['vas_sleepy_2'][0],subrow['vas_mood_boost_sleepy'][0],subrow['vas_mood_boost_sleepy_2'][0],subrow['vas_mood_boost_sleepy_3'][0],subrow['vas_mood_boost_sleepy_4'][0],subrow['vas_mood_boost_sleepy_5'][0],subrow['vas_mood_boost_sleepy_6'][0],subrow['vas_mood_boost_sleepy_7'][0],subrow['vas_sleepy_3'][0]]])\n vasglobalvigorcortlist.append([x, [subrow['vas_global_vigor_1'][0],subrow['vas_global_vigor_2'][0],subrow['vas_mood_boost_global_vigor'][0],subrow['vas_mood_boost_global_vigor_2'][0],subrow['vas_mood_boost_global_vigor_3'][0],subrow['vas_mood_boost_global_vigor_4'][0],subrow['vas_mood_boost_global_vigor_5'][0],subrow['vas_mood_boost_global_vigor_6'][0],subrow['vas_mood_boost_global_vigor_7'][0],subrow['vas_global_vigor_3'][0]]])\n vasglobalaffectcortlist.append([x, [subrow['vas_global_affect_1'][0],subrow['vas_global_affect_2'][0],subrow['vas_mood_boost_global_affect'][0],subrow['vas_mood_boost_global_affect_2'][0],subrow['vas_mood_boost_global_affect_3'][0],subrow['vas_mood_boost_global_affect_4'][0],subrow['vas_mood_boost_global_affect_5'][0],subrow['vas_mood_boost_global_affect_6'][0],subrow['vas_mood_boost_global_affect_7'][0],subrow['vas_global_affect_3'][0]]])\nfor x in vasalertcortlist:\n vasalertcortmeanlist.append([x[0], np.nanmean(x[1])])\nfor x in vassadcortlist:\n vassadcortmeanlist.append([x[0], np.nanmean(x[1])])\nfor x in vastensecortlist: \n vastensecortmeanlist.append([x[0], np.nanmean(x[1])])\nfor x in vaseffortcortlist:\n vaseffortcortmeanlist.append([x[0], np.nanmean(x[1])])\nfor x in vashappycortlist:\n vashappycortmeanlist.append([x[0], np.nanmean(x[1])])\n vashappycortstdlist.append([x[0], np.nanstd(x[1])])\nfor x in vaswearycortlist:\n vaswearycortmeanlist.append([x[0], np.nanmean(x[1])])\nfor x in vascalmcortlist:\n vascalmcortmeanlist.append([x[0], np.nanmean(x[1])])\nfor x in vassleepycortlist:\n vassleepycortmeanlist.append([x[0], np.nanmean(x[1])])\nfor x in vasglobalvigorcortlist:\n vasglobalvigorcortmeanlist.append([x[0], np.nanmean(x[1])])\nfor x in vasglobalaffectcortlist:\n 
vasglobalaffectcortmeanlist.append([x[0], np.nanmean(x[1])])", "_____no_output_____" ], [ "logcortlist = []\nmeanvaslist = [vasalertcortmeanlist, vassadcortmeanlist, vastensecortmeanlist,\n vaseffortcortmeanlist,\n vashappycortmeanlist, vaswearycortmeanlist, vascalmcortmeanlist,\n vassleepycortmeanlist, vasglobalvigorcortmeanlist, vasglobalaffectcortmeanlist]\nmoodlist = []\ndeletelist = []\ndiftrierlist = []\nagelist = []; genderlist = []\nfor i, x in enumerate(cortsubslist):\n logcortlist.append([x, []])\n moodlist.append([x])\n agelist.append([x])\n genderlist.append([x])\n try:\n moodlist[i].append(cortdf.loc[cortdf['Record ID'] == int(x), 'Mood Condition'].iloc[0])\n agelist[i].append(cortdf.loc[cortdf['Record ID'] == int(x), 'Age'].iloc[0])\n genderlist[i].append(cortdf.loc[cortdf['Record ID'] == int(x), 'Gender'].iloc[0])\n for y in range(1,12):\n logcortlist[i][1].append(cortdf.loc[(cortdf['Record ID'] == int(x)) & (cortdf['General Time'] == y), 'Log Mean (µg/dL)'].iloc[0])\n except:\n deletelist.append(i)\nloop=0\nfor x in deletelist:\n logcortlist.pop(x-loop)\n genderlist.pop(x-loop)\n moodlist.pop(x-loop)\n for i, y in enumerate(meanvaslist):\n meanvaslist[i].pop(x-loop)\n vashappycortstdlist.pop(x-loop)\n loop +=1\ndiftrierlist = [i-j for i,j in zip([x[1][8] for x in logcortlist],[x[1][7] for x in logcortlist])]\ndiftrierlistz = list(stats.mstats.zscore(diftrierlist))\n\ndeletelist = []\nfor i, x in enumerate(diftrierlistz):\n if x > 3 or x < -3:\n deletelist.append(i) \n print(x)\nloop = 0\nfor x in deletelist:\n moodlist.pop(x-loop)\n genderlist.pop(x-loop)\n diftrierlistz.pop(x-loop)\n for i, y in enumerate(meanvaslist):\n meanvaslist[i].pop(x-loop)\n vashappycortstdlist.pop(x-loop)\n loop +=1\n", "_____no_output_____" ], [ "#Edit These\nVASPLOT = True\nCORTPLOT = False\nSAVE = False\nvar = [y[1] for y in vashappycortlist]\nvarlabel = \"Global Affect Score\"\npointlabel = \"VAS Assessment Point\"\ntitle = \"\"\ntitlefont = {'weight':'bold','size':18,}\nxlabelfont = {'weight':'bold','size':18,}\nylabelfont = {'weight':'bold','size':18,}\nfigureparams = {'size':(15,10),'labelpad':25,'scale':1.5,'capsize':.1,'legendloc':(.145, -.15),}\nsavename = \"Analyses/VAS/VASGlobalAffect-All.png\"\n\n#--------------------------------------------\nmp.pyplot.clf()\ntable = pd.DataFrame(\n {varlabel: [item for sublist in var for item in sublist],\n pointlabel: var[0]*len(var)\n })\nfig, lm = mp.pyplot.subplots(figsize=figureparams['size'])\nif VASPLOT == True:\n table['SubID'] = subslong\n nmoodlist = [y for y in moodcortlist]\n for x in list(unique_everseen(nmoodlist)):\n nmoodlist = [w.replace(x, x + ' (N=%s)' % nmoodlist.count(x)) for w in nmoodlist]\n table[pointlabel] = ['ES1','ES2','B1','B2','B3','B4','B5','B6','B7','ES3']*len(var)\n table['Mood'] = [item for item, count in zip(nmoodlist, [len(var[0])]*len(var)) for i in range(count)]\n lm = seaborn.pointplot(x=pointlabel, y=varlabel, data=table, hue=\"Mood\", palette=('b', 'r', 'g'), ci=80, scale=figureparams['scale'], capsize=figureparams['capsize'])\n lgd = lm.legend(bbox_to_anchor=figureparams['legendloc'], loc=2, borderaxespad=0., ncol=3, fontsize=16)\nif CORTPLOT == True:\n table['SubID'] = subslong\n nmoodlist = [y for y in moodcortlist]\n for x in list(unique_everseen(nmoodlist)):\n nmoodlist = [w.replace(x, x + ' (N=%s)' % nmoodlist.count(x)) for w in nmoodlist]\n table[pointlabel] = ['12:45','13:25','13:35','14:10','15:05','15:15','15:50','16:40','17:10','17:35', '18:05']*len(var)\n table['Mood'] = [item for item, count in 
zip(nmoodlist, [len(var[0])]*len(var)) for i in range(count)]\n lm = seaborn.pointplot(x=pointlabel, y=varlabel, data=table, hue=\"Mood\", palette=('b', 'r', 'g'), ci=80, scale=figureparams['scale'], capsize=figureparams['capsize'])\n for x in [1.5, 4.5, 7.5, 9.5]:\n mp.pyplot.plot([x, x], [.35, .15], linewidth=2.5, color='b' if x==9.5 else 'r', linestyle='dotted')\nfor axis in ['top','bottom','left','right']:\n lm.spines[axis].set_linewidth(2)\nlm.set_title(title, titlefont, loc='left')\nlm.set_ylabel(varlabel, ylabelfont, labelpad=figureparams['labelpad'])\nlm.set_xlabel(pointlabel, xlabelfont, labelpad=figureparams['labelpad'])\nlm.tick_params(labelsize=16, size=8, direction='out', width=2)\nif SAVE == True:\n fig.savefig(root / savename, bbox_extra_artists=(lgd,), bbox_inches='tight', transparent=False)\nif VASPLOT == True: \n table[pointlabel] = [1,2,3,4,5,6,7,8,9,10]*len(var)\nif CORTPLOT == True:\n table[pointlabel] = [1,2,3,4,5,6,7,8,9,10,11]*len(var)\ntable = table.rename(index=str, columns={varlabel: \"VASChange\", pointlabel: \"Time\"})\ntable = table[np.isfinite(table['VASChange'])]\ntable.to_csv(\"C:\\\\Users\\\\localadmin\\\\R Scripts\\\\R_repeatedm_linearmixed_dataset.csv\", sep=',', index=False)\nif VASPLOT == True:\n for x in list(unique_everseen(nmoodlist)):\n print(\"\\n\\nMixed Linear Model Output for %s\\n\" % x)\n md = smf.mixedlm(\"VASChange ~ Time + Mood + Time*Mood\", data=table[table['Mood']==x], groups=table[table['Mood']==x][\"SubID\"])\n mdf = md.fit()\n print(mdf.summary())", "//anaconda/envs/py35/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n" ], [ "scale = vassadcortmeanlist\nfor i, x in enumerate(scale):\n if x[0] in list(set([y[0] for y in scale]).difference([y[0] for y in moodlist])):\n vassadcortmeanlist.pop(i)", "_____no_output_____" ], [ "logdiflist = []\nfor i, x in enumerate(logcortlist):\n logdiflist.append(logcortlist[i][1][8] - logcortlist[i][1][7])\ndiftrierlistz = list(stats.mstats.zscore(logdiflist))", "_____no_output_____" ], [ "subslong = [item for item, count in zip(cortsubslist, [len(var[0])]*len(cortsubslist)) for i in range(count)]", "_____no_output_____" ], [ "#subslong = [item for item, count in zip(cortsubslist, [len(var[0])]*len(var)) for i in range(count)]\nfor i, x in enumerate(subslong):\n if x in list(set([y for y in subslong]).difference([y[0] for y in vashappycortlist])):\n print(i)\n print(x)\n subslong.pop(i)", "_____no_output_____" ], [ "#Edit These\nSAVE = False\nvar1 = diftrierlistz\nvar2 = [y[1] for y in vashappycortstdlist]\nvar1label = \"Log Cortisol Delta (z-score)\"\nvar2label = \"Average VAS Global Affect Score\"\ntitle = \"Log Cortisol Delta vs.\\nAverage VAS Global Affect Score (N=%s)\" % len(var1)\nsavename = \"Analyses/VAS/VASGlobalAffect-Stress.png\"\n\n#--------------------------------------------\ntable = pd.DataFrame(\n {var1label: var1,\n var2label: var2\n })\nfor x in table.columns:\n table = table[np.isfinite(table[x])]\n\nlm = seaborn.lmplot(x=var1label, y=var2label, palette=('r'), data=table, legend_out=False)\nax = mp.pyplot.gca()\nax.set_title(\"%s\\nr=%.4f, p=%.4f\" % 
(title,pearsonr(table[var1label],table[var2label])[0],pearsonr(table[var1label],table[var2label])[1]))\nprint(\"r = %s, p = %s\" % pearsonr(table[var1label],table[var2label]))\nif SAVE == True:\n lm.savefig(root / savename, bbox_inches='tight')", "r = -0.20839884983508597, p = 0.13047350596209947\n" ], [ "#Edit These\nSAVE = True\nvar1 = diftrierlistz\nvar2 = [y[1] for y in meanvaslist[9]]\ngroupvar = [y[1] for y in genderlist]\nMEANSPLIT = False\nvar1label = \"Pre-Post TSST Difference in Cortisol (µg/dL)\"\nvar2label = \"Average VAS Global Affect Score\"\nhighgrouplabel = \"High EQI Total Score\"\nlowgrouplabel = \"Low EQI Total Score\"\ntitle = \"Log Cortisol Delta vs.\\nAverage VAS Global Affect Score\"\nsavename = \"Analyses/VAS/Gender/VASGlobalAffect-Stress-GenderSplit.png\"\n\n#--------------------------------------------\ntable = pd.DataFrame(\n {var1label: var1,\n var2label: var2,\n 'z_raw': groupvar,\n 'z_group': groupvar\n })\n\ngrouplist = []\n\nfor i, x in enumerate(table.z_raw):\n if MEANSPLIT == True:\n if x > statistics.mean(groupvar):\n grouplist.append(highgrouplabel)\n else:\n grouplist.append(lowgrouplabel)\n else:\n grouplist.append(groupvar[i])\n\nfor x in list(unique_everseen(grouplist)):\n grouplist = [w.replace(x, x + ' (N=%s)' % grouplist.count(x)) for w in grouplist]\ntable['z_group'] = grouplist\n\nseaborn.set(rc={'figure.figsize':(300,300)})\n#seaborn.reset_orig()\n\nlm = seaborn.lmplot(x=var1label, y=var2label, hue = 'z_group', data=table, legend=False)\nax = mp.pyplot.gca()\nax.set_title(title)\nmp.pyplot.legend(bbox_to_anchor=(1, 1), loc=2)\n\nfor x in list(unique_everseen(grouplist)):\n print(\"%s\" % x + \" Group: r = %s, p = %s\" % (pearsonr(table[var1label][table['z_group'] == x],table[var2label][table['z_group'] == x])))\nif MEANSPLIT == True:\n print(\"Mean of Grouping Variable: %.4f\" % statistics.mean(groupvar))\nif SAVE == True:\n lm.savefig(root / savename, bbox_inches='tight')", "//anaconda/envs/py35/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n" ], [ "[y[1] for y in genderlist].count('F')", "_____no_output_____" ], [ "nummoodlist = []\nfor x in moodlist:\n if x[1] == 'Positive':\n nummoodlist.append(1)\n elif x[1] == 'Neutral':\n nummoodlist.append(2)\n elif x[1] == 'Negative':\n nummoodlist.append(3)", "_____no_output_____" ], [ "[y[0] for y in logcortlist]", "_____no_output_____" ], [ "moodlist", "_____no_output_____" ], [ "len(genderlist)", "_____no_output_____" ], [ "cortdf", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e725a786553e90320283d5684c0c9c3a07d19beb
1,666
ipynb
Jupyter Notebook
first-notebook.ipynb
nocibambi/jupyter-notebook-dev
21b868c11f251737410d90788bed21779844eb88
[ "MIT" ]
null
null
null
first-notebook.ipynb
nocibambi/jupyter-notebook-dev
21b868c11f251737410d90788bed21779844eb88
[ "MIT" ]
null
null
null
first-notebook.ipynb
nocibambi/jupyter-notebook-dev
21b868c11f251737410d90788bed21779844eb88
[ "MIT" ]
null
null
null
19.149425
45
0.537815
[ [ [ "# Introduction", "_____no_output_____" ] ], [ [ "print(\"Hello World\")", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
e725ce6aa563966df3815e9aa84b2536db2d5f92
788,990
ipynb
Jupyter Notebook
example/multi_condition_sample.ipynb
theislab/trvaep
8bfe2dc1553dc1a536113cffbe428d3eb9c96579
[ "MIT" ]
6
2020-06-02T00:00:33.000Z
2021-08-16T15:12:00.000Z
example/multi_condition_sample.ipynb
theislab/trvaep
8bfe2dc1553dc1a536113cffbe428d3eb9c96579
[ "MIT" ]
3
2020-02-15T06:35:22.000Z
2020-08-02T05:23:12.000Z
example/multi_condition_sample.ipynb
theislab/trvaep
8bfe2dc1553dc1a536113cffbe428d3eb9c96579
[ "MIT" ]
3
2020-02-15T06:54:57.000Z
2021-12-12T10:43:18.000Z
733.944186
220,820
0.927952
[ [ [ "import scanpy as sc\nimport sys\nsys.path.append(\"../\")\nimport trvaep\nimport numpy as np", "_____no_output_____" ] ], [ [ "## loading and preparing data ", "_____no_output_____" ] ], [ [ "adata = sc.read(\"../data/haber_count.h5ad\", backup_url=\"shorturl.at/imuEP\")\nsc.pp.normalize_per_cell(adata)\nsc.pp.log1p(adata)\nsc.pp.highly_variable_genes(adata, n_top_genes=2000)\nadata = adata[:, adata.var['highly_variable']]\nn_conditions = adata.obs[\"condition\"].unique().shape[0]\ncondition_key = \"condition\"", "_____no_output_____" ], [ "adata_train = adata[~((adata.obs[\"cell_label\"] == \"TA\")\n & (adata.obs[\"condition\"] == \"Hpoly.Day10\"))]", "_____no_output_____" ], [ "sc.pp.neighbors(adata_train)\nsc.tl.umap(adata_train)\n", "WARNING: You’re trying to run this on 2000 dimensions of `.X`, if you really want this, set `use_rep='X'`.\n Falling back to preprocessing with `sc.pp.pca` and default params.\n" ], [ "sc.pl.umap(adata_train, color=[\"condition\", \"cell_label\"], wspace=.4)", "_____no_output_____" ] ], [ [ "## creating model object ", "_____no_output_____" ] ], [ [ "model = trvaep.CVAE(adata_train.n_vars, num_classes=n_conditions,\n output_activation=\"ReLU\", encoder_layer_sizes=[64, 32], decoder_layer_sizes=[32, 64], use_mmd=True, beta=1)", "_____no_output_____" ] ], [ [ "### Make a trainer object \n", "_____no_output_____" ] ], [ [ "trainer = trvaep.Trainer(model, adata_train, condition_key=condition_key)", "_____no_output_____" ] ], [ [ "#### important: please consider using larger batch size (e.g. 1024, 2048) for datasets with more than two batches (i.e. conditions, studies) for better performance and avoid any computation error.", "_____no_output_____" ] ], [ [ "trainer.train_trvae(n_epochs=300, batch_size=512, early_patience=50)", "Epoch 00/300 Batch 0000/15, Loss: 388.9540, rec_loss: 387.3120, KL_loss: 0.0021, MMD_loss: 1.6400\nEpoch 00/300 Batch 0015/15, Loss: 233.0981, rec_loss: 231.4851, KL_loss: 0.0028, MMD_loss: 1.6102\nEpoch 01/300 Batch 0000/15, Loss: 226.2108, rec_loss: 224.6947, KL_loss: 0.0029, MMD_loss: 1.5131\nEpoch 01/300 Batch 0015/15, Loss: 184.3745, rec_loss: 183.0172, KL_loss: 0.0040, MMD_loss: 1.3533\nEpoch 02/300 Batch 0000/15, Loss: 187.5044, rec_loss: 186.1422, KL_loss: 0.0040, MMD_loss: 1.3582\nEpoch 02/300 Batch 0015/15, Loss: 160.1572, rec_loss: 158.7332, KL_loss: 0.0048, MMD_loss: 1.4192\nEpoch 03/300 Batch 0000/15, Loss: 161.5090, rec_loss: 160.1668, KL_loss: 0.0049, MMD_loss: 1.3373\nEpoch 03/300 Batch 0015/15, Loss: 149.5970, rec_loss: 148.3430, KL_loss: 0.0065, MMD_loss: 1.2475\nEpoch 04/300 Batch 0000/15, Loss: 146.4888, rec_loss: 145.2492, KL_loss: 0.0065, MMD_loss: 1.2331\nEpoch 04/300 Batch 0015/15, Loss: 133.3064, rec_loss: 131.8642, KL_loss: 0.0082, MMD_loss: 1.4341\nEpoch 05/300 Batch 0000/15, Loss: 132.8760, rec_loss: 131.6510, KL_loss: 0.0083, MMD_loss: 1.2167\nEpoch 05/300 Batch 0015/15, Loss: 125.9368, rec_loss: 124.7598, KL_loss: 0.0109, MMD_loss: 1.1662\nEpoch 06/300 Batch 0000/15, Loss: 125.5383, rec_loss: 124.4178, KL_loss: 0.0106, MMD_loss: 1.1099\nEpoch 06/300 Batch 0015/15, Loss: 125.3534, rec_loss: 124.2306, KL_loss: 0.0122, MMD_loss: 1.1106\nEpoch 07/300 Batch 0000/15, Loss: 118.5388, rec_loss: 117.2885, KL_loss: 0.0127, MMD_loss: 1.2375\nEpoch 07/300 Batch 0015/15, Loss: 121.8226, rec_loss: 120.8375, KL_loss: 0.0140, MMD_loss: 0.9711\nEpoch 08/300 Batch 0000/15, Loss: 117.2280, rec_loss: 116.1856, KL_loss: 0.0145, MMD_loss: 1.0278\nEpoch 08/300 Batch 0015/15, Loss: 115.1225, rec_loss: 114.1035, KL_loss: 
0.0173, MMD_loss: 1.0017\nEpoch 09/300 Batch 0000/15, Loss: 113.4403, rec_loss: 112.2518, KL_loss: 0.0162, MMD_loss: 1.1723\nEpoch 09/300 Batch 0015/15, Loss: 115.6096, rec_loss: 114.5371, KL_loss: 0.0175, MMD_loss: 1.0551\nEpoch 10/300 Batch 0000/15, Loss: 117.8211, rec_loss: 116.6365, KL_loss: 0.0182, MMD_loss: 1.1664\nEpoch 10/300 Batch 0015/15, Loss: 109.3953, rec_loss: 108.1537, KL_loss: 0.0195, MMD_loss: 1.2221\nEpoch 11/300 Batch 0000/15, Loss: 113.9833, rec_loss: 112.9695, KL_loss: 0.0199, MMD_loss: 0.9939\nEpoch 11/300 Batch 0015/15, Loss: 109.3150, rec_loss: 108.2208, KL_loss: 0.0215, MMD_loss: 1.0727\nEpoch 12/300 Batch 0000/15, Loss: 110.5718, rec_loss: 109.4711, KL_loss: 0.0202, MMD_loss: 1.0805\nEpoch 12/300 Batch 0015/15, Loss: 107.6259, rec_loss: 106.6547, KL_loss: 0.0220, MMD_loss: 0.9492\nEpoch 13/300 Batch 0000/15, Loss: 106.3605, rec_loss: 105.2821, KL_loss: 0.0225, MMD_loss: 1.0559\nEpoch 13/300 Batch 0015/15, Loss: 107.3077, rec_loss: 106.2209, KL_loss: 0.0236, MMD_loss: 1.0633\nEpoch 14/300 Batch 0000/15, Loss: 112.5205, rec_loss: 111.4908, KL_loss: 0.0240, MMD_loss: 1.0056\nEpoch 14/300 Batch 0015/15, Loss: 104.3965, rec_loss: 103.3148, KL_loss: 0.0253, MMD_loss: 1.0565\nEpoch 15/300 Batch 0000/15, Loss: 108.7772, rec_loss: 107.7707, KL_loss: 0.0254, MMD_loss: 0.9812\nEpoch 15/300 Batch 0015/15, Loss: 103.8460, rec_loss: 102.8916, KL_loss: 0.0269, MMD_loss: 0.9276\nEpoch 16/300 Batch 0000/15, Loss: 103.8789, rec_loss: 102.8998, KL_loss: 0.0266, MMD_loss: 0.9524\nEpoch 16/300 Batch 0015/15, Loss: 103.7652, rec_loss: 102.7331, KL_loss: 0.0283, MMD_loss: 1.0038\nEpoch 17/300 Batch 0000/15, Loss: 102.8496, rec_loss: 101.9009, KL_loss: 0.0285, MMD_loss: 0.9202\nEpoch 17/300 Batch 0015/15, Loss: 103.7155, rec_loss: 102.6402, KL_loss: 0.0292, MMD_loss: 1.0461\nEpoch 18/300 Batch 0000/15, Loss: 101.7910, rec_loss: 100.8737, KL_loss: 0.0296, MMD_loss: 0.8877\nEpoch 18/300 Batch 0015/15, Loss: 104.6859, rec_loss: 103.9105, KL_loss: 0.0298, MMD_loss: 0.7455\nEpoch 19/300 Batch 0000/15, Loss: 103.0755, rec_loss: 101.9458, KL_loss: 0.0315, MMD_loss: 1.0982\nEpoch 19/300 Batch 0015/15, Loss: 100.1365, rec_loss: 99.3377, KL_loss: 0.0333, MMD_loss: 0.7655\nEpoch 20/300 Batch 0000/15, Loss: 103.7283, rec_loss: 102.8521, KL_loss: 0.0314, MMD_loss: 0.8448\nEpoch 20/300 Batch 0015/15, Loss: 99.6207, rec_loss: 98.7867, KL_loss: 0.0345, MMD_loss: 0.7996\nEpoch 20, Loss_valid: 152.4705, rec_loss_valid: 138.7722, KL_loss_valid: 0.0475, MMD_loss: 8.5716 \nEpoch 21/300 Batch 0000/15, Loss: 97.9728, rec_loss: 97.1486, KL_loss: 0.0338, MMD_loss: 0.7904\nEpoch 21/300 Batch 0015/15, Loss: 101.8258, rec_loss: 100.9972, KL_loss: 0.0347, MMD_loss: 0.7939\nEpoch 22/300 Batch 0000/15, Loss: 101.0762, rec_loss: 100.2285, KL_loss: 0.0335, MMD_loss: 0.8142\nEpoch 22/300 Batch 0015/15, Loss: 100.8727, rec_loss: 100.1056, KL_loss: 0.0354, MMD_loss: 0.7316\nEpoch 23/300 Batch 0000/15, Loss: 99.9489, rec_loss: 99.0750, KL_loss: 0.0351, MMD_loss: 0.8389\nEpoch 23/300 Batch 0015/15, Loss: 102.9383, rec_loss: 102.1832, KL_loss: 0.0364, MMD_loss: 0.7187\nEpoch 24/300 Batch 0000/15, Loss: 98.4898, rec_loss: 97.6806, KL_loss: 0.0377, MMD_loss: 0.7715\nEpoch 24/300 Batch 0015/15, Loss: 99.8968, rec_loss: 99.0205, KL_loss: 0.0387, MMD_loss: 0.8376\nEpoch 25/300 Batch 0000/15, Loss: 98.6542, rec_loss: 97.8880, KL_loss: 0.0386, MMD_loss: 0.7276\nEpoch 25/300 Batch 0015/15, Loss: 101.1164, rec_loss: 100.3963, KL_loss: 0.0391, MMD_loss: 0.6809\nEpoch 26/300 Batch 0000/15, Loss: 99.0320, rec_loss: 98.2326, KL_loss: 
0.0398, MMD_loss: 0.7597\nEpoch 26/300 Batch 0015/15, Loss: 99.2768, rec_loss: 98.5863, KL_loss: 0.0413, MMD_loss: 0.6492\nEpoch 27/300 Batch 0000/15, Loss: 97.5860, rec_loss: 96.8916, KL_loss: 0.0408, MMD_loss: 0.6536\nEpoch 27/300 Batch 0015/15, Loss: 102.5444, rec_loss: 101.9310, KL_loss: 0.0414, MMD_loss: 0.5719\nEpoch 28/300 Batch 0000/15, Loss: 102.3181, rec_loss: 101.5902, KL_loss: 0.0412, MMD_loss: 0.6866\nEpoch 28/300 Batch 0015/15, Loss: 96.3414, rec_loss: 95.6174, KL_loss: 0.0416, MMD_loss: 0.6824\nEpoch 29/300 Batch 0000/15, Loss: 97.2571, rec_loss: 96.5895, KL_loss: 0.0422, MMD_loss: 0.6254\nEpoch 29/300 Batch 0015/15, Loss: 101.9391, rec_loss: 101.2105, KL_loss: 0.0424, MMD_loss: 0.6862\nEpoch 30/300 Batch 0000/15, Loss: 96.9122, rec_loss: 96.2445, KL_loss: 0.0443, MMD_loss: 0.6235\nEpoch 30/300 Batch 0015/15, Loss: 96.2947, rec_loss: 95.6894, KL_loss: 0.0436, MMD_loss: 0.5617\nEpoch 31/300 Batch 0000/15, Loss: 98.8109, rec_loss: 98.0892, KL_loss: 0.0456, MMD_loss: 0.6761\nEpoch 31/300 Batch 0015/15, Loss: 96.0713, rec_loss: 95.4416, KL_loss: 0.0462, MMD_loss: 0.5835\nEpoch 32/300 Batch 0000/15, Loss: 95.7035, rec_loss: 95.0565, KL_loss: 0.0459, MMD_loss: 0.6012\nEpoch 32/300 Batch 0015/15, Loss: 100.8639, rec_loss: 100.2247, KL_loss: 0.0454, MMD_loss: 0.5938\nEpoch 33/300 Batch 0000/15, Loss: 94.8681, rec_loss: 94.1860, KL_loss: 0.0470, MMD_loss: 0.6350\nEpoch 33/300 Batch 0015/15, Loss: 99.4269, rec_loss: 98.7502, KL_loss: 0.0485, MMD_loss: 0.6282\nEpoch 34/300 Batch 0000/15, Loss: 96.2527, rec_loss: 95.6205, KL_loss: 0.0480, MMD_loss: 0.5841\nEpoch 34/300 Batch 0015/15, Loss: 96.0572, rec_loss: 95.3946, KL_loss: 0.0489, MMD_loss: 0.6137\nEpoch 35/300 Batch 0000/15, Loss: 97.6811, rec_loss: 97.1085, KL_loss: 0.0491, MMD_loss: 0.5235\nEpoch 35/300 Batch 0015/15, Loss: 95.5990, rec_loss: 95.0248, KL_loss: 0.0500, MMD_loss: 0.5242\nEpoch 36/300 Batch 0000/15, Loss: 95.2522, rec_loss: 94.6580, KL_loss: 0.0508, MMD_loss: 0.5434\nEpoch 36/300 Batch 0015/15, Loss: 96.5723, rec_loss: 95.8620, KL_loss: 0.0522, MMD_loss: 0.6582\nEpoch 37/300 Batch 0000/15, Loss: 98.8443, rec_loss: 98.2113, KL_loss: 0.0503, MMD_loss: 0.5826\nEpoch 37/300 Batch 0015/15, Loss: 96.1596, rec_loss: 95.6658, KL_loss: 0.0527, MMD_loss: 0.4411\nEpoch 38/300 Batch 0000/15, Loss: 94.9612, rec_loss: 94.2686, KL_loss: 0.0517, MMD_loss: 0.6409\nEpoch 38/300 Batch 0015/15, Loss: 98.5880, rec_loss: 98.0014, KL_loss: 0.0534, MMD_loss: 0.5332\nEpoch 39/300 Batch 0000/15, Loss: 96.9878, rec_loss: 96.3081, KL_loss: 0.0519, MMD_loss: 0.6277\nEpoch 39/300 Batch 0015/15, Loss: 93.0073, rec_loss: 92.5086, KL_loss: 0.0544, MMD_loss: 0.4443\nEpoch 40/300 Batch 0000/15, Loss: 91.6633, rec_loss: 91.0413, KL_loss: 0.0542, MMD_loss: 0.5678\nEpoch 40/300 Batch 0015/15, Loss: 92.0270, rec_loss: 91.4154, KL_loss: 0.0567, MMD_loss: 0.5550\nEpoch 40, Loss_valid: 139.9258, rec_loss_valid: 129.7552, KL_loss_valid: 0.0767, MMD_loss: 7.1165 \nEpoch 41/300 Batch 0000/15, Loss: 95.7662, rec_loss: 95.1298, KL_loss: 0.0534, MMD_loss: 0.5830\nEpoch 41/300 Batch 0015/15, Loss: 96.2324, rec_loss: 95.6190, KL_loss: 0.0560, MMD_loss: 0.5573\nEpoch 42/300 Batch 0000/15, Loss: 97.5008, rec_loss: 96.9811, KL_loss: 0.0576, MMD_loss: 0.4621\nEpoch 42/300 Batch 0015/15, Loss: 99.2509, rec_loss: 98.6990, KL_loss: 0.0571, MMD_loss: 0.4948\nEpoch 43/300 Batch 0000/15, Loss: 94.9210, rec_loss: 94.3867, KL_loss: 0.0584, MMD_loss: 0.4758\nEpoch 43/300 Batch 0015/15, Loss: 97.9498, rec_loss: 97.4230, KL_loss: 0.0589, MMD_loss: 0.4679\nEpoch 44/300 Batch 
0000/15, Loss: 93.4733, rec_loss: 92.8616, KL_loss: 0.0589, MMD_loss: 0.5528\nEpoch 44/300 Batch 0015/15, Loss: 93.5125, rec_loss: 92.9836, KL_loss: 0.0601, MMD_loss: 0.4688\nEpoch 45/300 Batch 0000/15, Loss: 92.7321, rec_loss: 92.2407, KL_loss: 0.0582, MMD_loss: 0.4332\nEpoch 45/300 Batch 0015/15, Loss: 92.9617, rec_loss: 92.3764, KL_loss: 0.0604, MMD_loss: 0.5248\nEpoch 46/300 Batch 0000/15, Loss: 92.3981, rec_loss: 91.8104, KL_loss: 0.0610, MMD_loss: 0.5268\nEpoch 46/300 Batch 0015/15, Loss: 97.2236, rec_loss: 96.7346, KL_loss: 0.0637, MMD_loss: 0.4253\nEpoch 47/300 Batch 0000/15, Loss: 94.5074, rec_loss: 93.9385, KL_loss: 0.0607, MMD_loss: 0.5082\nEpoch 47/300 Batch 0015/15, Loss: 96.9680, rec_loss: 96.5450, KL_loss: 0.0620, MMD_loss: 0.3609\nEpoch 48/300 Batch 0000/15, Loss: 89.6372, rec_loss: 89.1122, KL_loss: 0.0630, MMD_loss: 0.4620\nEpoch 48/300 Batch 0015/15, Loss: 91.2100, rec_loss: 90.7677, KL_loss: 0.0645, MMD_loss: 0.3777\nEpoch 49/300 Batch 0000/15, Loss: 92.8206, rec_loss: 92.2682, KL_loss: 0.0646, MMD_loss: 0.4878\nEpoch 49/300 Batch 0015/15, Loss: 91.7414, rec_loss: 91.1560, KL_loss: 0.0640, MMD_loss: 0.5214\nEpoch 50/300 Batch 0000/15, Loss: 94.1428, rec_loss: 93.6626, KL_loss: 0.0630, MMD_loss: 0.4172\nEpoch 50/300 Batch 0015/15, Loss: 94.4778, rec_loss: 94.0603, KL_loss: 0.0653, MMD_loss: 0.3522\nEpoch 51/300 Batch 0000/15, Loss: 93.6700, rec_loss: 93.1469, KL_loss: 0.0659, MMD_loss: 0.4572\nEpoch 51/300 Batch 0015/15, Loss: 93.7240, rec_loss: 93.0918, KL_loss: 0.0674, MMD_loss: 0.5648\nEpoch 52/300 Batch 0000/15, Loss: 93.0067, rec_loss: 92.4363, KL_loss: 0.0676, MMD_loss: 0.5028\nEpoch 52/300 Batch 0015/15, Loss: 96.0216, rec_loss: 95.5818, KL_loss: 0.0682, MMD_loss: 0.3716\nEpoch 53/300 Batch 0000/15, Loss: 91.7707, rec_loss: 91.2492, KL_loss: 0.0660, MMD_loss: 0.4555\nEpoch 53/300 Batch 0015/15, Loss: 90.6525, rec_loss: 90.1831, KL_loss: 0.0679, MMD_loss: 0.4015\nEpoch 54/300 Batch 0000/15, Loss: 94.1411, rec_loss: 93.6942, KL_loss: 0.0672, MMD_loss: 0.3798\nEpoch 54/300 Batch 0015/15, Loss: 92.6665, rec_loss: 91.9768, KL_loss: 0.0689, MMD_loss: 0.6209\nEpoch 55/300 Batch 0000/15, Loss: 91.8499, rec_loss: 91.3571, KL_loss: 0.0698, MMD_loss: 0.4231\nEpoch 55/300 Batch 0015/15, Loss: 88.2741, rec_loss: 87.7710, KL_loss: 0.0691, MMD_loss: 0.4339\nEpoch 56/300 Batch 0000/15, Loss: 94.5690, rec_loss: 94.0643, KL_loss: 0.0707, MMD_loss: 0.4340\nEpoch 56/300 Batch 0015/15, Loss: 92.6941, rec_loss: 92.2198, KL_loss: 0.0711, MMD_loss: 0.4033\nEpoch 57/300 Batch 0000/15, Loss: 93.8389, rec_loss: 93.2952, KL_loss: 0.0728, MMD_loss: 0.4709\nEpoch 57/300 Batch 0015/15, Loss: 89.2589, rec_loss: 88.7744, KL_loss: 0.0713, MMD_loss: 0.4131\nEpoch 58/300 Batch 0000/15, Loss: 93.0163, rec_loss: 92.4687, KL_loss: 0.0730, MMD_loss: 0.4746\nEpoch 58/300 Batch 0015/15, Loss: 93.5987, rec_loss: 93.1535, KL_loss: 0.0733, MMD_loss: 0.3719\nEpoch 59/300 Batch 0000/15, Loss: 92.7984, rec_loss: 92.3051, KL_loss: 0.0734, MMD_loss: 0.4199\nEpoch 59/300 Batch 0015/15, Loss: 92.5970, rec_loss: 92.1633, KL_loss: 0.0740, MMD_loss: 0.3597\nEpoch 60/300 Batch 0000/15, Loss: 92.8855, rec_loss: 92.4460, KL_loss: 0.0736, MMD_loss: 0.3659\nEpoch 60/300 Batch 0015/15, Loss: 91.7478, rec_loss: 91.2812, KL_loss: 0.0742, MMD_loss: 0.3924\nEpoch 60, Loss_valid: 134.1593, rec_loss_valid: 125.8761, KL_loss_valid: 0.1027, MMD_loss: 5.1215 \nEpoch 61/300 Batch 0000/15, Loss: 95.4177, rec_loss: 94.9260, KL_loss: 0.0755, MMD_loss: 0.4161\nEpoch 61/300 Batch 0015/15, Loss: 92.2241, rec_loss: 91.7585, KL_loss: 0.0783, 
MMD_loss: 0.3872\nEpoch 62/300 Batch 0000/15, Loss: 93.3021, rec_loss: 92.7413, KL_loss: 0.0752, MMD_loss: 0.4855\nEpoch 62/300 Batch 0015/15, Loss: 94.4946, rec_loss: 93.9923, KL_loss: 0.0770, MMD_loss: 0.4253\nEpoch 63/300 Batch 0000/15, Loss: 91.2771, rec_loss: 90.8681, KL_loss: 0.0766, MMD_loss: 0.3325\nEpoch 63/300 Batch 0015/15, Loss: 91.2541, rec_loss: 90.8367, KL_loss: 0.0764, MMD_loss: 0.3411\nEpoch 64/300 Batch 0000/15, Loss: 91.9053, rec_loss: 91.3075, KL_loss: 0.0784, MMD_loss: 0.5194\nEpoch 64/300 Batch 0015/15, Loss: 89.3330, rec_loss: 88.9184, KL_loss: 0.0809, MMD_loss: 0.3336\nEpoch 65/300 Batch 0000/15, Loss: 94.1233, rec_loss: 93.6512, KL_loss: 0.0778, MMD_loss: 0.3943\nEpoch 65/300 Batch 0015/15, Loss: 93.6249, rec_loss: 93.2032, KL_loss: 0.0765, MMD_loss: 0.3451\nEpoch 66/300 Batch 0000/15, Loss: 92.5460, rec_loss: 92.0680, KL_loss: 0.0782, MMD_loss: 0.3998\nEpoch 66/300 Batch 0015/15, Loss: 92.7501, rec_loss: 92.3963, KL_loss: 0.0827, MMD_loss: 0.2710\nEpoch 67/300 Batch 0000/15, Loss: 93.3277, rec_loss: 92.9298, KL_loss: 0.0800, MMD_loss: 0.3179\nEpoch 67/300 Batch 0015/15, Loss: 89.2257, rec_loss: 88.8194, KL_loss: 0.0798, MMD_loss: 0.3265\nEpoch 68/300 Batch 0000/15, Loss: 94.1738, rec_loss: 93.7309, KL_loss: 0.0794, MMD_loss: 0.3634\nEpoch 68/300 Batch 0015/15, Loss: 92.5483, rec_loss: 92.1653, KL_loss: 0.0820, MMD_loss: 0.3010\nEpoch 69/300 Batch 0000/15, Loss: 90.9699, rec_loss: 90.5390, KL_loss: 0.0839, MMD_loss: 0.3471\nEpoch 69/300 Batch 0015/15, Loss: 92.4141, rec_loss: 91.9094, KL_loss: 0.0800, MMD_loss: 0.4247\nEpoch 70/300 Batch 0000/15, Loss: 91.0081, rec_loss: 90.5195, KL_loss: 0.0830, MMD_loss: 0.4057\nEpoch 70/300 Batch 0015/15, Loss: 96.0755, rec_loss: 95.5812, KL_loss: 0.0822, MMD_loss: 0.4121\nEpoch 71/300 Batch 0000/15, Loss: 90.4734, rec_loss: 90.0666, KL_loss: 0.0815, MMD_loss: 0.3253\nEpoch 71/300 Batch 0015/15, Loss: 93.1048, rec_loss: 92.6057, KL_loss: 0.0831, MMD_loss: 0.4159\nEpoch 72/300 Batch 0000/15, Loss: 96.1522, rec_loss: 95.7036, KL_loss: 0.0839, MMD_loss: 0.3647\nEpoch 72/300 Batch 0015/15, Loss: 93.4172, rec_loss: 92.9925, KL_loss: 0.0856, MMD_loss: 0.3391\nEpoch 73/300 Batch 0000/15, Loss: 89.3970, rec_loss: 88.9511, KL_loss: 0.0846, MMD_loss: 0.3614\nEpoch 73/300 Batch 0015/15, Loss: 93.7914, rec_loss: 93.3258, KL_loss: 0.0843, MMD_loss: 0.3813\nEpoch 74/300 Batch 0000/15, Loss: 89.6929, rec_loss: 89.2347, KL_loss: 0.0874, MMD_loss: 0.3708\nEpoch 74/300 Batch 0015/15, Loss: 89.9258, rec_loss: 89.4855, KL_loss: 0.0856, MMD_loss: 0.3547\nEpoch 75/300 Batch 0000/15, Loss: 89.8210, rec_loss: 89.4416, KL_loss: 0.0880, MMD_loss: 0.2914\nEpoch 75/300 Batch 0015/15, Loss: 94.1168, rec_loss: 93.7086, KL_loss: 0.0871, MMD_loss: 0.3211\nEpoch 76/300 Batch 0000/15, Loss: 91.2257, rec_loss: 90.8037, KL_loss: 0.0885, MMD_loss: 0.3335\nEpoch 76/300 Batch 0015/15, Loss: 91.9996, rec_loss: 91.4150, KL_loss: 0.0862, MMD_loss: 0.4984\nEpoch 77/300 Batch 0000/15, Loss: 91.5255, rec_loss: 91.1113, KL_loss: 0.0881, MMD_loss: 0.3261\nEpoch 77/300 Batch 0015/15, Loss: 90.5517, rec_loss: 90.1634, KL_loss: 0.0874, MMD_loss: 0.3009\nEpoch 78/300 Batch 0000/15, Loss: 88.9422, rec_loss: 88.5532, KL_loss: 0.0877, MMD_loss: 0.3012\nEpoch 78/300 Batch 0015/15, Loss: 95.0782, rec_loss: 94.6333, KL_loss: 0.0872, MMD_loss: 0.3577\nEpoch 79/300 Batch 0000/15, Loss: 87.9678, rec_loss: 87.5572, KL_loss: 0.0945, MMD_loss: 0.3162\nEpoch 79/300 Batch 0015/15, Loss: 91.1911, rec_loss: 90.7658, KL_loss: 0.0912, MMD_loss: 0.3340\nEpoch 80/300 Batch 0000/15, Loss: 91.8428, 
rec_loss: 91.4381, KL_loss: 0.0910, MMD_loss: 0.3138\nEpoch 80/300 Batch 0015/15, Loss: 89.3939, rec_loss: 89.0102, KL_loss: 0.0918, MMD_loss: 0.2919\nEpoch 80, Loss_valid: 131.1122, rec_loss_valid: 124.2599, KL_loss_valid: 0.1255, MMD_loss: 5.0990 \nEpoch 81/300 Batch 0000/15, Loss: 91.5512, rec_loss: 91.0955, KL_loss: 0.0911, MMD_loss: 0.3646\nEpoch 81/300 Batch 0015/15, Loss: 89.0811, rec_loss: 88.6714, KL_loss: 0.0937, MMD_loss: 0.3160\nEpoch 82/300 Batch 0000/15, Loss: 91.6422, rec_loss: 91.2143, KL_loss: 0.0917, MMD_loss: 0.3361\nEpoch 82/300 Batch 0015/15, Loss: 94.1389, rec_loss: 93.7262, KL_loss: 0.0933, MMD_loss: 0.3194\nEpoch 83/300 Batch 0000/15, Loss: 91.9809, rec_loss: 91.5998, KL_loss: 0.0923, MMD_loss: 0.2887\nEpoch 83/300 Batch 0015/15, Loss: 92.3535, rec_loss: 91.8716, KL_loss: 0.0952, MMD_loss: 0.3867\nEpoch 84/300 Batch 0000/15, Loss: 89.7356, rec_loss: 89.3484, KL_loss: 0.0945, MMD_loss: 0.2926\nEpoch 84/300 Batch 0015/15, Loss: 93.0773, rec_loss: 92.7189, KL_loss: 0.0923, MMD_loss: 0.2661\nEpoch 85/300 Batch 0000/15, Loss: 90.7611, rec_loss: 90.2848, KL_loss: 0.0942, MMD_loss: 0.3821\nEpoch 85/300 Batch 0015/15, Loss: 90.2413, rec_loss: 89.8512, KL_loss: 0.0946, MMD_loss: 0.2954\nEpoch 86/300 Batch 0000/15, Loss: 91.0277, rec_loss: 90.5886, KL_loss: 0.0929, MMD_loss: 0.3461\nEpoch 86/300 Batch 0015/15, Loss: 90.9849, rec_loss: 90.5756, KL_loss: 0.0926, MMD_loss: 0.3167\nEpoch 87/300 Batch 0000/15, Loss: 85.8787, rec_loss: 85.4671, KL_loss: 0.0941, MMD_loss: 0.3175\nEpoch 87/300 Batch 0015/15, Loss: 89.6560, rec_loss: 89.1865, KL_loss: 0.0939, MMD_loss: 0.3756\nEpoch 88/300 Batch 0000/15, Loss: 88.7800, rec_loss: 88.3320, KL_loss: 0.0980, MMD_loss: 0.3500\nEpoch 88/300 Batch 0015/15, Loss: 90.0169, rec_loss: 89.6208, KL_loss: 0.0977, MMD_loss: 0.2984\nEpoch 89/300 Batch 0000/15, Loss: 89.0271, rec_loss: 88.6771, KL_loss: 0.0956, MMD_loss: 0.2545\nEpoch 89/300 Batch 0015/15, Loss: 87.3665, rec_loss: 86.8414, KL_loss: 0.0969, MMD_loss: 0.4282\nEpoch 90/300 Batch 0000/15, Loss: 89.0134, rec_loss: 88.5822, KL_loss: 0.0939, MMD_loss: 0.3373\nEpoch 90/300 Batch 0015/15, Loss: 89.4859, rec_loss: 89.1114, KL_loss: 0.0927, MMD_loss: 0.2818\nEpoch 91/300 Batch 0000/15, Loss: 89.8488, rec_loss: 89.4461, KL_loss: 0.0959, MMD_loss: 0.3068\nEpoch 91/300 Batch 0015/15, Loss: 90.0553, rec_loss: 89.6065, KL_loss: 0.0985, MMD_loss: 0.3502\nEpoch 92/300 Batch 0000/15, Loss: 89.9612, rec_loss: 89.5611, KL_loss: 0.0985, MMD_loss: 0.3016\nEpoch 92/300 Batch 0015/15, Loss: 90.3577, rec_loss: 89.9429, KL_loss: 0.1001, MMD_loss: 0.3146\nEpoch 93/300 Batch 0000/15, Loss: 93.7081, rec_loss: 93.3163, KL_loss: 0.0969, MMD_loss: 0.2950\nEpoch 93/300 Batch 0015/15, Loss: 89.3630, rec_loss: 88.9810, KL_loss: 0.0947, MMD_loss: 0.2873\nEpoch 94/300 Batch 0000/15, Loss: 90.6467, rec_loss: 90.2616, KL_loss: 0.0991, MMD_loss: 0.2860\nEpoch 94/300 Batch 0015/15, Loss: 89.7288, rec_loss: 89.3812, KL_loss: 0.0992, MMD_loss: 0.2484\nEpoch 95/300 Batch 0000/15, Loss: 89.3269, rec_loss: 88.9480, KL_loss: 0.1005, MMD_loss: 0.2784\nEpoch 95/300 Batch 0015/15, Loss: 88.7673, rec_loss: 88.3979, KL_loss: 0.1035, MMD_loss: 0.2659\nEpoch 96/300 Batch 0000/15, Loss: 91.4750, rec_loss: 91.1085, KL_loss: 0.1013, MMD_loss: 0.2653\nEpoch 96/300 Batch 0015/15, Loss: 87.8191, rec_loss: 87.4044, KL_loss: 0.1015, MMD_loss: 0.3133\nEpoch 97/300 Batch 0000/15, Loss: 90.0185, rec_loss: 89.5151, KL_loss: 0.1055, MMD_loss: 0.3979\nEpoch 97/300 Batch 0015/15, Loss: 94.3643, rec_loss: 93.9944, KL_loss: 0.1009, MMD_loss: 0.2690\nEpoch 
98/300 Batch 0000/15, Loss: 90.0230, rec_loss: 89.6199, KL_loss: 0.1029, MMD_loss: 0.3003\nEpoch 98/300 Batch 0015/15, Loss: 93.6980, rec_loss: 93.3611, KL_loss: 0.1042, MMD_loss: 0.2327\nEpoch 99/300 Batch 0000/15, Loss: 90.6788, rec_loss: 90.2821, KL_loss: 0.1016, MMD_loss: 0.2952\nEpoch 99/300 Batch 0015/15, Loss: 86.1476, rec_loss: 85.6945, KL_loss: 0.1042, MMD_loss: 0.3490\nEpoch 100/300 Batch 0000/15, Loss: 91.2125, rec_loss: 90.7201, KL_loss: 0.1058, MMD_loss: 0.3866\nEpoch 100/300 Batch 0015/15, Loss: 91.0489, rec_loss: 90.6587, KL_loss: 0.1045, MMD_loss: 0.2857\nEpoch 100, Loss_valid: 128.5286, rec_loss_valid: 122.1860, KL_loss_valid: 0.1415, MMD_loss: 3.7760 \nEpoch 101/300 Batch 0000/15, Loss: 90.5503, rec_loss: 90.2031, KL_loss: 0.1030, MMD_loss: 0.2443\nEpoch 101/300 Batch 0015/15, Loss: 88.6107, rec_loss: 88.2547, KL_loss: 0.1036, MMD_loss: 0.2524\nEpoch 102/300 Batch 0000/15, Loss: 92.2054, rec_loss: 91.8018, KL_loss: 0.1034, MMD_loss: 0.3002\nEpoch 102/300 Batch 0015/15, Loss: 87.3683, rec_loss: 86.7653, KL_loss: 0.1018, MMD_loss: 0.5012\nEpoch 103/300 Batch 0000/15, Loss: 88.3684, rec_loss: 88.0334, KL_loss: 0.1013, MMD_loss: 0.2337\nEpoch 103/300 Batch 0015/15, Loss: 88.2429, rec_loss: 87.8274, KL_loss: 0.1061, MMD_loss: 0.3094\nEpoch 104/300 Batch 0000/15, Loss: 88.1113, rec_loss: 87.7372, KL_loss: 0.1054, MMD_loss: 0.2687\nEpoch 104/300 Batch 0015/15, Loss: 89.8001, rec_loss: 89.3401, KL_loss: 0.1080, MMD_loss: 0.3519\nEpoch 105/300 Batch 0000/15, Loss: 87.8315, rec_loss: 87.4366, KL_loss: 0.1037, MMD_loss: 0.2912\nEpoch 105/300 Batch 0015/15, Loss: 91.4892, rec_loss: 91.1186, KL_loss: 0.1053, MMD_loss: 0.2654\nEpoch 106/300 Batch 0000/15, Loss: 88.4432, rec_loss: 88.0351, KL_loss: 0.1069, MMD_loss: 0.3012\nEpoch 106/300 Batch 0015/15, Loss: 93.6184, rec_loss: 93.1537, KL_loss: 0.1050, MMD_loss: 0.3597\nEpoch 107/300 Batch 0000/15, Loss: 90.2500, rec_loss: 89.8318, KL_loss: 0.1085, MMD_loss: 0.3098\nEpoch 107/300 Batch 0015/15, Loss: 89.5173, rec_loss: 89.2185, KL_loss: 0.1072, MMD_loss: 0.1916\nEpoch 108/300 Batch 0000/15, Loss: 89.8634, rec_loss: 89.3966, KL_loss: 0.1074, MMD_loss: 0.3595\nEpoch 108/300 Batch 0015/15, Loss: 88.7810, rec_loss: 88.3306, KL_loss: 0.1071, MMD_loss: 0.3432\nEpoch 109/300 Batch 0000/15, Loss: 90.3845, rec_loss: 89.9268, KL_loss: 0.1086, MMD_loss: 0.3490\nEpoch 109/300 Batch 0015/15, Loss: 88.2346, rec_loss: 87.7820, KL_loss: 0.1086, MMD_loss: 0.3440\nEpoch 110/300 Batch 0000/15, Loss: 87.9095, rec_loss: 87.4922, KL_loss: 0.1090, MMD_loss: 0.3083\nEpoch 110/300 Batch 0015/15, Loss: 88.7859, rec_loss: 88.3493, KL_loss: 0.1106, MMD_loss: 0.3261\nEpoch 111/300 Batch 0000/15, Loss: 87.6091, rec_loss: 87.1905, KL_loss: 0.1091, MMD_loss: 0.3094\nEpoch 111/300 Batch 0015/15, Loss: 89.0433, rec_loss: 88.7299, KL_loss: 0.1076, MMD_loss: 0.2058\nEpoch 112/300 Batch 0000/15, Loss: 87.1827, rec_loss: 86.7882, KL_loss: 0.1081, MMD_loss: 0.2864\nEpoch 112/300 Batch 0015/15, Loss: 88.4161, rec_loss: 88.0596, KL_loss: 0.1090, MMD_loss: 0.2474\nEpoch 113/300 Batch 0000/15, Loss: 90.8628, rec_loss: 90.4534, KL_loss: 0.1114, MMD_loss: 0.2980\nEpoch 113/300 Batch 0015/15, Loss: 91.4313, rec_loss: 90.9931, KL_loss: 0.1101, MMD_loss: 0.3280\nEpoch 114/300 Batch 0000/15, Loss: 89.0241, rec_loss: 88.6243, KL_loss: 0.1073, MMD_loss: 0.2925\nEpoch 114/300 Batch 0015/15, Loss: 88.0171, rec_loss: 87.5845, KL_loss: 0.1097, MMD_loss: 0.3229\nEpoch 115/300 Batch 0000/15, Loss: 87.7467, rec_loss: 87.4302, KL_loss: 0.1083, MMD_loss: 0.2083\nEpoch 115/300 Batch 0015/15, 
Loss: 88.7629, rec_loss: 88.3505, KL_loss: 0.1128, MMD_loss: 0.2997\nEpoch 116/300 Batch 0000/15, Loss: 90.1939, rec_loss: 89.8031, KL_loss: 0.1102, MMD_loss: 0.2806\nEpoch 116/300 Batch 0015/15, Loss: 88.7055, rec_loss: 88.3660, KL_loss: 0.1125, MMD_loss: 0.2270\nEpoch 117/300 Batch 0000/15, Loss: 87.8846, rec_loss: 87.5487, KL_loss: 0.1128, MMD_loss: 0.2231\nEpoch 117/300 Batch 0015/15, Loss: 92.2975, rec_loss: 91.8634, KL_loss: 0.1108, MMD_loss: 0.3233\nEpoch 118/300 Batch 0000/15, Loss: 89.1429, rec_loss: 88.7871, KL_loss: 0.1106, MMD_loss: 0.2452\nEpoch 118/300 Batch 0015/15, Loss: 85.0030, rec_loss: 84.6192, KL_loss: 0.1108, MMD_loss: 0.2730\nEpoch 119/300 Batch 0000/15, Loss: 90.6764, rec_loss: 90.3253, KL_loss: 0.1108, MMD_loss: 0.2402\nEpoch 119/300 Batch 0015/15, Loss: 90.7650, rec_loss: 90.3730, KL_loss: 0.1125, MMD_loss: 0.2795\nEpoch 120/300 Batch 0000/15, Loss: 87.7364, rec_loss: 87.3324, KL_loss: 0.1099, MMD_loss: 0.2941\nEpoch 120/300 Batch 0015/15, Loss: 91.0855, rec_loss: 90.7539, KL_loss: 0.1104, MMD_loss: 0.2212\nEpoch 120, Loss_valid: 127.5483, rec_loss_valid: 121.5923, KL_loss_valid: 0.1555, MMD_loss: 3.8274 \nEpoch 121/300 Batch 0000/15, Loss: 89.2908, rec_loss: 88.9319, KL_loss: 0.1119, MMD_loss: 0.2471\nEpoch 121/300 Batch 0015/15, Loss: 87.2757, rec_loss: 86.9007, KL_loss: 0.1085, MMD_loss: 0.2666\nEpoch 122/300 Batch 0000/15, Loss: 87.5590, rec_loss: 87.1478, KL_loss: 0.1161, MMD_loss: 0.2952\nEpoch 122/300 Batch 0015/15, Loss: 89.0756, rec_loss: 88.7392, KL_loss: 0.1120, MMD_loss: 0.2244\nEpoch 123/300 Batch 0000/15, Loss: 85.5059, rec_loss: 85.1318, KL_loss: 0.1116, MMD_loss: 0.2625\nEpoch 123/300 Batch 0015/15, Loss: 90.9548, rec_loss: 90.6146, KL_loss: 0.1117, MMD_loss: 0.2285\nEpoch 124/300 Batch 0000/15, Loss: 88.2299, rec_loss: 87.8696, KL_loss: 0.1120, MMD_loss: 0.2484\nEpoch 124/300 Batch 0015/15, Loss: 88.1978, rec_loss: 87.7168, KL_loss: 0.1129, MMD_loss: 0.3681\nEpoch 125/300 Batch 0000/15, Loss: 88.1732, rec_loss: 87.8145, KL_loss: 0.1132, MMD_loss: 0.2456\nEpoch 125/300 Batch 0015/15, Loss: 89.2189, rec_loss: 88.9122, KL_loss: 0.1160, MMD_loss: 0.1908\nEpoch 126/300 Batch 0000/15, Loss: 88.2878, rec_loss: 87.8653, KL_loss: 0.1140, MMD_loss: 0.3085\nEpoch 126/300 Batch 0015/15, Loss: 88.2460, rec_loss: 87.8750, KL_loss: 0.1155, MMD_loss: 0.2556\nEpoch 127/300 Batch 0000/15, Loss: 88.3410, rec_loss: 87.9899, KL_loss: 0.1125, MMD_loss: 0.2387\nEpoch 127/300 Batch 0015/15, Loss: 88.7271, rec_loss: 88.3642, KL_loss: 0.1160, MMD_loss: 0.2468\nEpoch 128/300 Batch 0000/15, Loss: 87.0356, rec_loss: 86.6453, KL_loss: 0.1202, MMD_loss: 0.2702\nEpoch 128/300 Batch 0015/15, Loss: 86.0290, rec_loss: 85.5755, KL_loss: 0.1197, MMD_loss: 0.3337\nEpoch 129/300 Batch 0000/15, Loss: 88.9025, rec_loss: 88.5112, KL_loss: 0.1165, MMD_loss: 0.2748\nEpoch 129/300 Batch 0015/15, Loss: 92.2843, rec_loss: 91.9315, KL_loss: 0.1176, MMD_loss: 0.2352\nEpoch 130/300 Batch 0000/15, Loss: 87.2924, rec_loss: 86.9573, KL_loss: 0.1194, MMD_loss: 0.2158\nEpoch 130/300 Batch 0015/15, Loss: 90.5726, rec_loss: 90.1363, KL_loss: 0.1174, MMD_loss: 0.3188\nEpoch 131/300 Batch 0000/15, Loss: 90.5163, rec_loss: 90.1232, KL_loss: 0.1179, MMD_loss: 0.2752\nEpoch 131/300 Batch 0015/15, Loss: 87.7059, rec_loss: 87.4027, KL_loss: 0.1149, MMD_loss: 0.1883\nEpoch 132/300 Batch 0000/15, Loss: 88.7586, rec_loss: 88.4262, KL_loss: 0.1192, MMD_loss: 0.2132\nEpoch 132/300 Batch 0015/15, Loss: 89.1920, rec_loss: 88.8027, KL_loss: 0.1172, MMD_loss: 0.2722\nEpoch 133/300 Batch 0000/15, Loss: 88.6928, 
rec_loss: 88.3459, KL_loss: 0.1207, MMD_loss: 0.2261\nEpoch 133/300 Batch 0015/15, Loss: 90.9414, rec_loss: 90.5504, KL_loss: 0.1166, MMD_loss: 0.2743\nEpoch 134/300 Batch 0000/15, Loss: 89.9720, rec_loss: 89.5618, KL_loss: 0.1198, MMD_loss: 0.2904\nEpoch 134/300 Batch 0015/15, Loss: 90.3130, rec_loss: 89.9131, KL_loss: 0.1178, MMD_loss: 0.2821\nEpoch 135/300 Batch 0000/15, Loss: 88.3656, rec_loss: 87.9372, KL_loss: 0.1206, MMD_loss: 0.3078\nEpoch 135/300 Batch 0015/15, Loss: 88.9705, rec_loss: 88.6472, KL_loss: 0.1180, MMD_loss: 0.2053\nEpoch 136/300 Batch 0000/15, Loss: 89.7667, rec_loss: 89.4369, KL_loss: 0.1157, MMD_loss: 0.2142\nEpoch 136/300 Batch 0015/15, Loss: 88.8610, rec_loss: 88.5456, KL_loss: 0.1233, MMD_loss: 0.1921\nEpoch 137/300 Batch 0000/15, Loss: 90.0844, rec_loss: 89.7258, KL_loss: 0.1185, MMD_loss: 0.2401\nEpoch 137/300 Batch 0015/15, Loss: 88.9036, rec_loss: 88.6384, KL_loss: 0.1212, MMD_loss: 0.1440\nEpoch 138/300 Batch 0000/15, Loss: 88.3927, rec_loss: 87.9791, KL_loss: 0.1234, MMD_loss: 0.2902\nEpoch 138/300 Batch 0015/15, Loss: 89.5292, rec_loss: 89.1558, KL_loss: 0.1191, MMD_loss: 0.2544\nEpoch 139/300 Batch 0000/15, Loss: 88.1120, rec_loss: 87.7328, KL_loss: 0.1200, MMD_loss: 0.2592\nEpoch 139/300 Batch 0015/15, Loss: 89.5844, rec_loss: 89.2171, KL_loss: 0.1230, MMD_loss: 0.2442\nEpoch 140/300 Batch 0000/15, Loss: 90.3349, rec_loss: 89.9604, KL_loss: 0.1217, MMD_loss: 0.2528\nEpoch 140/300 Batch 0015/15, Loss: 91.1501, rec_loss: 90.7753, KL_loss: 0.1237, MMD_loss: 0.2510\nEpoch 140, Loss_valid: 126.5580, rec_loss_valid: 120.4248, KL_loss_valid: 0.1702, MMD_loss: 3.9073 \nEpoch 141/300 Batch 0000/15, Loss: 89.3135, rec_loss: 88.9442, KL_loss: 0.1197, MMD_loss: 0.2496\nEpoch 141/300 Batch 0015/15, Loss: 88.6364, rec_loss: 88.2278, KL_loss: 0.1175, MMD_loss: 0.2911\nEpoch 142/300 Batch 0000/15, Loss: 88.1951, rec_loss: 87.8624, KL_loss: 0.1199, MMD_loss: 0.2128\nEpoch 142/300 Batch 0015/15, Loss: 88.4603, rec_loss: 88.0414, KL_loss: 0.1218, MMD_loss: 0.2971\nEpoch 143/300 Batch 0000/15, Loss: 89.8602, rec_loss: 89.5188, KL_loss: 0.1204, MMD_loss: 0.2210\nEpoch 143/300 Batch 0015/15, Loss: 85.8937, rec_loss: 85.4749, KL_loss: 0.1203, MMD_loss: 0.2985\nEpoch 144/300 Batch 0000/15, Loss: 86.5974, rec_loss: 86.1573, KL_loss: 0.1210, MMD_loss: 0.3191\nEpoch 144/300 Batch 0015/15, Loss: 86.8744, rec_loss: 86.5107, KL_loss: 0.1179, MMD_loss: 0.2459\nEpoch 145/300 Batch 0000/15, Loss: 89.4886, rec_loss: 89.1479, KL_loss: 0.1178, MMD_loss: 0.2229\nEpoch 145/300 Batch 0015/15, Loss: 88.6853, rec_loss: 88.2929, KL_loss: 0.1213, MMD_loss: 0.2711\nEpoch 146/300 Batch 0000/15, Loss: 85.0950, rec_loss: 84.6185, KL_loss: 0.1235, MMD_loss: 0.3530\nEpoch 146/300 Batch 0015/15, Loss: 89.4926, rec_loss: 89.1683, KL_loss: 0.1252, MMD_loss: 0.1990\nEpoch 147/300 Batch 0000/15, Loss: 89.5236, rec_loss: 89.2025, KL_loss: 0.1199, MMD_loss: 0.2012\nEpoch 147/300 Batch 0015/15, Loss: 88.5093, rec_loss: 88.1444, KL_loss: 0.1213, MMD_loss: 0.2436\nEpoch 148/300 Batch 0000/15, Loss: 89.1446, rec_loss: 88.7670, KL_loss: 0.1230, MMD_loss: 0.2546\nEpoch 148/300 Batch 0015/15, Loss: 90.2513, rec_loss: 89.8243, KL_loss: 0.1278, MMD_loss: 0.2991\nEpoch 149/300 Batch 0000/15, Loss: 86.8023, rec_loss: 86.3759, KL_loss: 0.1206, MMD_loss: 0.3058\nEpoch 149/300 Batch 0015/15, Loss: 91.1062, rec_loss: 90.6061, KL_loss: 0.1224, MMD_loss: 0.3777\nEpoch 150/300 Batch 0000/15, Loss: 87.4548, rec_loss: 87.0979, KL_loss: 0.1222, MMD_loss: 0.2347\nEpoch 150/300 Batch 0015/15, Loss: 87.5407, rec_loss: 87.2262, 
KL_loss: 0.1234, MMD_loss: 0.1911\nEpoch 151/300 Batch 0000/15, Loss: 89.8401, rec_loss: 89.4523, KL_loss: 0.1234, MMD_loss: 0.2645\nEpoch 151/300 Batch 0015/15, Loss: 85.9707, rec_loss: 85.4893, KL_loss: 0.1227, MMD_loss: 0.3587\nEpoch 152/300 Batch 0000/15, Loss: 90.4187, rec_loss: 90.0631, KL_loss: 0.1245, MMD_loss: 0.2311\nEpoch 152/300 Batch 0015/15, Loss: 87.0506, rec_loss: 86.7068, KL_loss: 0.1245, MMD_loss: 0.2193\nEpoch 153/300 Batch 0000/15, Loss: 87.5748, rec_loss: 87.2570, KL_loss: 0.1260, MMD_loss: 0.1918\nEpoch 153/300 Batch 0015/15, Loss: 89.8107, rec_loss: 89.5165, KL_loss: 0.1220, MMD_loss: 0.1721\nEpoch 154/300 Batch 0000/15, Loss: 87.6464, rec_loss: 87.2983, KL_loss: 0.1223, MMD_loss: 0.2258\nEpoch 154/300 Batch 0015/15, Loss: 89.3731, rec_loss: 89.0122, KL_loss: 0.1266, MMD_loss: 0.2344\nEpoch 155/300 Batch 0000/15, Loss: 87.4869, rec_loss: 87.1701, KL_loss: 0.1257, MMD_loss: 0.1912\nEpoch 155/300 Batch 0015/15, Loss: 88.0939, rec_loss: 87.7582, KL_loss: 0.1285, MMD_loss: 0.2072\nEpoch 156/300 Batch 0000/15, Loss: 86.1788, rec_loss: 85.8459, KL_loss: 0.1293, MMD_loss: 0.2036\nEpoch 156/300 Batch 0015/15, Loss: 90.0082, rec_loss: 89.6387, KL_loss: 0.1278, MMD_loss: 0.2417\nEpoch 157/300 Batch 0000/15, Loss: 88.3439, rec_loss: 88.0302, KL_loss: 0.1274, MMD_loss: 0.1863\nEpoch 157/300 Batch 0015/15, Loss: 85.9087, rec_loss: 85.5724, KL_loss: 0.1268, MMD_loss: 0.2095\nEpoch 158/300 Batch 0000/15, Loss: 87.7266, rec_loss: 87.3521, KL_loss: 0.1255, MMD_loss: 0.2490\nEpoch 158/300 Batch 0015/15, Loss: 92.3338, rec_loss: 92.0211, KL_loss: 0.1292, MMD_loss: 0.1834\nEpoch 159/300 Batch 0000/15, Loss: 89.2891, rec_loss: 88.9120, KL_loss: 0.1253, MMD_loss: 0.2518\nEpoch 159/300 Batch 0015/15, Loss: 88.0859, rec_loss: 87.7340, KL_loss: 0.1309, MMD_loss: 0.2209\nEpoch 160/300 Batch 0000/15, Loss: 87.1688, rec_loss: 86.7196, KL_loss: 0.1224, MMD_loss: 0.3268\nEpoch 160/300 Batch 0015/15, Loss: 89.3347, rec_loss: 88.9292, KL_loss: 0.1242, MMD_loss: 0.2813\nEpoch 160, Loss_valid: 126.2264, rec_loss_valid: 120.2713, KL_loss_valid: 0.1736, MMD_loss: 4.2306 \nEpoch 161/300 Batch 0000/15, Loss: 89.7916, rec_loss: 89.4058, KL_loss: 0.1282, MMD_loss: 0.2576\nEpoch 161/300 Batch 0015/15, Loss: 90.2983, rec_loss: 89.9666, KL_loss: 0.1271, MMD_loss: 0.2046\nEpoch 162/300 Batch 0000/15, Loss: 88.9470, rec_loss: 88.5775, KL_loss: 0.1268, MMD_loss: 0.2427\nEpoch 162/300 Batch 0015/15, Loss: 85.2226, rec_loss: 84.9323, KL_loss: 0.1309, MMD_loss: 0.1595\nEpoch 163/300 Batch 0000/15, Loss: 86.0770, rec_loss: 85.6765, KL_loss: 0.1281, MMD_loss: 0.2724\nEpoch 163/300 Batch 0015/15, Loss: 88.6745, rec_loss: 88.2648, KL_loss: 0.1327, MMD_loss: 0.2770\nEpoch 164/300 Batch 0000/15, Loss: 86.9365, rec_loss: 86.6084, KL_loss: 0.1308, MMD_loss: 0.1973\nEpoch 164/300 Batch 0015/15, Loss: 89.9820, rec_loss: 89.5923, KL_loss: 0.1269, MMD_loss: 0.2627\nEpoch 165/300 Batch 0000/15, Loss: 88.0868, rec_loss: 87.7519, KL_loss: 0.1311, MMD_loss: 0.2038\nEpoch 165/300 Batch 0015/15, Loss: 90.5309, rec_loss: 90.1790, KL_loss: 0.1291, MMD_loss: 0.2229\nEpoch 166/300 Batch 0000/15, Loss: 86.9595, rec_loss: 86.6016, KL_loss: 0.1246, MMD_loss: 0.2333\nEpoch 166/300 Batch 0015/15, Loss: 90.8941, rec_loss: 90.4976, KL_loss: 0.1290, MMD_loss: 0.2674\nEpoch 167/300 Batch 0000/15, Loss: 87.8583, rec_loss: 87.5056, KL_loss: 0.1299, MMD_loss: 0.2228\nEpoch 167/300 Batch 0015/15, Loss: 85.9312, rec_loss: 85.5186, KL_loss: 0.1338, MMD_loss: 0.2789\nEpoch 168/300 Batch 0000/15, Loss: 86.4583, rec_loss: 86.0924, KL_loss: 0.1261, 
MMD_loss: 0.2398\nEpoch 168/300 Batch 0015/15, Loss: 86.6182, rec_loss: 86.2894, KL_loss: 0.1318, MMD_loss: 0.1971\nEpoch 169/300 Batch 0000/15, Loss: 86.3219, rec_loss: 85.9717, KL_loss: 0.1275, MMD_loss: 0.2227\nEpoch 169/300 Batch 0015/15, Loss: 88.9146, rec_loss: 88.5426, KL_loss: 0.1253, MMD_loss: 0.2467\nEpoch 170/300 Batch 0000/15, Loss: 89.7609, rec_loss: 89.3936, KL_loss: 0.1312, MMD_loss: 0.2361\nEpoch 170/300 Batch 0015/15, Loss: 89.3019, rec_loss: 88.9514, KL_loss: 0.1305, MMD_loss: 0.2200\nEpoch 171/300 Batch 0000/15, Loss: 91.0778, rec_loss: 90.7329, KL_loss: 0.1336, MMD_loss: 0.2113\nEpoch 171/300 Batch 0015/15, Loss: 86.9147, rec_loss: 86.6018, KL_loss: 0.1283, MMD_loss: 0.1846\nEpoch 172/300 Batch 0000/15, Loss: 86.3956, rec_loss: 86.0424, KL_loss: 0.1284, MMD_loss: 0.2248\nEpoch 172/300 Batch 0015/15, Loss: 85.7617, rec_loss: 85.4333, KL_loss: 0.1278, MMD_loss: 0.2006\nEpoch 173/300 Batch 0000/15, Loss: 88.2717, rec_loss: 87.9224, KL_loss: 0.1274, MMD_loss: 0.2219\nEpoch 173/300 Batch 0015/15, Loss: 90.8000, rec_loss: 90.5048, KL_loss: 0.1335, MMD_loss: 0.1618\nEpoch 174/300 Batch 0000/15, Loss: 85.0268, rec_loss: 84.6709, KL_loss: 0.1297, MMD_loss: 0.2261\nEpoch 174/300 Batch 0015/15, Loss: 85.1140, rec_loss: 84.7372, KL_loss: 0.1311, MMD_loss: 0.2457\nEpoch 175/300 Batch 0000/15, Loss: 87.5796, rec_loss: 87.1231, KL_loss: 0.1308, MMD_loss: 0.3257\nEpoch 175/300 Batch 0015/15, Loss: 86.8532, rec_loss: 86.4276, KL_loss: 0.1323, MMD_loss: 0.2934\nEpoch 176/300 Batch 0000/15, Loss: 85.9746, rec_loss: 85.5435, KL_loss: 0.1317, MMD_loss: 0.2993\nEpoch 176/300 Batch 0015/15, Loss: 88.6991, rec_loss: 88.3107, KL_loss: 0.1285, MMD_loss: 0.2600\nEpoch 177/300 Batch 0000/15, Loss: 89.0990, rec_loss: 88.7472, KL_loss: 0.1311, MMD_loss: 0.2207\nEpoch 177/300 Batch 0015/15, Loss: 86.5646, rec_loss: 86.2345, KL_loss: 0.1335, MMD_loss: 0.1966\nEpoch 178/300 Batch 0000/15, Loss: 90.8889, rec_loss: 90.5713, KL_loss: 0.1360, MMD_loss: 0.1816\nEpoch 178/300 Batch 0015/15, Loss: 85.7043, rec_loss: 85.2819, KL_loss: 0.1310, MMD_loss: 0.2914\nEpoch 179/300 Batch 0000/15, Loss: 87.3548, rec_loss: 87.0078, KL_loss: 0.1316, MMD_loss: 0.2153\nEpoch 179/300 Batch 0015/15, Loss: 84.0229, rec_loss: 83.7026, KL_loss: 0.1365, MMD_loss: 0.1837\nEpoch 180/300 Batch 0000/15, Loss: 85.9664, rec_loss: 85.5246, KL_loss: 0.1347, MMD_loss: 0.3072\nEpoch 180/300 Batch 0015/15, Loss: 86.9704, rec_loss: 86.5538, KL_loss: 0.1352, MMD_loss: 0.2814\nEpoch 180, Loss_valid: 125.1221, rec_loss_valid: 119.5832, KL_loss_valid: 0.1834, MMD_loss: 3.7247 \nEpoch 181/300 Batch 0000/15, Loss: 84.9083, rec_loss: 84.4811, KL_loss: 0.1332, MMD_loss: 0.2941\nEpoch 181/300 Batch 0015/15, Loss: 87.3144, rec_loss: 86.9993, KL_loss: 0.1370, MMD_loss: 0.1781\nEpoch 182/300 Batch 0000/15, Loss: 87.8895, rec_loss: 87.4660, KL_loss: 0.1327, MMD_loss: 0.2908\nEpoch 182/300 Batch 0015/15, Loss: 83.8019, rec_loss: 83.4654, KL_loss: 0.1350, MMD_loss: 0.2015\nEpoch 183/300 Batch 0000/15, Loss: 83.3539, rec_loss: 82.9707, KL_loss: 0.1344, MMD_loss: 0.2488\nEpoch 183/300 Batch 0015/15, Loss: 89.7860, rec_loss: 89.4328, KL_loss: 0.1364, MMD_loss: 0.2169\nEpoch 184/300 Batch 0000/15, Loss: 86.5803, rec_loss: 86.2503, KL_loss: 0.1369, MMD_loss: 0.1931\nEpoch 184/300 Batch 0015/15, Loss: 85.3249, rec_loss: 84.9505, KL_loss: 0.1332, MMD_loss: 0.2413\nEpoch 185/300 Batch 0000/15, Loss: 86.7897, rec_loss: 86.4212, KL_loss: 0.1346, MMD_loss: 0.2339\nEpoch 185/300 Batch 0015/15, Loss: 85.7548, rec_loss: 85.3941, KL_loss: 0.1356, MMD_loss: 0.2250\nEpoch 
186/300 Batch 0000/15, Loss: 86.2554, rec_loss: 85.8820, KL_loss: 0.1340, MMD_loss: 0.2394\nEpoch 186/300 Batch 0015/15, Loss: 85.6995, rec_loss: 85.3450, KL_loss: 0.1297, MMD_loss: 0.2248\nEpoch 187/300 Batch 0000/15, Loss: 88.7440, rec_loss: 88.4054, KL_loss: 0.1328, MMD_loss: 0.2058\nEpoch 187/300 Batch 0015/15, Loss: 86.0549, rec_loss: 85.7530, KL_loss: 0.1335, MMD_loss: 0.1685\nEpoch 188/300 Batch 0000/15, Loss: 85.2332, rec_loss: 84.9189, KL_loss: 0.1355, MMD_loss: 0.1789\nEpoch 188/300 Batch 0015/15, Loss: 89.6851, rec_loss: 89.2290, KL_loss: 0.1325, MMD_loss: 0.3235\nEpoch 189/300 Batch 0000/15, Loss: 85.5008, rec_loss: 85.1507, KL_loss: 0.1352, MMD_loss: 0.2149\nEpoch 189/300 Batch 0015/15, Loss: 90.4378, rec_loss: 89.9906, KL_loss: 0.1322, MMD_loss: 0.3150\nEpoch 190/300 Batch 0000/15, Loss: 88.6927, rec_loss: 88.3223, KL_loss: 0.1374, MMD_loss: 0.2330\nEpoch 190/300 Batch 0015/15, Loss: 89.3348, rec_loss: 88.9564, KL_loss: 0.1345, MMD_loss: 0.2439\nEpoch 191/300 Batch 0000/15, Loss: 86.3476, rec_loss: 86.0039, KL_loss: 0.1351, MMD_loss: 0.2086\nEpoch 191/300 Batch 0015/15, Loss: 85.3554, rec_loss: 84.8988, KL_loss: 0.1370, MMD_loss: 0.3197\nEpoch 192/300 Batch 0000/15, Loss: 86.7130, rec_loss: 86.3303, KL_loss: 0.1371, MMD_loss: 0.2455\nEpoch 192/300 Batch 0015/15, Loss: 85.3010, rec_loss: 84.9670, KL_loss: 0.1372, MMD_loss: 0.1969\nEpoch 193/300 Batch 0000/15, Loss: 84.8269, rec_loss: 84.4691, KL_loss: 0.1346, MMD_loss: 0.2231\nEpoch 193/300 Batch 0015/15, Loss: 90.7365, rec_loss: 90.4243, KL_loss: 0.1354, MMD_loss: 0.1769\nEpoch 194/300 Batch 0000/15, Loss: 84.5253, rec_loss: 84.1461, KL_loss: 0.1356, MMD_loss: 0.2436\nEpoch 194/300 Batch 0015/15, Loss: 87.9853, rec_loss: 87.6732, KL_loss: 0.1416, MMD_loss: 0.1704\nEpoch 195/300 Batch 0000/15, Loss: 84.0531, rec_loss: 83.7186, KL_loss: 0.1384, MMD_loss: 0.1961\nEpoch 195/300 Batch 0015/15, Loss: 85.1213, rec_loss: 84.7920, KL_loss: 0.1342, MMD_loss: 0.1951\nEpoch 196/300 Batch 0000/15, Loss: 87.5889, rec_loss: 87.2175, KL_loss: 0.1340, MMD_loss: 0.2374\nEpoch 196/300 Batch 0015/15, Loss: 83.2978, rec_loss: 82.9444, KL_loss: 0.1393, MMD_loss: 0.2140\nEpoch 197/300 Batch 0000/15, Loss: 85.8766, rec_loss: 85.5166, KL_loss: 0.1327, MMD_loss: 0.2272\nEpoch 197/300 Batch 0015/15, Loss: 88.0670, rec_loss: 87.6445, KL_loss: 0.1362, MMD_loss: 0.2863\nEpoch 198/300 Batch 0000/15, Loss: 86.2430, rec_loss: 85.8110, KL_loss: 0.1354, MMD_loss: 0.2965\nEpoch 198/300 Batch 0015/15, Loss: 83.8788, rec_loss: 83.5571, KL_loss: 0.1380, MMD_loss: 0.1837\nEpoch 199/300 Batch 0000/15, Loss: 84.0330, rec_loss: 83.6741, KL_loss: 0.1425, MMD_loss: 0.2164\nEpoch 199/300 Batch 0015/15, Loss: 86.2519, rec_loss: 85.8789, KL_loss: 0.1428, MMD_loss: 0.2302\nEpoch 200/300 Batch 0000/15, Loss: 84.0533, rec_loss: 83.7330, KL_loss: 0.1377, MMD_loss: 0.1827\nEpoch 200/300 Batch 0015/15, Loss: 85.5300, rec_loss: 85.1632, KL_loss: 0.1409, MMD_loss: 0.2259\nEpoch 200, Loss_valid: 124.2800, rec_loss_valid: 119.0530, KL_loss_valid: 0.1858, MMD_loss: 3.3483 \nEpoch 201/300 Batch 0000/15, Loss: 86.2822, rec_loss: 85.9560, KL_loss: 0.1400, MMD_loss: 0.1862\nEpoch 201/300 Batch 0015/15, Loss: 89.9936, rec_loss: 89.6210, KL_loss: 0.1309, MMD_loss: 0.2417\nEpoch 202/300 Batch 0000/15, Loss: 86.8724, rec_loss: 86.5253, KL_loss: 0.1359, MMD_loss: 0.2112\nEpoch 202/300 Batch 0015/15, Loss: 82.9704, rec_loss: 82.6361, KL_loss: 0.1375, MMD_loss: 0.1968\nEpoch 203/300 Batch 0000/15, Loss: 87.2075, rec_loss: 86.8328, KL_loss: 0.1364, MMD_loss: 0.2383\nEpoch 203/300 Batch 0015/15, 
Loss: 88.3064, rec_loss: 87.9104, KL_loss: 0.1424, MMD_loss: 0.2536\nEpoch 204/300 Batch 0000/15, Loss: 87.0690, rec_loss: 86.7269, KL_loss: 0.1418, MMD_loss: 0.2004\nEpoch 204/300 Batch 0015/15, Loss: 85.9447, rec_loss: 85.6079, KL_loss: 0.1392, MMD_loss: 0.1976\nEpoch 205/300 Batch 0000/15, Loss: 86.5376, rec_loss: 86.1882, KL_loss: 0.1431, MMD_loss: 0.2064\nEpoch 205/300 Batch 0015/15, Loss: 88.6525, rec_loss: 88.3306, KL_loss: 0.1415, MMD_loss: 0.1805\nEpoch 206/300 Batch 0000/15, Loss: 85.6330, rec_loss: 85.2722, KL_loss: 0.1434, MMD_loss: 0.2174\nEpoch 206/300 Batch 0015/15, Loss: 86.4835, rec_loss: 86.1232, KL_loss: 0.1378, MMD_loss: 0.2225\nEpoch 207/300 Batch 0000/15, Loss: 84.4147, rec_loss: 84.0469, KL_loss: 0.1409, MMD_loss: 0.2269\nEpoch 207/300 Batch 0015/15, Loss: 85.0716, rec_loss: 84.6940, KL_loss: 0.1381, MMD_loss: 0.2395\nEpoch 208/300 Batch 0000/15, Loss: 88.0116, rec_loss: 87.6433, KL_loss: 0.1416, MMD_loss: 0.2267\nEpoch 208/300 Batch 0015/15, Loss: 93.5977, rec_loss: 93.1857, KL_loss: 0.1383, MMD_loss: 0.2738\nEpoch 209/300 Batch 0000/15, Loss: 90.0350, rec_loss: 89.6710, KL_loss: 0.1414, MMD_loss: 0.2226\nEpoch 209/300 Batch 0015/15, Loss: 86.3961, rec_loss: 86.0564, KL_loss: 0.1383, MMD_loss: 0.2013\nEpoch 210/300 Batch 0000/15, Loss: 87.4505, rec_loss: 87.0760, KL_loss: 0.1406, MMD_loss: 0.2340\nEpoch 210/300 Batch 0015/15, Loss: 84.5745, rec_loss: 84.2279, KL_loss: 0.1368, MMD_loss: 0.2098\nEpoch 211/300 Batch 0000/15, Loss: 88.1957, rec_loss: 87.8372, KL_loss: 0.1379, MMD_loss: 0.2205\nEpoch 211/300 Batch 0015/15, Loss: 84.5081, rec_loss: 84.0275, KL_loss: 0.1415, MMD_loss: 0.3391\nEpoch 212/300 Batch 0000/15, Loss: 85.2223, rec_loss: 84.8610, KL_loss: 0.1427, MMD_loss: 0.2186\nEpoch 212/300 Batch 0015/15, Loss: 87.0632, rec_loss: 86.6579, KL_loss: 0.1386, MMD_loss: 0.2667\nEpoch 213/300 Batch 0000/15, Loss: 88.1333, rec_loss: 87.8203, KL_loss: 0.1378, MMD_loss: 0.1752\nEpoch 213/300 Batch 0015/15, Loss: 85.9710, rec_loss: 85.5446, KL_loss: 0.1493, MMD_loss: 0.2771\nEpoch 214/300 Batch 0000/15, Loss: 85.3925, rec_loss: 85.0419, KL_loss: 0.1420, MMD_loss: 0.2085\nEpoch 214/300 Batch 0015/15, Loss: 84.2050, rec_loss: 83.8637, KL_loss: 0.1375, MMD_loss: 0.2038\nEpoch 215/300 Batch 0000/15, Loss: 86.9918, rec_loss: 86.6890, KL_loss: 0.1395, MMD_loss: 0.1633\nEpoch 215/300 Batch 0015/15, Loss: 87.2533, rec_loss: 86.9370, KL_loss: 0.1379, MMD_loss: 0.1783\nEpoch 216/300 Batch 0000/15, Loss: 88.0970, rec_loss: 87.7339, KL_loss: 0.1410, MMD_loss: 0.2221\nEpoch 216/300 Batch 0015/15, Loss: 86.9183, rec_loss: 86.5214, KL_loss: 0.1364, MMD_loss: 0.2605\nEpoch 217/300 Batch 0000/15, Loss: 86.2235, rec_loss: 85.8666, KL_loss: 0.1410, MMD_loss: 0.2160\nEpoch 217/300 Batch 0015/15, Loss: 84.9105, rec_loss: 84.5366, KL_loss: 0.1426, MMD_loss: 0.2313\nEpoch 218/300 Batch 0000/15, Loss: 87.5593, rec_loss: 87.1741, KL_loss: 0.1417, MMD_loss: 0.2435\nEpoch 218/300 Batch 0015/15, Loss: 85.5720, rec_loss: 85.2095, KL_loss: 0.1379, MMD_loss: 0.2246\nEpoch 219/300 Batch 0000/15, Loss: 90.5775, rec_loss: 90.2292, KL_loss: 0.1422, MMD_loss: 0.2062\nEpoch 219/300 Batch 0015/15, Loss: 85.5877, rec_loss: 85.2069, KL_loss: 0.1430, MMD_loss: 0.2378\nEpoch 220/300 Batch 0000/15, Loss: 83.5327, rec_loss: 83.1530, KL_loss: 0.1443, MMD_loss: 0.2354\nEpoch 220/300 Batch 0015/15, Loss: 87.7274, rec_loss: 87.3978, KL_loss: 0.1433, MMD_loss: 0.1863\nEpoch 220, Loss_valid: 123.5621, rec_loss_valid: 118.7862, KL_loss_valid: 0.1914, MMD_loss: 2.7717 \nEpoch 221/300 Batch 0000/15, Loss: 85.3137, 
rec_loss: 84.9773, KL_loss: 0.1417, MMD_loss: 0.1946\nEpoch 221/300 Batch 0015/15, Loss: 84.7640, rec_loss: 84.3829, KL_loss: 0.1412, MMD_loss: 0.2398\nEpoch 222/300 Batch 0000/15, Loss: 85.0462, rec_loss: 84.6840, KL_loss: 0.1422, MMD_loss: 0.2200\nEpoch 222/300 Batch 0015/15, Loss: 84.3695, rec_loss: 84.0000, KL_loss: 0.1419, MMD_loss: 0.2276\nEpoch 223/300 Batch 0000/15, Loss: 84.6023, rec_loss: 84.2526, KL_loss: 0.1409, MMD_loss: 0.2088\nEpoch 223/300 Batch 0015/15, Loss: 84.4229, rec_loss: 84.0997, KL_loss: 0.1418, MMD_loss: 0.1814\nEpoch 224/300 Batch 0000/15, Loss: 85.6521, rec_loss: 85.2123, KL_loss: 0.1387, MMD_loss: 0.3012\nEpoch 224/300 Batch 0015/15, Loss: 87.2424, rec_loss: 86.8129, KL_loss: 0.1487, MMD_loss: 0.2809\nEpoch 225/300 Batch 0000/15, Loss: 86.8677, rec_loss: 86.4863, KL_loss: 0.1455, MMD_loss: 0.2358\nEpoch 225/300 Batch 0015/15, Loss: 88.0765, rec_loss: 87.6172, KL_loss: 0.1389, MMD_loss: 0.3204\nEpoch 226/300 Batch 0000/15, Loss: 83.3256, rec_loss: 82.9059, KL_loss: 0.1404, MMD_loss: 0.2793\nEpoch 226/300 Batch 0015/15, Loss: 86.7891, rec_loss: 86.3846, KL_loss: 0.1382, MMD_loss: 0.2664\nEpoch 227/300 Batch 0000/15, Loss: 84.5058, rec_loss: 84.1623, KL_loss: 0.1402, MMD_loss: 0.2034\nEpoch 227/300 Batch 0015/15, Loss: 83.5865, rec_loss: 83.1969, KL_loss: 0.1433, MMD_loss: 0.2464\nEpoch 228/300 Batch 0000/15, Loss: 83.8168, rec_loss: 83.4347, KL_loss: 0.1399, MMD_loss: 0.2423\nEpoch 228/300 Batch 0015/15, Loss: 83.0476, rec_loss: 82.7061, KL_loss: 0.1362, MMD_loss: 0.2053\nEpoch 229/300 Batch 0000/15, Loss: 87.7490, rec_loss: 87.3317, KL_loss: 0.1431, MMD_loss: 0.2742\nEpoch 229/300 Batch 0015/15, Loss: 85.4249, rec_loss: 85.0397, KL_loss: 0.1452, MMD_loss: 0.2400\nEpoch 230/300 Batch 0000/15, Loss: 86.8262, rec_loss: 86.4662, KL_loss: 0.1423, MMD_loss: 0.2178\nEpoch 230/300 Batch 0015/15, Loss: 86.6766, rec_loss: 86.2835, KL_loss: 0.1418, MMD_loss: 0.2513\nEpoch 231/300 Batch 0000/15, Loss: 86.7102, rec_loss: 86.4159, KL_loss: 0.1443, MMD_loss: 0.1500\nEpoch 231/300 Batch 0015/15, Loss: 84.2320, rec_loss: 83.8206, KL_loss: 0.1384, MMD_loss: 0.2729\nEpoch 232/300 Batch 0000/15, Loss: 88.1110, rec_loss: 87.7621, KL_loss: 0.1484, MMD_loss: 0.2005\nEpoch 232/300 Batch 0015/15, Loss: 87.1834, rec_loss: 86.7691, KL_loss: 0.1448, MMD_loss: 0.2695\nEpoch 233/300 Batch 0000/15, Loss: 86.5608, rec_loss: 86.2386, KL_loss: 0.1434, MMD_loss: 0.1789\nEpoch 233/300 Batch 0015/15, Loss: 83.1578, rec_loss: 82.7757, KL_loss: 0.1432, MMD_loss: 0.2390\nEpoch 234/300 Batch 0000/15, Loss: 90.9531, rec_loss: 90.6324, KL_loss: 0.1475, MMD_loss: 0.1732\nEpoch 234/300 Batch 0015/15, Loss: 82.6081, rec_loss: 82.2609, KL_loss: 0.1467, MMD_loss: 0.2005\nEpoch 235/300 Batch 0000/15, Loss: 87.6318, rec_loss: 87.3156, KL_loss: 0.1436, MMD_loss: 0.1727\nEpoch 235/300 Batch 0015/15, Loss: 87.3779, rec_loss: 87.0481, KL_loss: 0.1459, MMD_loss: 0.1840\nEpoch 236/300 Batch 0000/15, Loss: 86.9199, rec_loss: 86.5550, KL_loss: 0.1432, MMD_loss: 0.2218\nEpoch 236/300 Batch 0015/15, Loss: 88.0344, rec_loss: 87.6946, KL_loss: 0.1444, MMD_loss: 0.1955\nEpoch 237/300 Batch 0000/15, Loss: 87.6334, rec_loss: 87.3267, KL_loss: 0.1432, MMD_loss: 0.1635\nEpoch 237/300 Batch 0015/15, Loss: 84.9246, rec_loss: 84.5954, KL_loss: 0.1400, MMD_loss: 0.1892\nEpoch 238/300 Batch 0000/15, Loss: 84.5333, rec_loss: 84.2101, KL_loss: 0.1495, MMD_loss: 0.1737\nEpoch 238/300 Batch 0015/15, Loss: 84.3543, rec_loss: 83.9933, KL_loss: 0.1438, MMD_loss: 0.2173\nEpoch 239/300 Batch 0000/15, Loss: 84.1562, rec_loss: 83.7913, 
KL_loss: 0.1515, MMD_loss: 0.2135\nEpoch 239/300 Batch 0015/15, Loss: 87.0897, rec_loss: 86.7602, KL_loss: 0.1504, MMD_loss: 0.1791\nEpoch 240/300 Batch 0000/15, Loss: 85.8030, rec_loss: 85.4651, KL_loss: 0.1418, MMD_loss: 0.1961\nEpoch 240/300 Batch 0015/15, Loss: 82.7740, rec_loss: 82.3948, KL_loss: 0.1512, MMD_loss: 0.2281\nEpoch 240, Loss_valid: 123.8219, rec_loss_valid: 118.4634, KL_loss_valid: 0.1995, MMD_loss: 3.1537 \nEpoch 241/300 Batch 0000/15, Loss: 86.0661, rec_loss: 85.7260, KL_loss: 0.1472, MMD_loss: 0.1929\nEpoch 241/300 Batch 0015/15, Loss: 84.4234, rec_loss: 84.0797, KL_loss: 0.1432, MMD_loss: 0.2005\nEpoch 242/300 Batch 0000/15, Loss: 84.8255, rec_loss: 84.4818, KL_loss: 0.1455, MMD_loss: 0.1982\nEpoch 242/300 Batch 0015/15, Loss: 85.5318, rec_loss: 85.1285, KL_loss: 0.1398, MMD_loss: 0.2635\nEpoch 243/300 Batch 0000/15, Loss: 88.3046, rec_loss: 87.9592, KL_loss: 0.1483, MMD_loss: 0.1971\nEpoch 243/300 Batch 0015/15, Loss: 88.5712, rec_loss: 88.1337, KL_loss: 0.1413, MMD_loss: 0.2962\nEpoch 244/300 Batch 0000/15, Loss: 87.7348, rec_loss: 87.3761, KL_loss: 0.1466, MMD_loss: 0.2121\nEpoch 244/300 Batch 0015/15, Loss: 87.7916, rec_loss: 87.4096, KL_loss: 0.1470, MMD_loss: 0.2351\nEpoch 245/300 Batch 0000/15, Loss: 82.9217, rec_loss: 82.6270, KL_loss: 0.1486, MMD_loss: 0.1461\nEpoch 245/300 Batch 0015/15, Loss: 83.2609, rec_loss: 82.9164, KL_loss: 0.1488, MMD_loss: 0.1957\nEpoch 246/300 Batch 0000/15, Loss: 88.7493, rec_loss: 88.4182, KL_loss: 0.1415, MMD_loss: 0.1896\nEpoch 246/300 Batch 0015/15, Loss: 82.2598, rec_loss: 81.8970, KL_loss: 0.1472, MMD_loss: 0.2157\nEpoch 247/300 Batch 0000/15, Loss: 90.4163, rec_loss: 90.0831, KL_loss: 0.1395, MMD_loss: 0.1936\nEpoch 247/300 Batch 0015/15, Loss: 86.3580, rec_loss: 86.0281, KL_loss: 0.1501, MMD_loss: 0.1798\nEpoch 248/300 Batch 0000/15, Loss: 86.7393, rec_loss: 86.3516, KL_loss: 0.1457, MMD_loss: 0.2420\nEpoch 248/300 Batch 0015/15, Loss: 87.6098, rec_loss: 87.2754, KL_loss: 0.1536, MMD_loss: 0.1808\nEpoch 249/300 Batch 0000/15, Loss: 86.7256, rec_loss: 86.4075, KL_loss: 0.1482, MMD_loss: 0.1700\nEpoch 249/300 Batch 0015/15, Loss: 87.0603, rec_loss: 86.7443, KL_loss: 0.1531, MMD_loss: 0.1630\nEpoch 250/300 Batch 0000/15, Loss: 83.5984, rec_loss: 83.2197, KL_loss: 0.1453, MMD_loss: 0.2333\nEpoch 250/300 Batch 0015/15, Loss: 84.0626, rec_loss: 83.7110, KL_loss: 0.1446, MMD_loss: 0.2070\nEpoch 251/300 Batch 0000/15, Loss: 88.5731, rec_loss: 88.2176, KL_loss: 0.1491, MMD_loss: 0.2063\nEpoch 251/300 Batch 0015/15, Loss: 87.9734, rec_loss: 87.6139, KL_loss: 0.1452, MMD_loss: 0.2143\nEpoch 252/300 Batch 0000/15, Loss: 85.9991, rec_loss: 85.6478, KL_loss: 0.1474, MMD_loss: 0.2040\nEpoch 252/300 Batch 0015/15, Loss: 84.4947, rec_loss: 84.1539, KL_loss: 0.1452, MMD_loss: 0.1956\nEpoch 253/300 Batch 0000/15, Loss: 88.3712, rec_loss: 88.0013, KL_loss: 0.1476, MMD_loss: 0.2223\nEpoch 253/300 Batch 0015/15, Loss: 83.6163, rec_loss: 83.3138, KL_loss: 0.1451, MMD_loss: 0.1574\nEpoch 254/300 Batch 0000/15, Loss: 86.9609, rec_loss: 86.6329, KL_loss: 0.1477, MMD_loss: 0.1803\nEpoch 254/300 Batch 0015/15, Loss: 87.5038, rec_loss: 87.1372, KL_loss: 0.1486, MMD_loss: 0.2180\nEpoch 255/300 Batch 0000/15, Loss: 86.0919, rec_loss: 85.7701, KL_loss: 0.1447, MMD_loss: 0.1771\nEpoch 255/300 Batch 0015/15, Loss: 86.8041, rec_loss: 86.4161, KL_loss: 0.1496, MMD_loss: 0.2384\nEpoch 256/300 Batch 0000/15, Loss: 88.6750, rec_loss: 88.3555, KL_loss: 0.1471, MMD_loss: 0.1724\nEpoch 256/300 Batch 0015/15, Loss: 89.2324, rec_loss: 88.8875, KL_loss: 0.1485, 
MMD_loss: 0.1964\nEpoch 257/300 Batch 0000/15, Loss: 89.4353, rec_loss: 89.1117, KL_loss: 0.1437, MMD_loss: 0.1799\nEpoch 257/300 Batch 0015/15, Loss: 87.6113, rec_loss: 87.2150, KL_loss: 0.1478, MMD_loss: 0.2485\nEpoch 258/300 Batch 0000/15, Loss: 85.6923, rec_loss: 85.3545, KL_loss: 0.1503, MMD_loss: 0.1875\nEpoch 258/300 Batch 0015/15, Loss: 85.1296, rec_loss: 84.7987, KL_loss: 0.1505, MMD_loss: 0.1804\nEpoch 259/300 Batch 0000/15, Loss: 84.8438, rec_loss: 84.5101, KL_loss: 0.1508, MMD_loss: 0.1829\nEpoch 259/300 Batch 0015/15, Loss: 85.5994, rec_loss: 85.2819, KL_loss: 0.1481, MMD_loss: 0.1694\nEpoch 260/300 Batch 0000/15, Loss: 81.1966, rec_loss: 80.8510, KL_loss: 0.1514, MMD_loss: 0.1942\nEpoch 260/300 Batch 0015/15, Loss: 87.2830, rec_loss: 86.8694, KL_loss: 0.1462, MMD_loss: 0.2674\nEpoch 260, Loss_valid: 123.3512, rec_loss_valid: 118.2663, KL_loss_valid: 0.1979, MMD_loss: 3.0962 \nEpoch 261/300 Batch 0000/15, Loss: 85.3728, rec_loss: 85.0588, KL_loss: 0.1462, MMD_loss: 0.1678\nEpoch 261/300 Batch 0015/15, Loss: 87.3317, rec_loss: 87.0103, KL_loss: 0.1449, MMD_loss: 0.1766\nEpoch 262/300 Batch 0000/15, Loss: 87.9121, rec_loss: 87.5727, KL_loss: 0.1490, MMD_loss: 0.1904\nEpoch 262/300 Batch 0015/15, Loss: 86.2158, rec_loss: 85.7577, KL_loss: 0.1480, MMD_loss: 0.3101\nEpoch 263/300 Batch 0000/15, Loss: 85.1926, rec_loss: 84.8180, KL_loss: 0.1458, MMD_loss: 0.2288\nEpoch 263/300 Batch 0015/15, Loss: 87.7072, rec_loss: 87.3678, KL_loss: 0.1395, MMD_loss: 0.1999\nEpoch 264/300 Batch 0000/15, Loss: 86.3010, rec_loss: 85.9485, KL_loss: 0.1460, MMD_loss: 0.2065\nEpoch 264/300 Batch 0015/15, Loss: 86.5187, rec_loss: 86.0986, KL_loss: 0.1428, MMD_loss: 0.2773\nEpoch 265/300 Batch 0000/15, Loss: 88.7647, rec_loss: 88.4469, KL_loss: 0.1491, MMD_loss: 0.1687\nEpoch 265/300 Batch 0015/15, Loss: 86.7113, rec_loss: 86.2808, KL_loss: 0.1495, MMD_loss: 0.2811\nEpoch 266/300 Batch 0000/15, Loss: 88.0041, rec_loss: 87.6655, KL_loss: 0.1509, MMD_loss: 0.1876\nEpoch 266/300 Batch 0015/15, Loss: 84.2025, rec_loss: 83.7985, KL_loss: 0.1487, MMD_loss: 0.2553\nEpoch 267/300 Batch 0000/15, Loss: 86.6672, rec_loss: 86.3710, KL_loss: 0.1489, MMD_loss: 0.1473\nEpoch 267/300 Batch 0015/15, Loss: 86.5445, rec_loss: 86.1646, KL_loss: 0.1509, MMD_loss: 0.2290\nEpoch 268/300 Batch 0000/15, Loss: 84.7915, rec_loss: 84.4355, KL_loss: 0.1441, MMD_loss: 0.2119\nEpoch 268/300 Batch 0015/15, Loss: 85.5136, rec_loss: 85.1935, KL_loss: 0.1424, MMD_loss: 0.1777\nEpoch 269/300 Batch 0000/15, Loss: 85.1773, rec_loss: 84.8074, KL_loss: 0.1456, MMD_loss: 0.2242\nEpoch 269/300 Batch 0015/15, Loss: 88.0245, rec_loss: 87.6579, KL_loss: 0.1495, MMD_loss: 0.2171\nEpoch 270/300 Batch 0000/15, Loss: 87.5601, rec_loss: 87.2053, KL_loss: 0.1451, MMD_loss: 0.2097\nEpoch 270/300 Batch 0015/15, Loss: 90.6261, rec_loss: 90.2355, KL_loss: 0.1452, MMD_loss: 0.2453\nEpoch 271/300 Batch 0000/15, Loss: 84.9907, rec_loss: 84.6994, KL_loss: 0.1506, MMD_loss: 0.1407\nEpoch 271/300 Batch 0015/15, Loss: 87.7479, rec_loss: 87.4050, KL_loss: 0.1518, MMD_loss: 0.1911\nEpoch 272/300 Batch 0000/15, Loss: 87.4030, rec_loss: 87.0404, KL_loss: 0.1520, MMD_loss: 0.2106\nEpoch 272/300 Batch 0015/15, Loss: 85.9622, rec_loss: 85.6870, KL_loss: 0.1486, MMD_loss: 0.1266\nEpoch 273/300 Batch 0000/15, Loss: 85.8217, rec_loss: 85.4571, KL_loss: 0.1491, MMD_loss: 0.2155\nEpoch 273/300 Batch 0015/15, Loss: 86.6622, rec_loss: 86.3257, KL_loss: 0.1468, MMD_loss: 0.1898\nEpoch 274/300 Batch 0000/15, Loss: 83.7913, rec_loss: 83.3705, KL_loss: 0.1486, MMD_loss: 0.2722\nEpoch 
274/300 Batch 0015/15, Loss: 84.3117, rec_loss: 83.9763, KL_loss: 0.1563, MMD_loss: 0.1792\nEpoch 275/300 Batch 0000/15, Loss: 84.1764, rec_loss: 83.8558, KL_loss: 0.1483, MMD_loss: 0.1723\nEpoch 275/300 Batch 0015/15, Loss: 87.2294, rec_loss: 86.8381, KL_loss: 0.1509, MMD_loss: 0.2404\nEpoch 276/300 Batch 0000/15, Loss: 85.6623, rec_loss: 85.3322, KL_loss: 0.1515, MMD_loss: 0.1786\nEpoch 276/300 Batch 0015/15, Loss: 88.3208, rec_loss: 88.0046, KL_loss: 0.1508, MMD_loss: 0.1654\nEpoch 277/300 Batch 0000/15, Loss: 86.8883, rec_loss: 86.5071, KL_loss: 0.1480, MMD_loss: 0.2332\nEpoch 277/300 Batch 0015/15, Loss: 87.3065, rec_loss: 86.9926, KL_loss: 0.1499, MMD_loss: 0.1641\nEpoch 278/300 Batch 0000/15, Loss: 87.4449, rec_loss: 87.1045, KL_loss: 0.1516, MMD_loss: 0.1888\nEpoch 278/300 Batch 0015/15, Loss: 85.6328, rec_loss: 85.2216, KL_loss: 0.1530, MMD_loss: 0.2582\nEpoch 279/300 Batch 0000/15, Loss: 82.1571, rec_loss: 81.8217, KL_loss: 0.1495, MMD_loss: 0.1859\nEpoch 279/300 Batch 0015/15, Loss: 87.8493, rec_loss: 87.5590, KL_loss: 0.1506, MMD_loss: 0.1396\nEpoch 280/300 Batch 0000/15, Loss: 84.5547, rec_loss: 84.2167, KL_loss: 0.1479, MMD_loss: 0.1901\nEpoch 280/300 Batch 0015/15, Loss: 85.9424, rec_loss: 85.6495, KL_loss: 0.1526, MMD_loss: 0.1404\nEpoch 280, Loss_valid: 123.2934, rec_loss_valid: 118.1615, KL_loss_valid: 0.1997, MMD_loss: 4.2777 \nEpoch 281/300 Batch 0000/15, Loss: 83.8814, rec_loss: 83.5462, KL_loss: 0.1484, MMD_loss: 0.1868\nEpoch 281/300 Batch 0015/15, Loss: 83.4888, rec_loss: 83.1602, KL_loss: 0.1521, MMD_loss: 0.1765\nEpoch 282/300 Batch 0000/15, Loss: 87.3811, rec_loss: 87.0271, KL_loss: 0.1534, MMD_loss: 0.2006\nEpoch 282/300 Batch 0015/15, Loss: 85.2023, rec_loss: 84.9016, KL_loss: 0.1537, MMD_loss: 0.1470\nEpoch 283/300 Batch 0000/15, Loss: 83.9458, rec_loss: 83.6295, KL_loss: 0.1497, MMD_loss: 0.1666\nEpoch 283/300 Batch 0015/15, Loss: 84.5908, rec_loss: 84.2545, KL_loss: 0.1507, MMD_loss: 0.1857\nEpoch 284/300 Batch 0000/15, Loss: 84.3243, rec_loss: 83.9671, KL_loss: 0.1562, MMD_loss: 0.2010\nEpoch 284/300 Batch 0015/15, Loss: 87.9117, rec_loss: 87.5375, KL_loss: 0.1483, MMD_loss: 0.2259\nEpoch 285/300 Batch 0000/15, Loss: 86.9900, rec_loss: 86.6408, KL_loss: 0.1476, MMD_loss: 0.2016\nEpoch 285/300 Batch 0015/15, Loss: 85.7961, rec_loss: 85.4789, KL_loss: 0.1484, MMD_loss: 0.1688\nEpoch 286/300 Batch 0000/15, Loss: 82.4677, rec_loss: 82.1246, KL_loss: 0.1507, MMD_loss: 0.1925\nEpoch 286/300 Batch 0015/15, Loss: 83.1407, rec_loss: 82.7665, KL_loss: 0.1520, MMD_loss: 0.2222\nEpoch 287/300 Batch 0000/15, Loss: 86.4114, rec_loss: 86.0393, KL_loss: 0.1519, MMD_loss: 0.2201\nEpoch 287/300 Batch 0015/15, Loss: 89.6433, rec_loss: 89.3083, KL_loss: 0.1504, MMD_loss: 0.1846\nEpoch 288/300 Batch 0000/15, Loss: 82.7523, rec_loss: 82.4379, KL_loss: 0.1561, MMD_loss: 0.1584\nEpoch 288/300 Batch 0015/15, Loss: 86.7235, rec_loss: 86.3410, KL_loss: 0.1510, MMD_loss: 0.2315\nEpoch 289/300 Batch 0000/15, Loss: 86.2569, rec_loss: 85.9211, KL_loss: 0.1487, MMD_loss: 0.1871\nEpoch 289/300 Batch 0015/15, Loss: 85.0506, rec_loss: 84.6711, KL_loss: 0.1545, MMD_loss: 0.2249\nEpoch 290/300 Batch 0000/15, Loss: 82.9145, rec_loss: 82.5509, KL_loss: 0.1538, MMD_loss: 0.2097\nEpoch 290/300 Batch 0015/15, Loss: 88.0485, rec_loss: 87.6932, KL_loss: 0.1497, MMD_loss: 0.2056\nEpoch 291/300 Batch 0000/15, Loss: 85.7132, rec_loss: 85.3576, KL_loss: 0.1485, MMD_loss: 0.2072\nEpoch 291/300 Batch 0015/15, Loss: 86.5469, rec_loss: 86.1283, KL_loss: 0.1488, MMD_loss: 0.2698\nEpoch 292/300 Batch 0000/15, 
Loss: 83.9133, rec_loss: 83.5765, KL_loss: 0.1554, MMD_loss: 0.1815\nEpoch 292/300 Batch 0015/15, Loss: 84.2012, rec_loss: 83.8158, KL_loss: 0.1472, MMD_loss: 0.2382\nEpoch 293/300 Batch 0000/15, Loss: 84.2061, rec_loss: 83.8754, KL_loss: 0.1523, MMD_loss: 0.1785\nEpoch 293/300 Batch 0015/15, Loss: 86.4187, rec_loss: 86.1049, KL_loss: 0.1524, MMD_loss: 0.1614\nEpoch 294/300 Batch 0000/15, Loss: 86.9202, rec_loss: 86.5638, KL_loss: 0.1484, MMD_loss: 0.2080\nEpoch 294/300 Batch 0015/15, Loss: 88.4378, rec_loss: 88.0569, KL_loss: 0.1515, MMD_loss: 0.2294\nEpoch 295/300 Batch 0000/15, Loss: 88.3305, rec_loss: 87.9960, KL_loss: 0.1503, MMD_loss: 0.1842\nEpoch 295/300 Batch 0015/15, Loss: 84.2042, rec_loss: 83.8565, KL_loss: 0.1540, MMD_loss: 0.1937\nEpoch 296/300 Batch 0000/15, Loss: 84.8976, rec_loss: 84.5420, KL_loss: 0.1526, MMD_loss: 0.2030\nEpoch 296/300 Batch 0015/15, Loss: 85.6030, rec_loss: 85.2593, KL_loss: 0.1507, MMD_loss: 0.1929\nEpoch 297/300 Batch 0000/15, Loss: 85.1993, rec_loss: 84.9023, KL_loss: 0.1506, MMD_loss: 0.1464\nEpoch 297/300 Batch 0015/15, Loss: 85.9585, rec_loss: 85.5340, KL_loss: 0.1534, MMD_loss: 0.2711\nEpoch 298/300 Batch 0000/15, Loss: 82.7905, rec_loss: 82.4496, KL_loss: 0.1507, MMD_loss: 0.1902\nEpoch 298/300 Batch 0015/15, Loss: 88.2119, rec_loss: 87.8518, KL_loss: 0.1466, MMD_loss: 0.2134\nEpoch 299/300 Batch 0000/15, Loss: 87.1541, rec_loss: 86.7917, KL_loss: 0.1495, MMD_loss: 0.2128\nEpoch 299/300 Batch 0015/15, Loss: 85.1953, rec_loss: 84.9000, KL_loss: 0.1506, MMD_loss: 0.1447\n" ] ], [ [ "## Visualizing the MMD latent space", "_____no_output_____" ], [ "You can use the MMD layer representation for batch removal.", "_____no_output_____" ], [ "`c` can be the batch or the condition that you want your data to be mapped to.", "_____no_output_____" ] ], [ [ "latent_y = model.get_y(\n    adata.X, c=model.label_encoder.transform(np.tile(np.array(\"Control\"), len(adata))))\nadata_latent = sc.AnnData(latent_y)\nadata_latent.obs[\"cell_label\"] = adata.obs[\"cell_label\"].tolist()\nadata_latent.obs[condition_key] = adata.obs[condition_key].tolist()\nsc.pp.neighbors(adata_latent)\nsc.tl.umap(adata_latent)\nsc.pl.umap(adata_latent, color=[condition_key, \"cell_label\"])", "... storing 'cell_label' as categorical\n... storing 'condition' as categorical\n" ] ], [ [ "## Making predictions", "_____no_output_____" ] ], [ [ "ground_truth = adata[((adata.obs[\"cell_label\"] == \"TA\")\n                      & (adata.obs[\"condition\"].isin([\"Hpoly.Day10\", \"Control\"])))]\nadata_source = adata[(adata.obs[\"cell_label\"] == \"Stem\") &\n                     (adata.obs[\"condition\"] == \"Hpoly.Day10\")]\npredicted_data = model.predict(x=adata_source.X, y=adata_source.obs[\"condition\"].tolist(),\n                               target=\"Hpoly.Day10\")\nadata_pred = sc.AnnData(predicted_data)\nadata_pred.obs[\"condition\"] = np.tile(\"predicted\", len(adata_pred))\nadata_pred.var_names = adata_source.var_names.tolist()\nall_adata = ground_truth.concatenate(adata_pred)\nsc.tl.pca(all_adata)\nsc.pl.pca(all_adata, color=[\"condition\"])\nsc.pl.violin(all_adata, keys=\"Reg3b\", groupby=\"condition\")", "... storing 'barcode' as categorical\n... storing 'cell_label' as categorical\n... 
storing 'condition' as categorical\n" ] ], [ [ "### Get batch-corrected data in output space for all data", "_____no_output_____" ], [ "`target` can be the batch or the condition that you want your data to be mapped to.", "_____no_output_____" ] ], [ [ "batch_removed = model.predict(x=adata.X, y=adata.obs[condition_key].tolist(),\n                              target=\"Control\")", "_____no_output_____" ], [ "corrected = sc.AnnData(batch_removed)\ncorrected.obs[\"cell_label\"] = adata.obs[\"cell_label\"].tolist()\ncorrected.obs[condition_key] = adata.obs[condition_key].tolist()", "_____no_output_____" ], [ "sc.pp.neighbors(corrected)\nsc.tl.umap(corrected)", "WARNING: You’re trying to run this on 2000 dimensions of `.X`, if you really want this, set `use_rep='X'`.\n    Falling back to preprocessing with `sc.pp.pca` and default params.\n" ], [ "sc.pl.umap(corrected, color=[condition_key, \"cell_label\"])", "... storing 'cell_label' as categorical\n... storing 'condition' as categorical\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
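The record above trains a conditional autoencoder whose `MMD_loss` term penalizes distribution mismatch between conditions in a designated latent layer; the actual implementation lives inside the notebook's `model` object and is not part of this record. As a rough, self-contained sketch of the idea only (the helper names `gaussian_kernel` and `mmd2` are hypothetical, not the notebook's API), a biased estimate of the squared maximum mean discrepancy with a Gaussian kernel can be written as:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of x and rows of y
    d2 = (np.sum(x**2, axis=1)[:, None]
          + np.sum(y**2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased V-statistic estimate of the squared MMD between samples x and y
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 10))  # e.g. latent codes under condition A
y = rng.normal(0.5, 1.0, size=(200, 10))  # e.g. latent codes under condition B
print(mmd2(x, y))  # grows as the two latent distributions drift apart
```

Minimizing such a term over latent codes drawn from different conditions is what drives the `MMD_loss` column in the log above toward zero during training.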
e725d3576d31017f6f5db5627817dffab3e6ec14
46,191
ipynb
Jupyter Notebook
Criando Agrupamentos.ipynb
Cesarso/Python-para-DataScience
6c0ff82240d439e9dc89cb68e3e2a87473435174
[ "MIT" ]
null
null
null
Criando Agrupamentos.ipynb
Cesarso/Python-para-DataScience
6c0ff82240d439e9dc89cb68e3e2a87473435174
[ "MIT" ]
null
null
null
Criando Agrupamentos.ipynb
Cesarso/Python-para-DataScience
6c0ff82240d439e9dc89cb68e3e2a87473435174
[ "MIT" ]
null
null
null
28.407749
350
0.348336
[ [ [ "# Analysis Report VII", "_____no_output_____" ], [ "# Creating Groupings", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "dados = pd.read_csv('dados/aluguel_residencial.csv', sep = ';')", "_____no_output_____" ], [ "dados.head(10)", "_____no_output_____" ], [ "dados['Valor'].mean()", "_____no_output_____" ], [ "# what is the average value per neighborhood?", "_____no_output_____" ], [ "#bairros = ['Barra da Tijuca', 'Copacabana', 'Jardim Botânico', 'Leblon', 'Ramos', 'Grajaú', 'Tijuca']\n\n#selecao = dados['Bairro'].isin(bairros)\n\n#dados = dados[selecao]", "_____no_output_____" ], [ "dados['Bairro'].drop_duplicates()", "_____no_output_____" ], [ "grupo_bairro = dados.groupby('Bairro')", "_____no_output_____" ], [ "type(grupo_bairro)", "_____no_output_____" ], [ "grupo_bairro.groups", "_____no_output_____" ], [ "for bairro, data in grupo_bairro:\n    print(bairro)", "Barra da Tijuca\nCachambi\nCentro\nCopacabana\nGrajaú\nHigienópolis\nJardim Botânico\nLins de Vasconcelos\nRamos\n" ], [ "for bairro, data in grupo_bairro:\n    print(type(data))", "<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n" ], [ "for bairro, data in grupo_bairro:\n    print('{} -> {}'.format(bairro, data.Valor.mean()))   # computes the mean value for each neighborhood", "Barra da Tijuca -> 22000.0\nCachambi -> 1300.0\nCentro -> 800.0\nCopacabana -> 1850.0\nGrajaú -> 1500.0\nHigienópolis -> 800.0\nJardim Botânico -> 7000.0\nLins de Vasconcelos -> 1500.0\nRamos -> 1000.0\n" ], [ "grupo_bairro[['Valor', 'Condominio']].mean().round(2)\n", "_____no_output_____" ], [ "# Exercise", "_____no_output_____" ], [ "import pandas as pd\nalunos = pd.DataFrame({'Nome': ['Ary', 'Cátia', 'Denis', 'Beto', 'Bruna', 'Dara', 'Carlos', 'Alice'], \n                       'Sexo': ['M', 'F', 'M', 'M', 'F', 'F', 'M', 'F'], \n                       'Idade': [15, 27, 56, 32, 42, 21, 19, 35], \n                       'Notas': [7.5, 2.5, 5.0, 10, 8.2, 7, 6, 5.6], \n                       'Aprovado': [True, False, False, True, True, True, False, False]}, \n                       columns = ['Nome', 'Idade', 'Sexo', 'Notas', 'Aprovado'])\nalunos", "_____no_output_____" ], [ "sexo = alunos.groupby('Sexo')\nsexo = pd.DataFrame(sexo['Notas'].mean().round(2))\nsexo.columns = ['Notas Médias']\nsexo", "_____no_output_____" ] ], [ [ "# Descriptive Statistics", "_____no_output_____" ] ], [ [ "grupo_bairro['Valor'].describe().round(2)", "_____no_output_____" ], [ "# std is the standard deviation... min is the minimum... 25% is the 1st quartile... 50% is the median... 75% is the 3rd quartile", "_____no_output_____" ], [ "grupo_bairro['Valor'].aggregate(['min', 'max', 'sum']).rename(columns = {'min': 'Minimo', 'max': 'Maximo'})", "_____no_output_____" ], [ "\nimport matplotlib.pyplot as plt\n", "_____no_output_____" ], [ "!conda install notebook ipykernel\n", "_____no_output_____" ], [ "!ipython kernel install --user", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rc('figure', figsize = (20, 10))", "_____no_output_____" ], [ "# Creating groupings with the groupby() method makes it much easier to summarize the information in a DataFrame. 
\n# The describe() method applied to a grouping produces a set of descriptive statistics that is very useful in the \n# data analysis process, as in the example below:", "_____no_output_____" ], [ "precos = pd.DataFrame([['Feira', 'Cebola', 2.5], \n                       ['Mercado', 'Cebola', 1.99], \n                       ['Supermercado', 'Cebola', 1.69], \n                       ['Feira', 'Tomate', 4], \n                       ['Mercado', 'Tomate', 3.29], \n                       ['Supermercado', 'Tomate', 2.99], \n                       ['Feira', 'Batata', 4.2], \n                       ['Mercado', 'Batata', 3.99], \n                       ['Supermercado', 'Batata', 3.69]], \n                       columns = ['Local', 'Produto', 'Preço'])\nprecos", "_____no_output_____" ], [ "produtos = precos.groupby('Produto')\nprodutos.describe().round(2)", "_____no_output_____" ], [ "estatisticas = ['mean', 'std', 'min', 'max']\nnomes = {'mean': 'Média', 'std': 'Desvio Padrão','min': 'Mínimo', 'max': 'Máximo'}\nprodutos['Preço'].aggregate(estatisticas).rename(columns = nomes).round(2)", "_____no_output_____" ], [ "produtos['Preço'].agg(['mean', 'std']) ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
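The record above walks through pandas `groupby()` on a rentals DataFrame loaded from a CSV that is not included in this record. A minimal self-contained version of the same pattern, using made-up rows (`rentals` and `by_bairro` are hypothetical names standing in for the notebook's `dados` and `grupo_bairro`), looks like this:

```python
import pandas as pd

# Hypothetical sample standing in for dados/aluguel_residencial.csv
rentals = pd.DataFrame({
    'Bairro': ['Copacabana', 'Copacabana', 'Centro', 'Centro', 'Ramos'],
    'Valor': [1850.0, 2100.0, 800.0, 950.0, 1000.0],
})

by_bairro = rentals.groupby('Bairro')['Valor']
print(by_bairro.mean().round(2))                   # one mean per neighborhood
print(by_bairro.aggregate(['min', 'max', 'sum']))  # several statistics at once
print(by_bairro.describe().round(2))               # count, mean, std, quartiles
```

Calling `describe()` on a grouped column returns exactly the count/mean/std/min/quartiles/max table that the notebook inspects per neighborhood.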
e725d4b50959ab4cfd8b1322bab679abab6a39ea
37,008
ipynb
Jupyter Notebook
TODO_Lab1_Part1_TensorFlow.ipynb
rajadavidh/lab-introtodeeplearning
a298df4fee1c10e4462015e06cd229b99f75d65e
[ "MIT" ]
null
null
null
TODO_Lab1_Part1_TensorFlow.ipynb
rajadavidh/lab-introtodeeplearning
a298df4fee1c10e4462015e06cd229b99f75d65e
[ "MIT" ]
null
null
null
TODO_Lab1_Part1_TensorFlow.ipynb
rajadavidh/lab-introtodeeplearning
a298df4fee1c10e4462015e06cd229b99f75d65e
[ "MIT" ]
null
null
null
39.793548
747
0.554096
[ [ [ "<a href=\"https://colab.research.google.com/github/rajadavidh/lab-introtodeeplearning/blob/master/TODO_Lab1_Part1_TensorFlow.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "<table align=\"center\">\n <td align=\"center\"><a target=\"_blank\" href=\"http://introtodeeplearning.com\">\n <img src=\"http://introtodeeplearning.com/images/colab/mit.png\" style=\"padding-bottom:5px;\" />\n Visit MIT Deep Learning</a></td>\n <td align=\"center\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab1/Part1_TensorFlow.ipynb\">\n <img src=\"http://introtodeeplearning.com/images/colab/colab.png?v2.0\" style=\"padding-bottom:5px;\" />Run in Google Colab</a></td>\n <td align=\"center\"><a target=\"_blank\" href=\"https://github.com/aamini/introtodeeplearning/blob/master/lab1/Part1_TensorFlow.ipynb\">\n <img src=\"http://introtodeeplearning.com/images/colab/github.png\" height=\"70px\" style=\"padding-bottom:5px;\" />View Source on GitHub</a></td>\n</table>\n\n# Copyright Information\n", "_____no_output_____" ] ], [ [ "# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.\n# \n# Licensed under the MIT License. You may not use this file except in compliance\n# with the License. Use and/or modification of this code outside of 6.S191 must\n# reference:\n#\n# © MIT 6.S191: Introduction to Deep Learning\n# http://introtodeeplearning.com\n#", "_____no_output_____" ] ], [ [ "# Lab 1: Intro to TensorFlow and Music Generation with RNNs\n\nIn this lab, you'll get exposure to using TensorFlow and learn how it can be used for solving deep learning tasks. Go through the code and run each cell. Along the way, you'll encounter several ***TODO*** blocks -- follow the instructions to fill them out before running those cells and continuing.\n\n\n# Part 1: Intro to TensorFlow\n\n## 0.1 Install TensorFlow\n\nTensorFlow is a software library extensively used in machine learning. Here we'll learn how computations are represented and how to define a simple neural network in TensorFlow. For all the labs in 6.S191 2020, we'll be using the latest version of TensorFlow, TensorFlow 2, which affords great flexibility and the ability to imperatively execute operations, just like in Python. You'll notice that TensorFlow 2 is quite similar to Python in its syntax and imperative execution. 
Let's install TensorFlow and a couple of dependencies.\n", "_____no_output_____" ] ], [ [ "%tensorflow_version 2.x\nimport tensorflow as tf\n\n# Download and import the MIT 6.S191 package\n!pip install mitdeeplearning\nimport mitdeeplearning as mdl\n\nimport numpy as np\nimport matplotlib.pyplot as plt", "Collecting mitdeeplearning\n\u001b[?25l  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)\n\u001b[K     |████████████████████████████████| 2.1MB 11.9MB/s \n\u001b[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.5)\nRequirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)\nRequirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.2)\nRequirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)\nRequirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)\nBuilding wheels for collected packages: mitdeeplearning\n  Building wheel for mitdeeplearning (setup.py) ... \u001b[?25l\u001b[?25hdone\n  Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114585 sha256=d3477985510eba9b1634f2f2eee04c3b956bb86b930d1b8a5143b4da93cd5ddc\n  Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74\nSuccessfully built mitdeeplearning\nInstalling collected packages: mitdeeplearning\nSuccessfully installed mitdeeplearning-0.1.2\n" ], [ "sport = tf.constant(\"Tennis\", tf.string)\nnumber = tf.constant(1.41421356237, tf.float64)\n\nprint(\"`sport` is a {}-d Tensor\".format(tf.rank(sport).numpy()))\nprint(\"`number` is a {}-d Tensor\".format(tf.rank(number).numpy()))", "`sport` is a 0-d Tensor\n`number` is a 0-d Tensor\n" ] ], [ [ "## 1.1 Why is TensorFlow called TensorFlow?\n\nTensorFlow is called 'TensorFlow' because it handles the flow (node/mathematical operation) of **Tensors, which are data structures that you can think of as multi-dimensional arrays**. Tensors are represented as n-dimensional arrays of base datatypes such as a string or integer -- they provide a way to generalize vectors and matrices to higher dimensions.\n\nThe ```shape``` of a Tensor defines its number of dimensions and the size of each dimension. 
The ```rank``` of a Tensor provides the number of dimensions (n-dimensions) -- you can also think of this as the Tensor's order or degree.\n\nLet's first look at 0-d Tensors, of which a scalar is an example:", "_____no_output_____" ], [ "Vectors and lists can be used to create 1-d Tensors:", "_____no_output_____" ] ], [ [ "sports = tf.constant([\"Tennis\", \"Basketball\"], tf.string)\nnumbers = tf.constant([3.141592, 1.414213, 2.71821], tf.float64)\n\nprint(\"`sports` is a {}-d Tensor with shape: {}\".format(tf.rank(sports).numpy(), tf.shape(sports)))\nprint(\"`numbers` is a {}-d Tensor with shape: {}\".format(tf.rank(numbers).numpy(), tf.shape(numbers)))", "`sports` is a 1-d Tensor with shape: [2]\n`numbers` is a 1-d Tensor with shape: [3]\n" ] ], [ [ "Next we consider creating 2-d (i.e., matrices) and higher-rank Tensors. For example, in future labs involving image processing and computer vision, we will use 4-d Tensors. Here the dimensions correspond to the number of example images in our batch, image height, image width, and the number of color channels.", "_____no_output_____" ] ], [ [ "### Defining higher-order Tensors ###\n\n'''TODO: Define a 2-d Tensor'''\n# matrix = # TODO\n\n# Create a new tensor to represent a 2 x 3 matrix\nmatrix = tf.constant([[1, 2, 3], [4, 5, 6]], tf.int32)\n\nassert isinstance(matrix, tf.Tensor), \"matrix must be a tf Tensor object\"\nassert tf.rank(matrix).numpy() == 2", "_____no_output_____" ], [ "'''TODO: Define a 4-d Tensor.'''\n# Use tf.zeros to initialize a 4-d Tensor of zeros with size 10 x 256 x 256 x 3. \n# You can think of this as 10 images where each image is RGB 256 x 256.\n# images = # TODO\n\n# Create a new tensor to represent 10 images\n# Each image is 256 x 256 with 3 color channels: Red, Green, Blue\nimages = tf.zeros([10, 256, 256, 3], tf.int32)\n\nassert isinstance(images, tf.Tensor), \"matrix must be a tf Tensor object\"\nassert tf.rank(images).numpy() == 4, \"matrix must be of rank 4\"\nassert tf.shape(images).numpy().tolist() == [10, 256, 256, 3], \"matrix is incorrect shape\"", "_____no_output_____" ] ], [ [ "As you have seen, the ```shape``` of a Tensor provides the number of elements in each Tensor dimension. The ```shape``` is quite useful, and we'll use it often. You can also use slicing to access subtensors within a higher-rank Tensor:", "_____no_output_____" ] ], [ [ "row_vector = matrix[1]\ncolumn_vector = matrix[:,2]\nscalar = matrix[1, 2]\n\nprint(\"`row_vector`: {}\".format(row_vector.numpy()))\nprint(\"`column_vector`: {}\".format(column_vector.numpy()))\nprint(\"`scalar`: {}\".format(scalar.numpy()))", "`row_vector`: [4 5 6]\n`column_vector`: [3 6]\n`scalar`: 6\n" ] ], [ [ "## 1.2 Computations on Tensors\n\nA convenient way to think about and visualize computations in TensorFlow is in terms of graphs. We can define this graph in terms of Tensors, which hold data, and the mathematical operations that act on these Tensors in some order. 
Let's look at a simple example, and define this computation using TensorFlow:\n\n![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/add-graph.png)", "_____no_output_____" ] ], [ [ "# Create the nodes in the graph, and initialize values\na = tf.constant(15)\nb = tf.constant(61)\n\n# Add them!\nc1 = tf.add(a,b)\nc2 = a + b # TensorFlow overrides the \"+\" operation so that it is able to act on Tensors\nprint(c1)\nprint(c2)", "tf.Tensor(76, shape=(), dtype=int32)\ntf.Tensor(76, shape=(), dtype=int32)\n" ] ], [ [ "Notice how we've created a computation graph consisting of TensorFlow operations, and how the output is a Tensor with value 76 -- TensorFlow has built the graph of operations, executed them, and given us back the result.\n\nNow let's consider a slightly more complicated example:\n\n![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/computation-graph.png)\n\nHere, we take two inputs, `a, b`, and compute an output `e`. Each node in the graph represents an operation that takes some input, does some computation, and passes its output to another node.\n\nLet's define a simple function in TensorFlow to construct this computation:", "_____no_output_____" ] ], [ [ "### Defining Tensor computations ###\n\n# Construct a simple computation function\ndef func(a,b):\n '''TODO: Define the operation for c, d, e (use tf.add, tf.subtract, tf.multiply).'''\n # c = # TODO\n # d = # TODO\n # e = # TODO\n\n c = tf.add(a,b)\n d = tf.subtract(b,1)\n e = tf.multiply(c,d)\n\n return e", "_____no_output_____" ] ], [ [ "Now, we can call this function to execute the computation graph given some inputs `a,b`:", "_____no_output_____" ] ], [ [ "# Consider example values for a,b\na, b = 1.5, 2.5\n# Execute the computation\ne_out = func(a,b)\nprint(e_out)", "tf.Tensor(6.0, shape=(), dtype=float32)\n" ] ], [ [ "Notice how our output is a Tensor with value defined by the output of the computation, and that the output has no shape as it is a single scalar value.", "_____no_output_____" ], [ "## 1.3 Neural networks in TensorFlow\nWe can also define neural networks in TensorFlow. TensorFlow uses a high-level API called [Keras](https://www.tensorflow.org/guide/keras) that provides a powerful, intuitive framework for building and training deep learning models.\n\nLet's first consider the example of a simple perceptron defined by just one dense layer: $ y = \sigma(Wx + b)$, where $W$ represents a matrix of weights, $b$ is a bias, $x$ is the input, $\sigma$ is the sigmoid activation function, and $y$ is the output. We can also visualize this operation using a graph: \n\n![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/computation-graph-2.png)\n\nTensors can flow through abstract types called [```Layers```](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) -- the building blocks of neural networks. ```Layers``` implement common neural network operations, and are used to update weights, compute losses, and define inter-layer connectivity.
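As a quick aside (a minimal sketch, not one of the lab's graded cells -- the layer size and input below are arbitrary choices), a built-in ```Layer``` can be called directly on a Tensor, and calling it for the first time builds its weights:

```python
import tensorflow as tf

# A built-in fully connected layer with 4 output units (arbitrary choice)
dense = tf.keras.layers.Dense(4)

# Calling the layer builds it (creating its weight matrix W and bias b)
# and runs the forward pass on the input
x = tf.ones((1, 3))  # a batch of 1 example with 3 features
y = dense(x)

print(y.shape)             # (1, 4)
print(len(dense.weights))  # 2 -- the kernel W and the bias b
```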
We will first define a ```Layer``` to implement the simple perceptron defined above.", "_____no_output_____" ] ], [ [ "### Defining a network Layer ###\n\n# n_output_nodes: number of output nodes\n# input_shape: shape of the input\n# x: input to the layer\n\nclass OurDenseLayer(tf.keras.layers.Layer):\n def __init__(self, n_output_nodes):\n super(OurDenseLayer, self).__init__()\n self.n_output_nodes = n_output_nodes\n\n def build(self, input_shape):\n d = int(input_shape[-1])\n # Define and initialize parameters: a weight matrix W and bias b\n # Note that parameter initialization is random!\n self.W = self.add_weight(\"weight\", shape=[d, self.n_output_nodes]) # note the dimensionality\n self.b = self.add_weight(\"bias\", shape=[1, self.n_output_nodes]) # note the dimensionality\n\n def call(self, x):\n '''TODO: define the operation for z (hint: use tf.matmul)'''\n # z = # TODO\n\n # Compute the dense layer operation z following the graph diagram above\n z = tf.add(tf.matmul(x, self.W), self.b)\n\n '''TODO: define the operation for out (hint: use tf.sigmoid)'''\n # y = # TODO\n\n # Apply the activation function y following the graph diagram above\n y = tf.sigmoid(z)\n return y\n\n# Since layer parameters are initialized randomly, we will set a random seed for reproducibility\ntf.random.set_seed(1)\nlayer = OurDenseLayer(3)\nlayer.build((1,2))\nx_input = tf.constant([[1,2.]], shape=(1,2))\ny = layer.call(x_input)\n\n# test the output!\nprint(y.numpy())\nmdl.lab1.test_custom_dense_layer_output(y)", "[[0.2697859 0.45750412 0.66536945]]\n[PASS] test_custom_dense_layer_output\n" ] ], [ [ "Conveniently, TensorFlow has defined a number of ```Layers``` that are commonly used in neural networks, for example a [```Dense```](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?version=stable). Now, instead of using a single ```Layer``` to define our simple neural network, we'll use the [`Sequential`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Sequential) model from Keras and a single [`Dense` ](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense) layer to define our network. With the `Sequential` API, you can readily create neural networks by stacking together layers like building blocks. ", "_____no_output_____" ] ], [ [ "### Defining a neural network using the Sequential API ###\n\n# Import relevant packages\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import Dense\n\n# Define the number of outputs\nn_output_nodes = 3\n\n# First define the model \nmodel = Sequential()\n\n'''TODO: Define a dense (fully connected) layer to compute z'''\n# Remember: dense layers are defined by the parameters W and b!\n# You can read more about the initialization of W and b in the TF documentation :) \n# https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?version=stable\n# dense_layer = # TODO\n\n# Create the dense layer using TensorFlow's built-in Sequential API\n# dense_layer = Dense(n_output_nodes, activation='sigmoid', use_bias=True, kernel_initializer=W, bias_initializer=b) # Wrong\ndense_layer = Dense(n_output_nodes, activation='sigmoid')\n\n# Add the dense layer to the model\nmodel.add(dense_layer)\n", "_____no_output_____" ] ], [ [ "That's it! We've defined our model using the Sequential API.
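As a side note (a sketch under the same setup), the `Sequential` constructor can equivalently take the list of layers up front instead of repeated `add` calls:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# One-shot construction of the same single-layer model as above
model_alt = Sequential([
    Dense(3, activation='sigmoid'),  # 3 output nodes, as in the lab
])
```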
Now, we can test it out using an example input:", "_____no_output_____" ] ], [ [ "# Test model with example input\nx_input = tf.constant([[1,2.]], shape=(1,2))\n\n'''TODO: feed input into the model and predict the output!'''\n# model_output = # TODO\n\n# Feed the input into the model\n# Instructor's solution: model_output = model(x_input).numpy()\nmodel_output = model.predict(x_input)\nprint(model_output)", "[[0.5607363 0.6566898 0.1249697]]\n" ] ], [ [ "In addition to defining models using the `Sequential` API, we can also define neural networks by directly subclassing the [`Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=stable) class, which groups layers together to enable model training and inference. The `Model` class captures what we refer to as a \"model\" or as a \"network\". Using Subclassing, we can create a class for our model, and then define the forward pass through the network using the `call` function. **Subclassing affords the flexibility to define custom layers, custom training loops, custom activation functions, and custom models**. Let's define the same neural network as above now using Subclassing rather than the `Sequential` model.", "_____no_output_____" ] ], [ [ "### Defining a model using subclassing ###\n\nfrom tensorflow.keras import Model\nfrom tensorflow.keras.layers import Dense\n\nclass SubclassModel(tf.keras.Model):\n\n # In __init__, we define the Model's layers\n def __init__(self, n_output_nodes):\n super(SubclassModel, self).__init__()\n '''TODO: Our model consists of a single Dense layer. Define this layer.''' \n # self.dense_layer = '''TODO: Dense Layer'''\n\n # Create the dense layer object via subclassing\n self.dense_layer = Dense(n_output_nodes, activation='sigmoid')\n\n # In the call function, we define the Model's forward pass.\n def call(self, inputs):\n return self.dense_layer(inputs)", "_____no_output_____" ] ], [ [ "Just like the model we built using the `Sequential` API, let's test out our `SubclassModel` using an example input.\n\n", "_____no_output_____" ] ], [ [ "n_output_nodes = 3\nmodel = SubclassModel(n_output_nodes)\n\nx_input = tf.constant([[1,2.]], shape=(1,2))\n\nprint(model.call(x_input))", "tf.Tensor([[0.6504887 0.47828162 0.8373661 ]], shape=(1, 3), dtype=float32)\n" ] ], [ [ "Importantly, Subclassing affords us a lot of flexibility to define custom models. For example, we can use boolean arguments in the `call` function to specify different network behaviors, such as different behaviors during training and inference. Let's suppose under some instances we want our network to simply output the input, without any perturbation.
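This mirrors the built-in `training` flag that Keras itself passes through `call` -- for instance, a `Dropout` layer only perturbs its input when `training=True` (a brief sketch for comparison, separate from the lab's cells):

```python
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 4))

print(drop(x, training=False).numpy())  # input passes through unchanged
print(drop(x, training=True).numpy())   # entries randomly zeroed (the rest rescaled)
```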
We define a boolean argument `isidentity` to control this behavior:", "_____no_output_____" ] ], [ [ "### Defining a model using subclassing and specifying custom behavior ###\n\nfrom tensorflow.keras import Model\nfrom tensorflow.keras.layers import Dense\n\nclass IdentityModel(tf.keras.Model):\n\n # As before, in __init__ we define the Model's layers\n # Since our desired behavior involves the forward pass, this part is unchanged\n def __init__(self, n_output_nodes):\n super(IdentityModel, self).__init__()\n self.dense_layer = tf.keras.layers.Dense(n_output_nodes, activation='sigmoid')\n\n '''TODO: Implement the behavior where the network outputs the input, unchanged, \n under control of the isidentity argument.'''\n def call(self, inputs, isidentity=False):\n x = self.dense_layer(inputs)\n '''TODO: Implement identity behavior'''\n # If the identity flag is set, return the input unchanged\n if isidentity:\n  return inputs\n return x", "_____no_output_____" ] ], [ [ "Let's test this behavior:", "_____no_output_____" ] ], [ [ "n_output_nodes = 3\nmodel = IdentityModel(n_output_nodes)\n\nx_input = tf.constant([[1,2.]], shape=(1,2))\n'''TODO: pass the input into the model and call with and without the input identity option.'''\n# out_activate = # TODO\n# out_identity = # TODO\n\n# Call the model with the activation (default) and with the identity option\nout_activate = model.call(x_input)\nout_identity = model.call(x_input, isidentity=True)\n\nprint(\"Network output with activation: {}; network identity output: {}\".format(out_activate.numpy(), out_identity.numpy()))", "_____no_output_____" ] ], [ [ "Now that we have learned how to define `Layers` as well as neural networks in TensorFlow using both the `Sequential` and Subclassing APIs, we're ready to turn our attention to how to actually implement network training with backpropagation.", "_____no_output_____" ], [ "## 1.4 Automatic differentiation in TensorFlow\n\n[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)\nis one of the most important parts of TensorFlow and is the backbone of training with \n[backpropagation](https://en.wikipedia.org/wiki/Backpropagation). We will use the TensorFlow GradientTape [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape?version=stable) to trace operations for computing gradients later. \n\nWhen a forward pass is made through the network, all forward-pass operations get recorded to a \"tape\"; then, to compute the gradient, the tape is played backwards. By default, the tape is discarded after it is played backwards; this means that a particular `tf.GradientTape` can only\ncompute one gradient, and subsequent calls throw a runtime error. However, we can compute multiple gradients over the same computation by creating a ```persistent``` gradient tape (a brief sketch of this appears at the end of this section). \n\nFirst, we will look at how we can compute gradients using GradientTape and access them for computation. We define the simple function $ y = x^2$ and compute the gradient:", "_____no_output_____" ] ], [ [ "### Gradient computation with GradientTape ###\n\n# y = x^2\n# Example: x = 3.0\nx = tf.Variable(3.0)\n\n# Initiate the gradient tape\nwith tf.GradientTape() as tape:\n # Define the function\n y = x * x\n# Access the gradient -- derivative of y with respect to x\ndy_dx = tape.gradient(y, x)\n\nassert dy_dx.numpy() == 6.0", "_____no_output_____" ] ], [ [ "In training neural networks, we use differentiation and stochastic gradient descent (SGD) to optimize a loss function. Now that we have a sense of how `GradientTape` can be used to compute and access derivatives, we will look at an example where we use automatic differentiation and SGD to find the minimum of $L=(x-x_f)^2$.
Here $x_f$ is a variable for a desired value we are trying to optimize for; $L$ represents a loss that we are trying to minimize. While we can clearly solve this problem analytically ($x_{min}=x_f$), considering how we can compute this using `GradientTape` sets us up nicely for future labs where we use gradient descent to optimize entire neural network losses.", "_____no_output_____" ] ], [ [ "### Function minimization with automatic differentiation and SGD ###\n\n# Initialize a random value for our initial x\nx = tf.Variable([tf.random.normal([1])])\nprint(\"Initializing x={}\".format(x.numpy()))\n\nlearning_rate = 1e-2 # learning rate for SGD\nhistory = []\n# Define the target value\nx_f = 4\n\n# We will run SGD for a number of iterations. At each iteration, we compute the loss, \n# compute the derivative of the loss with respect to x, and perform the SGD update.\nfor i in range(500):\n with tf.GradientTape() as tape:\n '''TODO: define the loss as described above'''\n # loss = # TODO\n\n # The L2 loss between the current value of x and the target x_f\n loss = (x - x_f)**2\n\n # loss minimization using gradient tape\n grad = tape.gradient(loss, x) # compute the derivative of the loss with respect to x\n new_x = x - learning_rate*grad # sgd update\n x.assign(new_x) # update the value of x\n history.append(x.numpy()[0])\n\n# Plot the evolution of x as we optimize towards x_f!\nplt.plot(history)\nplt.plot([0, 500],[x_f,x_f])\nplt.legend(('Predicted', 'True'))\nplt.xlabel('Iteration')\nplt.ylabel('x value')", "_____no_output_____" ] ], [ [ "`GradientTape` provides an extremely flexible framework for automatic differentiation. In order to backpropagate errors through a neural network, we track forward passes on the Tape, use this information to determine the gradients, and then use these gradients for optimization using SGD.", "_____no_output_____" ] ] ]
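Following up on the note above about ```persistent``` gradient tapes (a minimal sketch, assuming the same TensorFlow 2.x setup as this lab):

```python
import tensorflow as tf

x = tf.Variable(3.0)

# persistent=True lets us call tape.gradient more than once
with tf.GradientTape(persistent=True) as tape:
    y = x * x    # y = x^2
    z = y * y    # z = x^4

dy_dx = tape.gradient(y, x)  # 2x   = 6.0
dz_dx = tape.gradient(z, x)  # 4x^3 = 108.0
del tape  # release the tape's resources once done

print(dy_dx.numpy(), dz_dx.numpy())
```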
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e725e8ad8567ed595e94c00a3b1f41f7909b2be0
1,907
ipynb
Jupyter Notebook
testnotebook.ipynb
sambot83/EmptyNode
59b0fcd2758279f6b25d336b3571110aeada5094
[ "Apache-2.0" ]
null
null
null
testnotebook.ipynb
sambot83/EmptyNode
59b0fcd2758279f6b25d336b3571110aeada5094
[ "Apache-2.0" ]
null
null
null
testnotebook.ipynb
sambot83/EmptyNode
59b0fcd2758279f6b25d336b3571110aeada5094
[ "Apache-2.0" ]
null
null
null
21.188889
229
0.428422
[ [ [ "<a href=\"https://colab.research.google.com/github/sambot83/EmptyNode/blob/master/testnotebook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "Test", "_____no_output_____" ] ], [ [ "a=\"test\"", "_____no_output_____" ], [ "print(a)", "test\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e725f1bf010cc33fe23b864162144f6ece148338
172,204
ipynb
Jupyter Notebook
module1/s4_scrape.ipynb
jeanne-etoundi/tac
3dd80c793a2de0e99cfb9dd49087b9f15a6084d8
[ "MIT" ]
null
null
null
module1/s4_scrape.ipynb
jeanne-etoundi/tac
3dd80c793a2de0e99cfb9dd49087b9f15a6084d8
[ "MIT" ]
null
null
null
module1/s4_scrape.ipynb
jeanne-etoundi/tac
3dd80c793a2de0e99cfb9dd49087b9f15a6084d8
[ "MIT" ]
null
null
null
56.257432
1,070
0.754692
[ [ [ "# Scraping: récupération des PDFs des bulletins AVB", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "import os\nimport re\nimport time\nimport sys\nimport requests", "_____no_output_____" ] ], [ [ "## Créer un répertoire pour stocker tous les fichiers PDF", "_____no_output_____" ] ], [ [ "pdf_path = '../data/pdf'\n# Créer le dossier s'il n'existe pas\nif not os.path.exists(pdf_path):\n os.mkdir(pdf_path)", "_____no_output_____" ] ], [ [ "## Récupérer les URLs des pdf de tous les bulletins communaux", "_____no_output_____" ], [ "https://archives.bruxelles.be/bulletins/date", "_____no_output_____" ] ], [ [ "root_url = \"https://archives.bruxelles.be/bulletins/date\"\nresp = requests.get(root_url)\nprint(f\"Status: {resp.status_code}\")\nprint(f\"Encoding: {resp.encoding}\")\nhtml = resp.text\nprint(f\"Text length: {len(html)}\")\n\npattern = r\"https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/.*\\.pdf\"\nurls = re.findall(pattern, html)\nprint(f\"{len(urls)} PDF files found\")", "Status: 200\nEncoding: utf-8\nText length: 821446\n2833 PDF files found\n" ], [ "# Impression des 10 premières URLs\nprint(urls[:10])", "['https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1847_Tome_I1_Part_1.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1847_Tome_I1_Part_2.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1847_Tome_I1_Part_3.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1847_Tome_I1_Part_4.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1847_Tome_I1_Part_5.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1848_Tome_I1_Part_1.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1848_Tome_I1_Part_2.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1848_Tome_I1_Part_3.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1849_Tome_I1_Part_1.pdf', 'https://archief.brussel.be/Colossus/BulletinsCommunaux/Bulletins/Documents/Bxl_1849_Tome_I1_Part_2.pdf']\n" ] ], [ [ "## Télécharger tous les PDFs", "_____no_output_____" ] ], [ [ "start_offset = 0\nend_offset = len(urls)\nfor url in urls[start_offset:end_offset]:\n filename = url.split(\"/\")[-1]\n if not os.path.exists(os.path.join(pdf_path, filename)):\n print(f\"Downloading {filename}...\")\n start_time = time.time()\n response = requests.get(url)\n print(f\" done in {(time.time() - start_time):.1f} seconds\")\n output_file = open(os.path.join(pdf_path, filename), \"wb\")\n output_file.write(response.content)\n else:\n print(f\"{filename} already downloaded\")\nprint(\"Done\")", "Bxl_1847_Tome_I1_Part_1.pdf already downloaded\nBxl_1847_Tome_I1_Part_2.pdf already downloaded\nBxl_1847_Tome_I1_Part_3.pdf already downloaded\nBxl_1847_Tome_I1_Part_4.pdf already downloaded\nBxl_1847_Tome_I1_Part_5.pdf already downloaded\nBxl_1848_Tome_I1_Part_1.pdf already downloaded\nBxl_1848_Tome_I1_Part_2.pdf already downloaded\nBxl_1848_Tome_I1_Part_3.pdf already downloaded\nBxl_1849_Tome_I1_Part_1.pdf already downloaded\nBxl_1849_Tome_I1_Part_2.pdf already downloaded\nBxl_1849_Tome_I1_Part_3.pdf already downloaded\nBxl_1849_Tome_I1_Part_4.pdf already downloaded\nBxl_1849_Tome_I1_Part_5.pdf already downloaded\nBxl_1849_Tome_II1_Part_1.pdf already downloaded\nBxl_1849_Tome_II1_Part_2.pdf already 
downloaded\nBxl_1849_Tome_II1_Part_3.pdf already downloaded\nBxl_1849_Tome_II1_Part_4.pdf already downloaded\nBxl_1849_Tome_II1_Part_5.pdf already downloaded\nBxl_1849_Tome_II1_Part_6.pdf already downloaded\nBxl_1849_Tome_II1_Part_7.pdf already downloaded\nBxl_1850_Tome_I1_Part_1.pdf already downloaded\nBxl_1850_Tome_I1_Part_2.pdf already downloaded\nBxl_1850_Tome_I1_Part_3.pdf already downloaded\nBxl_1850_Tome_I1_Part_4.pdf already downloaded\nBxl_1850_Tome_II1_Part_1.pdf already downloaded\nBxl_1850_Tome_II1_Part_2.pdf already downloaded\nBxl_1850_Tome_II1_Part_3.pdf already downloaded\nBxl_1850_Tome_II1_Part_4.pdf already downloaded\nBxl_1850_Tome_II1_Part_5.pdf already downloaded\nBxl_1850_Tome_II1_Part_6.pdf already downloaded\nBxl_1850_Tome_II1_Part_7.pdf already downloaded\nBxl_1851_Tome_I1_Part_1.pdf already downloaded\nBxl_1851_Tome_I1_Part_2.pdf already downloaded\nBxl_1851_Tome_I1_Part_3.pdf already downloaded\nBxl_1851_Tome_I1_Part_4.pdf already downloaded\nBxl_1851_Tome_I1_Part_5.pdf already downloaded\nBxl_1851_Tome_II1_Part_1.pdf already downloaded\nBxl_1851_Tome_II1_Part_2.pdf already downloaded\nBxl_1851_Tome_II1_Part_3.pdf already downloaded\nBxl_1851_Tome_II1_Part_4.pdf already downloaded\nBxl_1851_Tome_II1_Part_5.pdf already downloaded\nBxl_1851_Tome_II1_Part_6.pdf already downloaded\nBxl_1851_Tome_II1_Part_7.pdf already downloaded\nBxl_1852_Tome_I1_Part_1.pdf already downloaded\nBxl_1852_Tome_I1_Part_2.pdf already downloaded\nBxl_1852_Tome_I1_Part_3.pdf already downloaded\nBxl_1852_Tome_I1_Part_4.pdf already downloaded\nBxl_1852_Tome_I1_Part_5.pdf already downloaded\nBxl_1852_Tome_II1_Part_1.pdf already downloaded\nBxl_1852_Tome_II1_Part_2.pdf already downloaded\nBxl_1852_Tome_II1_Part_3.pdf already downloaded\nBxl_1852_Tome_II1_Part_4.pdf already downloaded\nBxl_1852_Tome_II1_Part_5.pdf already downloaded\nBxl_1854_Tome_I1_Part_1.pdf already downloaded\nBxl_1854_Tome_I1_Part_2.pdf already downloaded\nBxl_1854_Tome_I1_Part_3.pdf already downloaded\nBxl_1854_Tome_II1_Part_1.pdf already downloaded\nBxl_1854_Tome_II1_Part_2.pdf already downloaded\nBxl_1854_Tome_II1_Part_3.pdf already downloaded\nBxl_1854_Tome_II1_Part_4.pdf already downloaded\nBxl_1854_Tome_II1_Part_5.pdf already downloaded\nBxl_1854_Tome_II1_Part_6.pdf already downloaded\nBxl_1855_Tome_I1_Part_1.pdf already downloaded\nBxl_1855_Tome_I1_Part_2.pdf already downloaded\nBxl_1855_Tome_I1_Part_3.pdf already downloaded\nBxl_1855_Tome_I1_Part_4.pdf already downloaded\nBxl_1855_Tome_I1_Part_5.pdf already downloaded\nBxl_1855_Tome_I1_Part_6.pdf already downloaded\nBxl_1855_Tome_II1_Part_1.pdf already downloaded\nBxl_1855_Tome_II1_Part_2.pdf already downloaded\nBxl_1855_Tome_II1_Part_3.pdf already downloaded\nBxl_1855_Tome_II1_Part_4.pdf already downloaded\nBxl_1855_Tome_II1_Part_5.pdf already downloaded\nBxl_1855_Tome_II1_Part_6.pdf already downloaded\nBxl_1855_Tome_II1_Part_7.pdf already downloaded\nBxl_1856_Tome_I1_Part_1.pdf already downloaded\nBxl_1856_Tome_I1_Part_2.pdf already downloaded\nBxl_1856_Tome_I1_Part_3.pdf already downloaded\nBxl_1856_Tome_I1_Part_4.pdf already downloaded\nBxl_1856_Tome_I1_Part_5.pdf already downloaded\nBxl_1856_Tome_II1_Part_1.pdf already downloaded\nBxl_1856_Tome_II1_Part_2.pdf already downloaded\nBxl_1856_Tome_II1_Part_3.pdf already downloaded\nBxl_1856_Tome_II1_Part_4.pdf already downloaded\nBxl_1856_Tome_II1_Part_5.pdf already downloaded\nBxl_1856_Tome_II1_Part_6.pdf already downloaded\nBxl_1856_Tome_II1_Part_7.pdf already downloaded\nBxl_1856_Tome_II1_Part_8.pdf already 
downloaded\nBxl_1857_Tome_I1_Part_1.pdf already downloaded\nBxl_1857_Tome_I1_Part_2.pdf already downloaded\nBxl_1857_Tome_I1_Part_3.pdf already downloaded\nBxl_1857_Tome_I1_Part_4.pdf already downloaded\nBxl_1857_Tome_I1_Part_5.pdf already downloaded\nBxl_1857_Tome_II1_Part_1.pdf already downloaded\nBxl_1857_Tome_II1_Part_2.pdf already downloaded\nBxl_1857_Tome_II1_Part_3.pdf already downloaded\nBxl_1857_Tome_II1_Part_4.pdf already downloaded\nBxl_1857_Tome_II1_Part_5.pdf already downloaded\nBxl_1857_Tome_II1_Part_6.pdf already downloaded\nBxl_1858_Tome_I1_Part_1.pdf already downloaded\nBxl_1858_Tome_I1_Part_2.pdf already downloaded\nBxl_1858_Tome_I1_Part_3.pdf already downloaded\nBxl_1858_Tome_I1_Part_4.pdf already downloaded\nBxl_1858_Tome_II1_Part_1.pdf already downloaded\nBxl_1858_Tome_II1_Part_2.pdf already downloaded\nBxl_1858_Tome_II1_Part_3.pdf already downloaded\nBxl_1858_Tome_II1_Part_4.pdf already downloaded\nBxl_1858_Tome_II1_Part_5.pdf already downloaded\nBxl_1859_Tome_I1_Part_1.pdf already downloaded\nBxl_1859_Tome_I1_Part_2.pdf already downloaded\nBxl_1859_Tome_I1_Part_3.pdf already downloaded\nBxl_1859_Tome_I1_Part_4.pdf already downloaded\nBxl_1859_Tome_II1_Part_1.pdf already downloaded\nBxl_1859_Tome_II1_Part_2.pdf already downloaded\nBxl_1859_Tome_II1_Part_3.pdf already downloaded\nBxl_1859_Tome_II1_Part_4.pdf already downloaded\nBxl_1859_Tome_II1_Part_5.pdf already downloaded\nBxl_1860_Tome_I1_Part_1.pdf already downloaded\nBxl_1860_Tome_I1_Part_2.pdf already downloaded\nBxl_1860_Tome_I1_Part_3.pdf already downloaded\nBxl_1860_Tome_I1_Part_4.pdf already downloaded\nBxl_1860_Tome_II1_Part_1.pdf already downloaded\nBxl_1860_Tome_II1_Part_2.pdf already downloaded\nBxl_1860_Tome_II1_Part_3.pdf already downloaded\nBxl_1860_Tome_II1_Part_4.pdf already downloaded\nBxl_1860_Tome_II1_Part_5.pdf already downloaded\nBxl_1860_Tome_II1_Part_6.pdf already downloaded\nBxl_1861_Tome_II1_Part_1.pdf already downloaded\nBxl_1861_Tome_II1_Part_2.pdf already downloaded\nBxl_1861_Tome_II1_Part_3.pdf already downloaded\nBxl_1861_Tome_II1_Part_4.pdf already downloaded\nBxl_1861_Tome_II1_Part_5.pdf already downloaded\nBxl_1862_Tome_I1_Part_1.pdf already downloaded\nBxl_1862_Tome_I1_Part_2.pdf already downloaded\nBxl_1862_Tome_I1_Part_3.pdf already downloaded\nBxl_1862_Tome_I1_Part_4.pdf already downloaded\nBxl_1862_Tome_I1_Part_5.pdf already downloaded\nBxl_1862_Tome_II1_Part_1.pdf already downloaded\nBxl_1862_Tome_II1_Part_2.pdf already downloaded\nBxl_1862_Tome_II1_Part_3.pdf already downloaded\nBxl_1862_Tome_II1_Part_4.pdf already downloaded\nBxl_1862_Tome_II1_Part_5.pdf already downloaded\nBxl_1863_Tome_I1_Part_1.pdf already downloaded\nBxl_1863_Tome_I1_Part_2.pdf already downloaded\nBxl_1863_Tome_I1_Part_3.pdf already downloaded\nBxl_1863_Tome_II1_Part_1.pdf already downloaded\nBxl_1863_Tome_II1_Part_2.pdf already downloaded\nBxl_1863_Tome_II1_Part_3.pdf already downloaded\nBxl_1863_Tome_II1_Part_4.pdf already downloaded\nBxl_1863_Tome_II1_Part_5.pdf already downloaded\nBxl_1864_Tome_I1_Part_1.pdf already downloaded\nBxl_1864_Tome_I1_Part_2.pdf already downloaded\nBxl_1864_Tome_I1_Part_3.pdf already downloaded\nBxl_1864_Tome_I1_Part_4.pdf already downloaded\nBxl_1864_Tome_I1_Part_5.pdf already downloaded\nBxl_1865_Tome_I1_Part_1.pdf already downloaded\nBxl_1865_Tome_I1_Part_2.pdf already downloaded\nBxl_1865_Tome_I1_Part_3.pdf already downloaded\nBxl_1865_Tome_I1_Part_4.pdf already downloaded\nBxl_1865_Tome_II1_Part_1.pdf already downloaded\nBxl_1865_Tome_II1_Part_2.pdf already 
downloaded\nBxl_1865_Tome_II1_Part_3.pdf already downloaded\nBxl_1865_Tome_II1_Part_4.pdf already downloaded\nBxl_1865_Tome_II1_Part_5.pdf already downloaded\nBxl_1865_Tome_II1_Part_6.pdf already downloaded\nBxl_1865_Tome_II1_Part_7.pdf already downloaded\nBxl_1866_Tome_I1_Part_1.pdf already downloaded\nBxl_1866_Tome_I1_Part_2.pdf already downloaded\nBxl_1866_Tome_I1_Part_3.pdf already downloaded\nBxl_1866_Tome_I1_Part_4.pdf already downloaded\nBxl_1866_Tome_I1_Part_5.pdf already downloaded\nBxl_1866_Tome_I1_Part_6.pdf already downloaded\nBxl_1866_Tome_II1_Part_1.pdf already downloaded\nBxl_1866_Tome_II1_Part_2.pdf already downloaded\nBxl_1866_Tome_II1_Part_3.pdf already downloaded\nBxl_1866_Tome_II1_Part_4.pdf already downloaded\nBxl_1866_Tome_II1_Part_5.pdf already downloaded\nBxl_1866_Tome_II1_Part_6.pdf already downloaded\nBxl_1866_Tome_II1_Part_7.pdf already downloaded\nBxl_1867_Tome_I1_Part_1.pdf already downloaded\nBxl_1867_Tome_I1_Part_2.pdf already downloaded\nBxl_1867_Tome_I1_Part_3.pdf already downloaded\nBxl_1867_Tome_I1_Part_4.pdf already downloaded\nBxl_1867_Tome_II1_Part_1.pdf already downloaded\nBxl_1867_Tome_II1_Part_2.pdf already downloaded\nBxl_1867_Tome_II1_Part_3.pdf already downloaded\nBxl_1867_Tome_II1_Part_4.pdf already downloaded\nBxl_1867_Tome_II1_Part_5.pdf already downloaded\nBxl_1867_Tome_II1_Part_6.pdf already downloaded\nBxl_1868_Tome_I1_Part_1.pdf already downloaded\nBxl_1868_Tome_I1_Part_2.pdf already downloaded\nBxl_1868_Tome_I1_Part_3.pdf already downloaded\nBxl_1868_Tome_II1_Part_1.pdf already downloaded\nBxl_1868_Tome_II1_Part_2.pdf already downloaded\nBxl_1868_Tome_II1_Part_3.pdf already downloaded\nBxl_1868_Tome_II1_Part_4.pdf already downloaded\nBxl_1868_Tome_II1_Part_5.pdf already downloaded\nBxl_1868_Tome_II1_Part_6.pdf already downloaded\nBxl_1869_Tome_I1_Part_1.pdf already downloaded\nBxl_1869_Tome_I1_Part_2.pdf already downloaded\nBxl_1869_Tome_I1_Part_3.pdf already downloaded\nBxl_1869_Tome_I1_Part_4.pdf already downloaded\nBxl_1869_Tome_I2_Part_1.pdf already downloaded\nBxl_1869_Tome_I2_Part_2.pdf already downloaded\nBxl_1869_Tome_I2_Part_3.pdf already downloaded\nBxl_1869_Tome_I2_Part_4.pdf already downloaded\nBxl_1869_Tome_II1_Part_1.pdf already downloaded\nBxl_1869_Tome_II1_Part_2.pdf already downloaded\nBxl_1869_Tome_II1_Part_3.pdf already downloaded\nBxl_1869_Tome_II1_Part_4.pdf already downloaded\nBxl_1869_Tome_II1_Part_5.pdf already downloaded\nBxl_1870_Tome_I1_Part_1.pdf already downloaded\nBxl_1870_Tome_I1_Part_2.pdf already downloaded\nBxl_1870_Tome_I1_Part_3.pdf already downloaded\nBxl_1870_Tome_I1_Part_4.pdf already downloaded\nBxl_1870_Tome_I1_Part_5.pdf already downloaded\nBxl_1870_Tome_I1_Part_6.pdf already downloaded\nBxl_1870_Tome_II1_Part_1.pdf already downloaded\nBxl_1870_Tome_II1_Part_2.pdf already downloaded\nBxl_1870_Tome_II1_Part_3.pdf already downloaded\nBxl_1870_Tome_II1_Part_4.pdf already downloaded\nBxl_1870_Tome_II1_Part_5.pdf already downloaded\nBxl_1870_Tome_II1_Part_6.pdf already downloaded\nBxl_1870_Tome_II1_Part_7.pdf already downloaded\nBxl_1871_Tome_I1_Part_1.pdf already downloaded\nBxl_1871_Tome_I1_Part_2.pdf already downloaded\nBxl_1871_Tome_I1_Part_3.pdf already downloaded\nBxl_1871_Tome_I1_Part_4.pdf already downloaded\nBxl_1871_Tome_I1_Part_5.pdf already downloaded\nBxl_1871_Tome_II1_Part_1.pdf already downloaded\nBxl_1871_Tome_II1_Part_2.pdf already downloaded\nBxl_1871_Tome_II1_Part_3.pdf already downloaded\nBxl_1871_Tome_II1_Part_4.pdf already downloaded\nBxl_1871_Tome_II1_Part_5.pdf already 
downloaded\nBxl_1871_Tome_II1_Part_6.pdf already downloaded\nBxl_1871_Tome_II1_Part_7.pdf already downloaded\nBxl_1871_Tome_II1_Part_8.pdf already downloaded\nBxl_1872_Tome_I1_Part_1.pdf already downloaded\nBxl_1872_Tome_I1_Part_2.pdf already downloaded\nBxl_1872_Tome_I1_Part_3.pdf already downloaded\nBxl_1872_Tome_I1_Part_4.pdf already downloaded\nBxl_1872_Tome_II1_Part_1.pdf already downloaded\nBxl_1872_Tome_II1_Part_2.pdf already downloaded\nBxl_1872_Tome_II1_Part_3.pdf already downloaded\nBxl_1872_Tome_II1_Part_4.pdf already downloaded\nBxl_1872_Tome_II1_Part_5.pdf already downloaded\nBxl_1872_Tome_II1_Part_6.pdf already downloaded\nBxl_1873_Tome_I1_Part_1.pdf already downloaded\nBxl_1873_Tome_I1_Part_2.pdf already downloaded\nBxl_1873_Tome_I1_Part_3.pdf already downloaded\nBxl_1873_Tome_I1_Part_4.pdf already downloaded\nBxl_1874_Tome_I1_Part_1.pdf already downloaded\nBxl_1874_Tome_I1_Part_2.pdf already downloaded\nBxl_1874_Tome_I1_Part_3.pdf already downloaded\nBxl_1874_Tome_I1_Part_4.pdf already downloaded\nBxl_1874_Tome_I1_Part_5.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_1.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_10.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_2.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_3.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_4.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_5.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_6.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_7.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_8.pdf already downloaded\nLkn_1874_Tome_RptAn_Part_9.pdf already downloaded\nBxl_1876_Tome_I1_Part_1.pdf already downloaded\nBxl_1876_Tome_I1_Part_2.pdf already downloaded\nBxl_1876_Tome_I1_Part_3.pdf already downloaded\nBxl_1876_Tome_I1_Part_4.pdf already downloaded\nBxl_1876_Tome_II1_Part_1.pdf already downloaded\nBxl_1876_Tome_II1_Part_2.pdf already downloaded\nBxl_1876_Tome_II1_Part_3.pdf already downloaded\nBxl_1876_Tome_II1_Part_4.pdf already downloaded\nBxl_1876_Tome_II1_Part_5.pdf already downloaded\nBxl_1876_Tome_II1_Part_6.pdf already downloaded\nBxl_1877_Tome_I1_Part_1.pdf already downloaded\nBxl_1877_Tome_I1_Part_2.pdf already downloaded\nBxl_1877_Tome_I1_Part_3.pdf already downloaded\nBxl_1877_Tome_I1_Part_4.pdf already downloaded\nBxl_1877_Tome_I1_Part_5.pdf already downloaded\nBxl_1877_Tome_II1_Part_1.pdf already downloaded\nBxl_1877_Tome_II1_Part_2.pdf already downloaded\nBxl_1877_Tome_II1_Part_3.pdf already downloaded\nBxl_1877_Tome_II1_Part_4.pdf already downloaded\nBxl_1877_Tome_II1_Part_5.pdf already downloaded\nBxl_1877_Tome_II1_Part_6.pdf already downloaded\nBxl_1877_Tome_II1_Part_7.pdf already downloaded\nBxl_1877_Tome_II1_Part_8.pdf already downloaded\nBxl_1878_Tome_I1_Part_1.pdf already downloaded\nBxl_1878_Tome_I1_Part_2.pdf already downloaded\nBxl_1878_Tome_I1_Part_3.pdf already downloaded\nBxl_1878_Tome_I1_Part_4.pdf already downloaded\nBxl_1878_Tome_I1_Part_5.pdf already downloaded\nBxl_1878_Tome_II1_Part_1.pdf already downloaded\nBxl_1878_Tome_II1_Part_2.pdf already downloaded\nBxl_1878_Tome_II1_Part_3.pdf already downloaded\nBxl_1878_Tome_II1_Part_4.pdf already downloaded\nBxl_1878_Tome_II1_Part_5.pdf already downloaded\nBxl_1878_Tome_II1_Part_6.pdf already downloaded\nBxl_1879_Tome_I1_Part_1.pdf already downloaded\nBxl_1879_Tome_I1_Part_2.pdf already downloaded\nBxl_1879_Tome_I1_Part_3.pdf already downloaded\nBxl_1879_Tome_I1_Part_4.pdf already downloaded\nBxl_1879_Tome_I1_Part_5.pdf already downloaded\nBxl_1879_Tome_II1_Part_1.pdf already 
downloaded\nBxl_1879_Tome_II1_Part_10.pdf already downloaded\nBxl_1879_Tome_II1_Part_11.pdf already downloaded\nBxl_1879_Tome_II1_Part_2.pdf already downloaded\nBxl_1879_Tome_II1_Part_3.pdf already downloaded\nBxl_1879_Tome_II1_Part_4.pdf already downloaded\nBxl_1879_Tome_II1_Part_5.pdf already downloaded\nBxl_1879_Tome_II1_Part_6.pdf already downloaded\nBxl_1879_Tome_II1_Part_7.pdf already downloaded\nBxl_1879_Tome_II1_Part_8.pdf already downloaded\nBxl_1879_Tome_II1_Part_9.pdf already downloaded\nLkn_1879_Tome_I_Part_1.pdf already downloaded\nLkn_1879_Tome_I_Part_2.pdf already downloaded\nLkn_1879_Tome_I_Part_3.pdf already downloaded\nLkn_1879_Tome_I_Part_4.pdf already downloaded\nLkn_1879_Tome_I_Part_5.pdf already downloaded\nLkn_1879_Tome_I_Part_6.pdf already downloaded\nBxl_1880_Tome_I1_Part_1.pdf already downloaded\nBxl_1880_Tome_I1_Part_10.pdf already downloaded\nBxl_1880_Tome_I1_Part_2.pdf already downloaded\nBxl_1880_Tome_I1_Part_3.pdf already downloaded\nBxl_1880_Tome_I1_Part_4.pdf already downloaded\nBxl_1880_Tome_I1_Part_5.pdf already downloaded\nBxl_1880_Tome_I1_Part_6.pdf already downloaded\nBxl_1880_Tome_I1_Part_7.pdf already downloaded\nBxl_1880_Tome_I1_Part_8.pdf already downloaded\nBxl_1880_Tome_I1_Part_9.pdf already downloaded\nLkn_1880_Tome_I_Part_1.pdf already downloaded\nLkn_1880_Tome_I_Part_2.pdf already downloaded\nLkn_1880_Tome_I_Part_3.pdf already downloaded\nLkn_1880_Tome_I_Part_4.pdf already downloaded\nLkn_1880_Tome_RptAn_Part_1.pdf already downloaded\nLkn_1880_Tome_RptAn_Part_2.pdf already downloaded\nLkn_1880_Tome_RptAn_Part_3.pdf already downloaded\nLkn_1880_Tome_RptAn_Part_4.pdf already downloaded\nLkn_1880_Tome_RptAn_Part_5.pdf already downloaded\nLkn_1880_Tome_RptAn_Part_6.pdf already downloaded\nLkn_1880_Tome_RptAn_Part_7.pdf already downloaded\nLkn_1880_Tome_RptAn_Part_8.pdf already downloaded\nBxl_1881_Tome_I1_Part_1.pdf already downloaded\nBxl_1881_Tome_I1_Part_10.pdf already downloaded\nBxl_1881_Tome_I1_Part_2.pdf already downloaded\nBxl_1881_Tome_I1_Part_3.pdf already downloaded\nBxl_1881_Tome_I1_Part_4.pdf already downloaded\nBxl_1881_Tome_I1_Part_5.pdf already downloaded\nBxl_1881_Tome_I1_Part_6.pdf already downloaded\nBxl_1881_Tome_I1_Part_7.pdf already downloaded\nBxl_1881_Tome_I1_Part_8.pdf already downloaded\nBxl_1881_Tome_I1_Part_9.pdf already downloaded\nBxl_1881_Tome_I2_Part_1.pdf already downloaded\nBxl_1881_Tome_I2_Part_10.pdf already downloaded\nBxl_1881_Tome_I2_Part_11.pdf already downloaded\nBxl_1881_Tome_I2_Part_12.pdf already downloaded\nBxl_1881_Tome_I2_Part_2.pdf already downloaded\nBxl_1881_Tome_I2_Part_3.pdf already downloaded\nBxl_1881_Tome_I2_Part_4.pdf already downloaded\nBxl_1881_Tome_I2_Part_5.pdf already downloaded\nBxl_1881_Tome_I2_Part_6.pdf already downloaded\nBxl_1881_Tome_I2_Part_7.pdf already downloaded\nBxl_1881_Tome_I2_Part_8.pdf already downloaded\nBxl_1881_Tome_I2_Part_9.pdf already downloaded\nLkn_1881_Tome_I_Part_1.pdf already downloaded\nLkn_1881_Tome_I_Part_2.pdf already downloaded\nLkn_1881_Tome_I_Part_3.pdf already downloaded\nLkn_1881_Tome_I_Part_4.pdf already downloaded\nBxl_1882_Tome_I1_Part_1.pdf already downloaded\nBxl_1882_Tome_I1_Part_2.pdf already downloaded\nBxl_1882_Tome_I1_Part_3.pdf already downloaded\nBxl_1882_Tome_I1_Part_4.pdf already downloaded\nBxl_1882_Tome_I1_Part_5.pdf already downloaded\nBxl_1882_Tome_I1_Part_6.pdf already downloaded\nBxl_1882_Tome_I1_Part_7.pdf already downloaded\nBxl_1882_Tome_I2_Part_1.pdf already downloaded\nBxl_1882_Tome_I2_Part_10.pdf already 
downloaded\nBxl_1882_Tome_I2_Part_11.pdf already downloaded\nBxl_1882_Tome_I2_Part_12.pdf already downloaded\nBxl_1882_Tome_I2_Part_13.pdf already downloaded\nBxl_1882_Tome_I2_Part_2.pdf already downloaded\nBxl_1882_Tome_I2_Part_3.pdf already downloaded\nBxl_1882_Tome_I2_Part_4.pdf already downloaded\nBxl_1882_Tome_I2_Part_5.pdf already downloaded\nBxl_1882_Tome_I2_Part_6.pdf already downloaded\nBxl_1882_Tome_I2_Part_7.pdf already downloaded\nBxl_1882_Tome_I2_Part_8.pdf already downloaded\nBxl_1882_Tome_I2_Part_9.pdf already downloaded\nLkn_1882_Tome_I_Part_1.pdf already downloaded\nLkn_1882_Tome_I_Part_2.pdf already downloaded\nLkn_1882_Tome_I_Part_3.pdf already downloaded\nLkn_1882_Tome_I_Part_4.pdf already downloaded\nLkn_1882_Tome_I_Part_5.pdf already downloaded\nLkn_1882_Tome_RptAn_Part_1.pdf already downloaded\nLkn_1882_Tome_RptAn_Part_2.pdf already downloaded\nLkn_1882_Tome_RptAn_Part_3.pdf already downloaded\nLkn_1882_Tome_RptAn_Part_4.pdf already downloaded\nLkn_1882_Tome_RptAn_Part_5.pdf already downloaded\nLkn_1882_Tome_RptAn_Part_6.pdf already downloaded\nLkn_1882_Tome_RptAn_Part_7.pdf already downloaded\nLkn_1882_Tome_RptAn_Part_8.pdf already downloaded\nBxl_1883_Tome_I1_Part_1.pdf already downloaded\nBxl_1883_Tome_I1_Part_2.pdf already downloaded\nBxl_1883_Tome_I1_Part_3.pdf already downloaded\nBxl_1883_Tome_I1_Part_4.pdf already downloaded\nBxl_1883_Tome_I1_Part_5.pdf already downloaded\nBxl_1883_Tome_I1_Part_6.pdf already downloaded\nBxl_1883_Tome_I1_Part_7.pdf already downloaded\nBxl_1883_Tome_I2_Part_1.pdf already downloaded\nBxl_1883_Tome_I2_Part_10.pdf already downloaded\nBxl_1883_Tome_I2_Part_11.pdf already downloaded\nBxl_1883_Tome_I2_Part_12.pdf already downloaded\nBxl_1883_Tome_I2_Part_2.pdf already downloaded\nBxl_1883_Tome_I2_Part_3.pdf already downloaded\nBxl_1883_Tome_I2_Part_4.pdf already downloaded\nBxl_1883_Tome_I2_Part_5.pdf already downloaded\nBxl_1883_Tome_I2_Part_6.pdf already downloaded\nBxl_1883_Tome_I2_Part_7.pdf already downloaded\nBxl_1883_Tome_I2_Part_8.pdf already downloaded\nBxl_1883_Tome_I2_Part_9.pdf already downloaded\nLkn_1883_Tome_RptAn_Part_1.pdf already downloaded\nLkn_1883_Tome_RptAn_Part_2.pdf already downloaded\nLkn_1883_Tome_RptAn_Part_3.pdf already downloaded\nLkn_1883_Tome_RptAn_Part_4.pdf already downloaded\nLkn_1883_Tome_RptAn_Part_5.pdf already downloaded\nLkn_1883_Tome_RptAn_Part_6.pdf already downloaded\nBxl_1884_Tome_I1_Part_1.pdf already downloaded\nBxl_1884_Tome_I1_Part_2.pdf already downloaded\nBxl_1884_Tome_I1_Part_3.pdf already downloaded\nBxl_1884_Tome_I1_Part_4.pdf already downloaded\nBxl_1884_Tome_I1_Part_5.pdf already downloaded\nBxl_1884_Tome_I1_Part_6.pdf already downloaded\nBxl_1884_Tome_I1_Part_7.pdf already downloaded\nBxl_1884_Tome_I2_Part_1.pdf already downloaded\nBxl_1884_Tome_I2_Part_10.pdf already downloaded\nBxl_1884_Tome_I2_Part_2.pdf already downloaded\nBxl_1884_Tome_I2_Part_3.pdf already downloaded\nBxl_1884_Tome_I2_Part_4.pdf already downloaded\nBxl_1884_Tome_I2_Part_5.pdf already downloaded\nBxl_1884_Tome_I2_Part_6.pdf already downloaded\nBxl_1884_Tome_I2_Part_7.pdf already downloaded\nBxl_1884_Tome_I2_Part_8.pdf already downloaded\nBxl_1884_Tome_I2_Part_9.pdf already downloaded\nLkn_1884_Tome_I_Part_1.pdf already downloaded\nLkn_1884_Tome_I_Part_2.pdf already downloaded\nLkn_1884_Tome_I_Part_3.pdf already downloaded\nLkn_1884_Tome_I_Part_4.pdf already downloaded\nLkn_1884_Tome_I_Part_5.pdf already downloaded\nLkn_1884_Tome_I_Part_6.pdf already downloaded\nLkn_1884_Tome_I_Part_7.pdf already 
downloaded\nLkn_1884_Tome_I_Part_8.pdf already downloaded\nBxl_1885_Tome_I1_Part_1.pdf already downloaded\nBxl_1885_Tome_I1_Part_2.pdf already downloaded\nBxl_1885_Tome_I1_Part_3.pdf already downloaded\nBxl_1885_Tome_I1_Part_4.pdf already downloaded\nBxl_1885_Tome_I1_Part_5.pdf already downloaded\nBxl_1885_Tome_I1_Part_6.pdf already downloaded\nBxl_1885_Tome_I2_Part_1.pdf already downloaded\nBxl_1885_Tome_I2_Part_10.pdf already downloaded\nBxl_1885_Tome_I2_Part_11.pdf already downloaded\nBxl_1885_Tome_I2_Part_12.pdf already downloaded\nBxl_1885_Tome_I2_Part_13.pdf already downloaded\nBxl_1885_Tome_I2_Part_14.pdf already downloaded\nBxl_1885_Tome_I2_Part_2.pdf already downloaded\nBxl_1885_Tome_I2_Part_3.pdf already downloaded\nBxl_1885_Tome_I2_Part_4.pdf already downloaded\nBxl_1885_Tome_I2_Part_5.pdf already downloaded\nBxl_1885_Tome_I2_Part_6.pdf already downloaded\nBxl_1885_Tome_I2_Part_7.pdf already downloaded\nBxl_1885_Tome_I2_Part_8.pdf already downloaded\nBxl_1885_Tome_I2_Part_9.pdf already downloaded\nLkn_1885_Tome_I_Part_1.pdf already downloaded\nLkn_1885_Tome_I_Part_2.pdf already downloaded\nLkn_1885_Tome_I_Part_3.pdf already downloaded\nLkn_1885_Tome_I_Part_4.pdf already downloaded\nLkn_1885_Tome_I_Part_5.pdf already downloaded\nLkn_1885_Tome_I_Part_6.pdf already downloaded\nLkn_1885_Tome_I_Part_7.pdf already downloaded\nLkn_1885_Tome_I_Part_8.pdf already downloaded\nLkn_1885_Tome_RptAn_Part_1.pdf already downloaded\nLkn_1885_Tome_RptAn_Part_2.pdf already downloaded\nLkn_1885_Tome_RptAn_Part_3.pdf already downloaded\nLkn_1885_Tome_RptAn_Part_4.pdf already downloaded\nLkn_1885_Tome_RptAn_Part_5.pdf already downloaded\nBxl_1886_Tome_I1_Part_1.pdf already downloaded\nBxl_1886_Tome_I1_Part_2.pdf already downloaded\nBxl_1886_Tome_I1_Part_3.pdf already downloaded\nBxl_1886_Tome_I1_Part_4.pdf already downloaded\nBxl_1886_Tome_I1_Part_5.pdf already downloaded\nBxl_1886_Tome_I1_Part_6.pdf already downloaded\nBxl_1886_Tome_I1_Part_7.pdf already downloaded\nBxl_1886_Tome_I1_Part_8.pdf already downloaded\nBxl_1886_Tome_I1_Part_9.pdf already downloaded\nBxl_1886_Tome_I2_Part_1.pdf already downloaded\nBxl_1886_Tome_I2_Part_10.pdf already downloaded\nBxl_1886_Tome_I2_Part_11.pdf already downloaded\nBxl_1886_Tome_I2_Part_12.pdf already downloaded\nBxl_1886_Tome_I2_Part_13.pdf already downloaded\nBxl_1886_Tome_I2_Part_2.pdf already downloaded\nBxl_1886_Tome_I2_Part_3.pdf already downloaded\nBxl_1886_Tome_I2_Part_4.pdf already downloaded\nBxl_1886_Tome_I2_Part_5.pdf already downloaded\nBxl_1886_Tome_I2_Part_6.pdf already downloaded\nBxl_1886_Tome_I2_Part_7.pdf already downloaded\nBxl_1886_Tome_I2_Part_8.pdf already downloaded\nBxl_1886_Tome_I2_Part_9.pdf already downloaded\nLkn_1886_Tome_I_Part_1.pdf already downloaded\nLkn_1886_Tome_I_Part_2.pdf already downloaded\nLkn_1886_Tome_I_Part_3.pdf already downloaded\nLkn_1886_Tome_I_Part_4.pdf already downloaded\nLkn_1886_Tome_I_Part_5.pdf already downloaded\nLkn_1886_Tome_I_Part_6.pdf already downloaded\nLkn_1886_Tome_I_Part_7.pdf already downloaded\nBxl_1887_Tome_I1_Part_1.pdf already downloaded\nBxl_1887_Tome_I1_Part_2.pdf already downloaded\nBxl_1887_Tome_I1_Part_3.pdf already downloaded\nBxl_1887_Tome_I1_Part_4.pdf already downloaded\nBxl_1887_Tome_I1_Part_5.pdf already downloaded\nBxl_1887_Tome_I1_Part_6.pdf already downloaded\nBxl_1887_Tome_I1_Part_7.pdf already downloaded\nBxl_1887_Tome_I1_Part_8.pdf already downloaded\nBxl_1887_Tome_I2_Part_1.pdf already downloaded\nBxl_1887_Tome_I2_Part_10.pdf already downloaded\nBxl_1887_Tome_I2_Part_11.pdf 
already downloaded\nBxl_1887_Tome_I2_Part_12.pdf already downloaded\nBxl_1887_Tome_I2_Part_2.pdf already downloaded\nBxl_1887_Tome_I2_Part_3.pdf already downloaded\nBxl_1887_Tome_I2_Part_4.pdf already downloaded\nBxl_1887_Tome_I2_Part_5.pdf already downloaded\nBxl_1887_Tome_I2_Part_6.pdf already downloaded\nBxl_1887_Tome_I2_Part_7.pdf already downloaded\nBxl_1887_Tome_I2_Part_8.pdf already downloaded\nBxl_1887_Tome_I2_Part_9.pdf already downloaded\nLkn_1887_Tome_I_Part_1.pdf already downloaded\nLkn_1887_Tome_I_Part_2.pdf already downloaded\nLkn_1887_Tome_I_Part_3.pdf already downloaded\nLkn_1887_Tome_I_Part_4.pdf already downloaded\nLkn_1887_Tome_I_Part_5.pdf already downloaded\nBxl_1888_Tome_I1_Part_1.pdf already downloaded\nBxl_1888_Tome_I1_Part_2.pdf already downloaded\nBxl_1888_Tome_I1_Part_3.pdf already downloaded\nBxl_1888_Tome_I1_Part_4.pdf already downloaded\nBxl_1888_Tome_I1_Part_5.pdf already downloaded\nBxl_1888_Tome_I2_Part_1.pdf already downloaded\nBxl_1888_Tome_I2_Part_10.pdf already downloaded\nBxl_1888_Tome_I2_Part_11.pdf already downloaded\nBxl_1888_Tome_I2_Part_12.pdf already downloaded\nBxl_1888_Tome_I2_Part_13.pdf already downloaded\nBxl_1888_Tome_I2_Part_14.pdf already downloaded\nBxl_1888_Tome_I2_Part_2.pdf already downloaded\nBxl_1888_Tome_I2_Part_3.pdf already downloaded\nBxl_1888_Tome_I2_Part_4.pdf already downloaded\nBxl_1888_Tome_I2_Part_5.pdf already downloaded\nBxl_1888_Tome_I2_Part_6.pdf already downloaded\nBxl_1888_Tome_I2_Part_7.pdf already downloaded\nBxl_1888_Tome_I2_Part_8.pdf already downloaded\nBxl_1888_Tome_I2_Part_9.pdf already downloaded\nLkn_1888_Tome_I_Part_1.pdf already downloaded\nLkn_1888_Tome_I_Part_10.pdf already downloaded\nLkn_1888_Tome_I_Part_11.pdf already downloaded\nLkn_1888_Tome_I_Part_12.pdf already downloaded\nLkn_1888_Tome_I_Part_2.pdf already downloaded\nLkn_1888_Tome_I_Part_3.pdf already downloaded\nLkn_1888_Tome_I_Part_4.pdf already downloaded\nLkn_1888_Tome_I_Part_5.pdf already downloaded\nLkn_1888_Tome_I_Part_6.pdf already downloaded\nLkn_1888_Tome_I_Part_7.pdf already downloaded\nLkn_1888_Tome_I_Part_8.pdf already downloaded\nLkn_1888_Tome_I_Part_9.pdf already downloaded\nBxl_1889_Tome_I1_Part_1.pdf already downloaded\nBxl_1889_Tome_I1_Part_2.pdf already downloaded\nBxl_1889_Tome_I1_Part_3.pdf already downloaded\nBxl_1889_Tome_I1_Part_4.pdf already downloaded\nBxl_1889_Tome_I1_Part_5.pdf already downloaded\nBxl_1889_Tome_I1_Part_6.pdf already downloaded\nBxl_1889_Tome_I1_Part_7.pdf already downloaded\nBxl_1889_Tome_I1_Part_8.pdf already downloaded\nBxl_1889_Tome_I2_Part_1.pdf already downloaded\nBxl_1889_Tome_I2_Part_10.pdf already downloaded\nBxl_1889_Tome_I2_Part_11.pdf already downloaded\nBxl_1889_Tome_I2_Part_12.pdf already downloaded\nBxl_1889_Tome_I2_Part_13.pdf already downloaded\nBxl_1889_Tome_I2_Part_2.pdf already downloaded\nBxl_1889_Tome_I2_Part_3.pdf already downloaded\nBxl_1889_Tome_I2_Part_4.pdf already downloaded\nBxl_1889_Tome_I2_Part_5.pdf already downloaded\nBxl_1889_Tome_I2_Part_6.pdf already downloaded\nBxl_1889_Tome_I2_Part_7.pdf already downloaded\nBxl_1889_Tome_I2_Part_8.pdf already downloaded\nBxl_1889_Tome_I2_Part_9.pdf already downloaded\nBxl_1890_Tome_I1_Part_1.pdf already downloaded\nBxl_1890_Tome_I1_Part_2.pdf already downloaded\nBxl_1890_Tome_I1_Part_3.pdf already downloaded\nBxl_1890_Tome_I1_Part_4.pdf already downloaded\nBxl_1890_Tome_I1_Part_5.pdf already downloaded\nBxl_1890_Tome_I1_Part_6.pdf already downloaded\nBxl_1890_Tome_I1_Part_7.pdf already downloaded\nBxl_1890_Tome_I1_Part_8.pdf 
already downloaded\nBxl_1890_Tome_I2_Part_1.pdf already downloaded\nBxl_1890_Tome_I2_Part_10.pdf already downloaded\nBxl_1890_Tome_I2_Part_11.pdf already downloaded\nBxl_1890_Tome_I2_Part_12.pdf already downloaded\nBxl_1890_Tome_I2_Part_13.pdf already downloaded\nBxl_1890_Tome_I2_Part_14.pdf already downloaded\nBxl_1890_Tome_I2_Part_2.pdf already downloaded\nBxl_1890_Tome_I2_Part_3.pdf already downloaded\nBxl_1890_Tome_I2_Part_4.pdf already downloaded\nBxl_1890_Tome_I2_Part_5.pdf already downloaded\nBxl_1890_Tome_I2_Part_6.pdf already downloaded\nBxl_1890_Tome_I2_Part_7.pdf already downloaded\nBxl_1890_Tome_I2_Part_8.pdf already downloaded\nBxl_1890_Tome_I2_Part_9.pdf already downloaded\nLkn_1890_Tome_I_Part_1.pdf already downloaded\nLkn_1890_Tome_I_Part_2.pdf already downloaded\nLkn_1890_Tome_I_Part_3.pdf already downloaded\nLkn_1890_Tome_I_Part_4.pdf already downloaded\nLkn_1890_Tome_I_Part_5.pdf already downloaded\nLkn_1890_Tome_I_Part_6.pdf already downloaded\nLkn_1890_Tome_I_Part_7.pdf already downloaded\nLkn_1890_Tome_RptAn_Part_1.pdf already downloaded\nLkn_1890_Tome_RptAn_Part_2.pdf already downloaded\nLkn_1890_Tome_RptAn_Part_3.pdf already downloaded\nLkn_1890_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1891_Tome_I1_Part_1.pdf already downloaded\nBxl_1891_Tome_I1_Part_2.pdf already downloaded\nBxl_1891_Tome_I1_Part_3.pdf already downloaded\nBxl_1891_Tome_I1_Part_4.pdf already downloaded\nBxl_1891_Tome_I1_Part_5.pdf already downloaded\nBxl_1891_Tome_I1_Part_6.pdf already downloaded\nBxl_1891_Tome_I1_Part_7.pdf already downloaded\nBxl_1891_Tome_I1_Part_8.pdf already downloaded\nBxl_1891_Tome_I1_Part_9.pdf already downloaded\nBxl_1891_Tome_I2_Part_1.pdf already downloaded\nBxl_1891_Tome_I2_Part_10.pdf already downloaded\nBxl_1891_Tome_I2_Part_11.pdf already downloaded\nBxl_1891_Tome_I2_Part_12.pdf already downloaded\nBxl_1891_Tome_I2_Part_13.pdf already downloaded\nBxl_1891_Tome_I2_Part_14.pdf already downloaded\nBxl_1891_Tome_I2_Part_2.pdf already downloaded\nBxl_1891_Tome_I2_Part_3.pdf already downloaded\nBxl_1891_Tome_I2_Part_4.pdf already downloaded\nBxl_1891_Tome_I2_Part_5.pdf already downloaded\nBxl_1891_Tome_I2_Part_6.pdf already downloaded\nBxl_1891_Tome_I2_Part_7.pdf already downloaded\nBxl_1891_Tome_I2_Part_8.pdf already downloaded\nBxl_1891_Tome_I2_Part_9.pdf already downloaded\nLkn_1891_Tome_I_Part_1.pdf already downloaded\nLkn_1891_Tome_I_Part_2.pdf already downloaded\nLkn_1891_Tome_I_Part_3.pdf already downloaded\nLkn_1891_Tome_I_Part_4.pdf already downloaded\nLkn_1891_Tome_I_Part_5.pdf already downloaded\nBxl_1892_Tome_I1_Part_1.pdf already downloaded\nBxl_1892_Tome_I1_Part_2.pdf already downloaded\nBxl_1892_Tome_I1_Part_3.pdf already downloaded\nBxl_1892_Tome_I1_Part_4.pdf already downloaded\nBxl_1892_Tome_I1_Part_5.pdf already downloaded\nBxl_1892_Tome_I1_Part_6.pdf already downloaded\nBxl_1892_Tome_I1_Part_7.pdf already downloaded\nBxl_1892_Tome_I1_Part_8.pdf already downloaded\nBxl_1892_Tome_I1_Part_9.pdf already downloaded\nBxl_1892_Tome_I2_Part_1.pdf already downloaded\nBxl_1892_Tome_I2_Part_10.pdf already downloaded\nBxl_1892_Tome_I2_Part_11.pdf already downloaded\nBxl_1892_Tome_I2_Part_12.pdf already downloaded\nBxl_1892_Tome_I2_Part_2.pdf already downloaded\nBxl_1892_Tome_I2_Part_3.pdf already downloaded\nBxl_1892_Tome_I2_Part_4.pdf already downloaded\nBxl_1892_Tome_I2_Part_5.pdf already downloaded\nBxl_1892_Tome_I2_Part_6.pdf already downloaded\nBxl_1892_Tome_I2_Part_7.pdf already downloaded\nBxl_1892_Tome_I2_Part_8.pdf already 
downloaded\n[... output truncated: repeated "<name>.pdf already downloaded" messages for the remaining Bxl/Lkn bulletin parts (years 1892 to 1964) omitted ...]\nBxl_1964_Tome_II1_Part_10.pdf already
downloaded\nBxl_1964_Tome_II1_Part_11.pdf already downloaded\nBxl_1964_Tome_II1_Part_2.pdf already downloaded\nBxl_1964_Tome_II1_Part_3.pdf already downloaded\nBxl_1964_Tome_II1_Part_4.pdf already downloaded\nBxl_1964_Tome_II1_Part_5.pdf already downloaded\nBxl_1964_Tome_II1_Part_6.pdf already downloaded\nBxl_1964_Tome_II1_Part_7.pdf already downloaded\nBxl_1964_Tome_II1_Part_8.pdf already downloaded\nBxl_1964_Tome_II1_Part_9.pdf already downloaded\nBxl_1964_Tome_II2_Part_1.pdf already downloaded\nBxl_1964_Tome_II2_Part_10.pdf already downloaded\nBxl_1964_Tome_II2_Part_11.pdf already downloaded\nBxl_1964_Tome_II2_Part_2.pdf already downloaded\nBxl_1964_Tome_II2_Part_3.pdf already downloaded\nBxl_1964_Tome_II2_Part_4.pdf already downloaded\nBxl_1964_Tome_II2_Part_5.pdf already downloaded\nBxl_1964_Tome_II2_Part_6.pdf already downloaded\nBxl_1964_Tome_II2_Part_7.pdf already downloaded\nBxl_1964_Tome_II2_Part_8.pdf already downloaded\nBxl_1964_Tome_II2_Part_9.pdf already downloaded\nBxl_1964_Tome_I_Part_1.pdf already downloaded\nBxl_1964_Tome_I_Part_10.pdf already downloaded\nBxl_1964_Tome_I_Part_11.pdf already downloaded\nBxl_1964_Tome_I_Part_12.pdf already downloaded\nBxl_1964_Tome_I_Part_13.pdf already downloaded\nBxl_1964_Tome_I_Part_14.pdf already downloaded\nBxl_1964_Tome_I_Part_2.pdf already downloaded\nBxl_1964_Tome_I_Part_3.pdf already downloaded\nBxl_1964_Tome_I_Part_4.pdf already downloaded\nBxl_1964_Tome_I_Part_5.pdf already downloaded\nBxl_1964_Tome_I_Part_6.pdf already downloaded\nBxl_1964_Tome_I_Part_7.pdf already downloaded\nBxl_1964_Tome_I_Part_8.pdf already downloaded\nBxl_1964_Tome_I_Part_9.pdf already downloaded\nBxl_1964_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1964_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1964_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1964_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1964_Tome_RptAn_Part_5.pdf already downloaded\nBxl_1964_Tome_RptAn_Part_6.pdf already downloaded\nBxl_1965_Tome_II1_Part_1.pdf already downloaded\nBxl_1965_Tome_II1_Part_2.pdf already downloaded\nBxl_1965_Tome_II1_Part_3.pdf already downloaded\nBxl_1965_Tome_II1_Part_4.pdf already downloaded\nBxl_1965_Tome_II1_Part_5.pdf already downloaded\nBxl_1965_Tome_II1_Part_6.pdf already downloaded\nBxl_1965_Tome_II1_Part_7.pdf already downloaded\nBxl_1965_Tome_II1_Part_8.pdf already downloaded\nBxl_1965_Tome_II1_Part_9.pdf already downloaded\nBxl_1965_Tome_II2_Part_1.pdf already downloaded\nBxl_1965_Tome_II2_Part_10.pdf already downloaded\nBxl_1965_Tome_II2_Part_11.pdf already downloaded\nBxl_1965_Tome_II2_Part_12.pdf already downloaded\nBxl_1965_Tome_II2_Part_13.pdf already downloaded\nBxl_1965_Tome_II2_Part_14.pdf already downloaded\nBxl_1965_Tome_II2_Part_2.pdf already downloaded\nBxl_1965_Tome_II2_Part_3.pdf already downloaded\nBxl_1965_Tome_II2_Part_4.pdf already downloaded\nBxl_1965_Tome_II2_Part_5.pdf already downloaded\nBxl_1965_Tome_II2_Part_6.pdf already downloaded\nBxl_1965_Tome_II2_Part_7.pdf already downloaded\nBxl_1965_Tome_II2_Part_8.pdf already downloaded\nBxl_1965_Tome_II2_Part_9.pdf already downloaded\nBxl_1965_Tome_I_Part_1.pdf already downloaded\nBxl_1965_Tome_I_Part_10.pdf already downloaded\nBxl_1965_Tome_I_Part_11.pdf already downloaded\nBxl_1965_Tome_I_Part_12.pdf already downloaded\nBxl_1965_Tome_I_Part_13.pdf already downloaded\nBxl_1965_Tome_I_Part_14.pdf already downloaded\nBxl_1965_Tome_I_Part_2.pdf already downloaded\nBxl_1965_Tome_I_Part_3.pdf already downloaded\nBxl_1965_Tome_I_Part_4.pdf already downloaded\nBxl_1965_Tome_I_Part_5.pdf 
already downloaded\nBxl_1965_Tome_I_Part_6.pdf already downloaded\nBxl_1965_Tome_I_Part_7.pdf already downloaded\nBxl_1965_Tome_I_Part_8.pdf already downloaded\nBxl_1965_Tome_I_Part_9.pdf already downloaded\nBxl_1965_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1965_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1965_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1965_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1965_Tome_RptAn_Part_5.pdf already downloaded\nBxl_1965_Tome_RptAn_Part_6.pdf already downloaded\nBxl_1965_Tome_RptAn_Part_7.pdf already downloaded\nBxl_1966_Tome_I1_Part_1.pdf already downloaded\nBxl_1966_Tome_I1_Part_10.pdf already downloaded\nBxl_1966_Tome_I1_Part_2.pdf already downloaded\nBxl_1966_Tome_I1_Part_3.pdf already downloaded\nBxl_1966_Tome_I1_Part_4.pdf already downloaded\nBxl_1966_Tome_I1_Part_5.pdf already downloaded\nBxl_1966_Tome_I1_Part_6.pdf already downloaded\nBxl_1966_Tome_I1_Part_7.pdf already downloaded\nBxl_1966_Tome_I1_Part_8.pdf already downloaded\nBxl_1966_Tome_I1_Part_9.pdf already downloaded\nBxl_1966_Tome_I2_Part_1.pdf already downloaded\nBxl_1966_Tome_I2_Part_10.pdf already downloaded\nBxl_1966_Tome_I2_Part_11.pdf already downloaded\nBxl_1966_Tome_I2_Part_2.pdf already downloaded\nBxl_1966_Tome_I2_Part_3.pdf already downloaded\nBxl_1966_Tome_I2_Part_4.pdf already downloaded\nBxl_1966_Tome_I2_Part_5.pdf already downloaded\nBxl_1966_Tome_I2_Part_6.pdf already downloaded\nBxl_1966_Tome_I2_Part_7.pdf already downloaded\nBxl_1966_Tome_I2_Part_8.pdf already downloaded\nBxl_1966_Tome_I2_Part_9.pdf already downloaded\nBxl_1966_Tome_II1_Part_1.pdf already downloaded\nBxl_1966_Tome_II1_Part_10.pdf already downloaded\nBxl_1966_Tome_II1_Part_11.pdf already downloaded\nBxl_1966_Tome_II1_Part_2.pdf already downloaded\nBxl_1966_Tome_II1_Part_3.pdf already downloaded\nBxl_1966_Tome_II1_Part_4.pdf already downloaded\nBxl_1966_Tome_II1_Part_5.pdf already downloaded\nBxl_1966_Tome_II1_Part_6.pdf already downloaded\nBxl_1966_Tome_II1_Part_7.pdf already downloaded\nBxl_1966_Tome_II1_Part_8.pdf already downloaded\nBxl_1966_Tome_II1_Part_9.pdf already downloaded\nBxl_1966_Tome_II2_Part_1.pdf already downloaded\nBxl_1966_Tome_II2_Part_10.pdf already downloaded\nBxl_1966_Tome_II2_Part_11.pdf already downloaded\nBxl_1966_Tome_II2_Part_12.pdf already downloaded\nBxl_1966_Tome_II2_Part_13.pdf already downloaded\nBxl_1966_Tome_II2_Part_14.pdf already downloaded\nBxl_1966_Tome_II2_Part_2.pdf already downloaded\nBxl_1966_Tome_II2_Part_3.pdf already downloaded\nBxl_1966_Tome_II2_Part_4.pdf already downloaded\nBxl_1966_Tome_II2_Part_5.pdf already downloaded\nBxl_1966_Tome_II2_Part_6.pdf already downloaded\nBxl_1966_Tome_II2_Part_7.pdf already downloaded\nBxl_1966_Tome_II2_Part_8.pdf already downloaded\nBxl_1966_Tome_II2_Part_9.pdf already downloaded\nBxl_1966_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1966_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1966_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1966_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1966_Tome_RptAn_Part_5.pdf already downloaded\nBxl_1966_Tome_RptAn_Part_6.pdf already downloaded\nBxl_1967_Tome_I1_Part_1.pdf already downloaded\nBxl_1967_Tome_I1_Part_10.pdf already downloaded\nBxl_1967_Tome_I1_Part_11.pdf already downloaded\nBxl_1967_Tome_I1_Part_2.pdf already downloaded\nBxl_1967_Tome_I1_Part_3.pdf already downloaded\nBxl_1967_Tome_I1_Part_4.pdf already downloaded\nBxl_1967_Tome_I1_Part_5.pdf already downloaded\nBxl_1967_Tome_I1_Part_6.pdf already downloaded\nBxl_1967_Tome_I1_Part_7.pdf already 
downloaded\nBxl_1967_Tome_I1_Part_8.pdf already downloaded\nBxl_1967_Tome_I1_Part_9.pdf already downloaded\nBxl_1967_Tome_I2_Part_1.pdf already downloaded\nBxl_1967_Tome_I2_Part_10.pdf already downloaded\nBxl_1967_Tome_I2_Part_2.pdf already downloaded\nBxl_1967_Tome_I2_Part_3.pdf already downloaded\nBxl_1967_Tome_I2_Part_4.pdf already downloaded\nBxl_1967_Tome_I2_Part_5.pdf already downloaded\nBxl_1967_Tome_I2_Part_6.pdf already downloaded\nBxl_1967_Tome_I2_Part_7.pdf already downloaded\nBxl_1967_Tome_I2_Part_8.pdf already downloaded\nBxl_1967_Tome_I2_Part_9.pdf already downloaded\nBxl_1967_Tome_II1_Part_1.pdf already downloaded\nBxl_1967_Tome_II1_Part_10.pdf already downloaded\nBxl_1967_Tome_II1_Part_2.pdf already downloaded\nBxl_1967_Tome_II1_Part_3.pdf already downloaded\nBxl_1967_Tome_II1_Part_4.pdf already downloaded\nBxl_1967_Tome_II1_Part_5.pdf already downloaded\nBxl_1967_Tome_II1_Part_6.pdf already downloaded\nBxl_1967_Tome_II1_Part_7.pdf already downloaded\nBxl_1967_Tome_II1_Part_8.pdf already downloaded\nBxl_1967_Tome_II1_Part_9.pdf already downloaded\nBxl_1967_Tome_II2_Part_1.pdf already downloaded\nBxl_1967_Tome_II2_Part_10.pdf already downloaded\nBxl_1967_Tome_II2_Part_11.pdf already downloaded\nBxl_1967_Tome_II2_Part_2.pdf already downloaded\nBxl_1967_Tome_II2_Part_3.pdf already downloaded\nBxl_1967_Tome_II2_Part_4.pdf already downloaded\nBxl_1967_Tome_II2_Part_5.pdf already downloaded\nBxl_1967_Tome_II2_Part_6.pdf already downloaded\nBxl_1967_Tome_II2_Part_7.pdf already downloaded\nBxl_1967_Tome_II2_Part_8.pdf already downloaded\nBxl_1967_Tome_II2_Part_9.pdf already downloaded\nBxl_1968_Tome_I1_Part_1.pdf already downloaded\nBxl_1968_Tome_I1_Part_2.pdf already downloaded\nBxl_1968_Tome_I1_Part_3.pdf already downloaded\nBxl_1968_Tome_I1_Part_4.pdf already downloaded\nBxl_1968_Tome_I1_Part_5.pdf already downloaded\nBxl_1968_Tome_I1_Part_6.pdf already downloaded\nBxl_1968_Tome_I1_Part_7.pdf already downloaded\nBxl_1968_Tome_I1_Part_8.pdf already downloaded\nBxl_1968_Tome_I1_Part_9.pdf already downloaded\nBxl_1968_Tome_I2_Part_1.pdf already downloaded\nBxl_1968_Tome_I2_Part_2.pdf already downloaded\nBxl_1968_Tome_I2_Part_3.pdf already downloaded\nBxl_1968_Tome_I2_Part_4.pdf already downloaded\nBxl_1968_Tome_I2_Part_5.pdf already downloaded\nBxl_1968_Tome_I2_Part_6.pdf already downloaded\nBxl_1968_Tome_I2_Part_7.pdf already downloaded\nBxl_1968_Tome_I2_Part_8.pdf already downloaded\nBxl_1968_Tome_I2_Part_9.pdf already downloaded\nBxl_1968_Tome_II1_Part_1.pdf already downloaded\nBxl_1968_Tome_II1_Part_2.pdf already downloaded\nBxl_1968_Tome_II1_Part_3.pdf already downloaded\nBxl_1968_Tome_II1_Part_4.pdf already downloaded\nBxl_1968_Tome_II1_Part_5.pdf already downloaded\nBxl_1968_Tome_II1_Part_6.pdf already downloaded\nBxl_1968_Tome_II1_Part_7.pdf already downloaded\nBxl_1968_Tome_II1_Part_8.pdf already downloaded\nBxl_1968_Tome_II1_Part_9.pdf already downloaded\nBxl_1968_Tome_II2_Part_1.pdf already downloaded\nBxl_1968_Tome_II2_Part_10.pdf already downloaded\nBxl_1968_Tome_II2_Part_11.pdf already downloaded\nBxl_1968_Tome_II2_Part_12.pdf already downloaded\nBxl_1968_Tome_II2_Part_13.pdf already downloaded\nBxl_1968_Tome_II2_Part_2.pdf already downloaded\nBxl_1968_Tome_II2_Part_3.pdf already downloaded\nBxl_1968_Tome_II2_Part_4.pdf already downloaded\nBxl_1968_Tome_II2_Part_5.pdf already downloaded\nBxl_1968_Tome_II2_Part_6.pdf already downloaded\nBxl_1968_Tome_II2_Part_7.pdf already downloaded\nBxl_1968_Tome_II2_Part_8.pdf already downloaded\nBxl_1968_Tome_II2_Part_9.pdf already 
downloaded\nBxl_1969_Tome_I1_Part_1.pdf already downloaded\nBxl_1969_Tome_I1_Part_2.pdf already downloaded\nBxl_1969_Tome_I1_Part_3.pdf already downloaded\nBxl_1969_Tome_I1_Part_4.pdf already downloaded\nBxl_1969_Tome_I1_Part_5.pdf already downloaded\nBxl_1969_Tome_I1_Part_6.pdf already downloaded\nBxl_1969_Tome_I1_Part_7.pdf already downloaded\nBxl_1969_Tome_I1_Part_8.pdf already downloaded\nBxl_1969_Tome_I2_Part_1.pdf already downloaded\nBxl_1969_Tome_I2_Part_10.pdf already downloaded\nBxl_1969_Tome_I2_Part_2.pdf already downloaded\nBxl_1969_Tome_I2_Part_3.pdf already downloaded\nBxl_1969_Tome_I2_Part_4.pdf already downloaded\nBxl_1969_Tome_I2_Part_5.pdf already downloaded\nBxl_1969_Tome_I2_Part_6.pdf already downloaded\nBxl_1969_Tome_I2_Part_7.pdf already downloaded\nBxl_1969_Tome_I2_Part_8.pdf already downloaded\nBxl_1969_Tome_I2_Part_9.pdf already downloaded\nBxl_1969_Tome_II1_Part_1.pdf already downloaded\nBxl_1969_Tome_II1_Part_2.pdf already downloaded\nBxl_1969_Tome_II1_Part_3.pdf already downloaded\nBxl_1969_Tome_II1_Part_4.pdf already downloaded\nBxl_1969_Tome_II1_Part_5.pdf already downloaded\nBxl_1969_Tome_II1_Part_6.pdf already downloaded\nBxl_1969_Tome_II1_Part_7.pdf already downloaded\nBxl_1969_Tome_II1_Part_8.pdf already downloaded\nBxl_1969_Tome_II2_Part_1.pdf already downloaded\nBxl_1969_Tome_II2_Part_10.pdf already downloaded\nBxl_1969_Tome_II2_Part_11.pdf already downloaded\nBxl_1969_Tome_II2_Part_12.pdf already downloaded\nBxl_1969_Tome_II2_Part_13.pdf already downloaded\nBxl_1969_Tome_II2_Part_14.pdf already downloaded\nBxl_1969_Tome_II2_Part_2.pdf already downloaded\nBxl_1969_Tome_II2_Part_3.pdf already downloaded\nBxl_1969_Tome_II2_Part_4.pdf already downloaded\nBxl_1969_Tome_II2_Part_5.pdf already downloaded\nBxl_1969_Tome_II2_Part_6.pdf already downloaded\nBxl_1969_Tome_II2_Part_7.pdf already downloaded\nBxl_1969_Tome_II2_Part_8.pdf already downloaded\nBxl_1969_Tome_II2_Part_9.pdf already downloaded\nBxl_1969_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1969_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1969_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1969_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1969_Tome_RptAn_Part_5.pdf already downloaded\nBxl_1969_Tome_RptAn_Part_6.pdf already downloaded\nBxl_1969_Tome_RptAn_Part_7.pdf already downloaded\nBxl_1970_Tome_I1_Part_1.pdf already downloaded\nBxl_1970_Tome_I1_Part_10.pdf already downloaded\nBxl_1970_Tome_I1_Part_11.pdf already downloaded\nBxl_1970_Tome_I1_Part_12.pdf already downloaded\nBxl_1970_Tome_I1_Part_13.pdf already downloaded\nBxl_1970_Tome_I1_Part_2.pdf already downloaded\nBxl_1970_Tome_I1_Part_3.pdf already downloaded\nBxl_1970_Tome_I1_Part_4.pdf already downloaded\nBxl_1970_Tome_I1_Part_5.pdf already downloaded\nBxl_1970_Tome_I1_Part_6.pdf already downloaded\nBxl_1970_Tome_I1_Part_7.pdf already downloaded\nBxl_1970_Tome_I1_Part_8.pdf already downloaded\nBxl_1970_Tome_I1_Part_9.pdf already downloaded\nBxl_1970_Tome_I2_Part_1.pdf already downloaded\nBxl_1970_Tome_I2_Part_10.pdf already downloaded\nBxl_1970_Tome_I2_Part_11.pdf already downloaded\nBxl_1970_Tome_I2_Part_12.pdf already downloaded\nBxl_1970_Tome_I2_Part_13.pdf already downloaded\nBxl_1970_Tome_I2_Part_14.pdf already downloaded\nBxl_1970_Tome_I2_Part_2.pdf already downloaded\nBxl_1970_Tome_I2_Part_3.pdf already downloaded\nBxl_1970_Tome_I2_Part_4.pdf already downloaded\nBxl_1970_Tome_I2_Part_5.pdf already downloaded\nBxl_1970_Tome_I2_Part_6.pdf already downloaded\nBxl_1970_Tome_I2_Part_7.pdf already downloaded\nBxl_1970_Tome_I2_Part_8.pdf 
already downloaded\nBxl_1970_Tome_I2_Part_9.pdf already downloaded\nBxl_1970_Tome_II1_Part_1.pdf already downloaded\nBxl_1970_Tome_II1_Part_10.pdf already downloaded\nBxl_1970_Tome_II1_Part_11.pdf already downloaded\nBxl_1970_Tome_II1_Part_2.pdf already downloaded\nBxl_1970_Tome_II1_Part_3.pdf already downloaded\nBxl_1970_Tome_II1_Part_4.pdf already downloaded\nBxl_1970_Tome_II1_Part_5.pdf already downloaded\nBxl_1970_Tome_II1_Part_6.pdf already downloaded\nBxl_1970_Tome_II1_Part_7.pdf already downloaded\nBxl_1970_Tome_II1_Part_8.pdf already downloaded\nBxl_1970_Tome_II1_Part_9.pdf already downloaded\nBxl_1970_Tome_II2_Part_1.pdf already downloaded\nBxl_1970_Tome_II2_Part_10.pdf already downloaded\nBxl_1970_Tome_II2_Part_2.pdf already downloaded\nBxl_1970_Tome_II2_Part_3.pdf already downloaded\nBxl_1970_Tome_II2_Part_4.pdf already downloaded\nBxl_1970_Tome_II2_Part_5.pdf already downloaded\nBxl_1970_Tome_II2_Part_6.pdf already downloaded\nBxl_1970_Tome_II2_Part_7.pdf already downloaded\nBxl_1970_Tome_II2_Part_8.pdf already downloaded\nBxl_1970_Tome_II2_Part_9.pdf already downloaded\nBxl_1970_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1970_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1970_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1970_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1970_Tome_RptAn_Part_5.pdf already downloaded\nBxl_1970_Tome_RptAn_Part_6.pdf already downloaded\nBxl_1970_Tome_RptAn_Part_7.pdf already downloaded\nBxl_1971_Tome_II1_Part_1.pdf already downloaded\nBxl_1971_Tome_II1_Part_10.pdf already downloaded\nBxl_1971_Tome_II1_Part_11.pdf already downloaded\nBxl_1971_Tome_II1_Part_12.pdf already downloaded\nBxl_1971_Tome_II1_Part_2.pdf already downloaded\nBxl_1971_Tome_II1_Part_3.pdf already downloaded\nBxl_1971_Tome_II1_Part_4.pdf already downloaded\nBxl_1971_Tome_II1_Part_5.pdf already downloaded\nBxl_1971_Tome_II1_Part_6.pdf already downloaded\nBxl_1971_Tome_II1_Part_7.pdf already downloaded\nBxl_1971_Tome_II1_Part_8.pdf already downloaded\nBxl_1971_Tome_II1_Part_9.pdf already downloaded\nBxl_1971_Tome_I_Part_1.pdf already downloaded\nBxl_1971_Tome_I_Part_10.pdf already downloaded\nBxl_1971_Tome_I_Part_11.pdf already downloaded\nBxl_1971_Tome_I_Part_12.pdf already downloaded\nBxl_1971_Tome_I_Part_13.pdf already downloaded\nBxl_1971_Tome_I_Part_14.pdf already downloaded\nBxl_1971_Tome_I_Part_2.pdf already downloaded\nBxl_1971_Tome_I_Part_3.pdf already downloaded\nBxl_1971_Tome_I_Part_4.pdf already downloaded\nBxl_1971_Tome_I_Part_5.pdf already downloaded\nBxl_1971_Tome_I_Part_6.pdf already downloaded\nBxl_1971_Tome_I_Part_7.pdf already downloaded\nBxl_1971_Tome_I_Part_8.pdf already downloaded\nBxl_1971_Tome_I_Part_9.pdf already downloaded\nBxl_1971_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1971_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1971_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1971_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1971_Tome_RptAn_Part_5.pdf already downloaded\nBxl_1971_Tome_RptAn_Part_6.pdf already downloaded\nBxl_1971_Tome_RptAn_Part_7.pdf already downloaded\nBxl_1972_Tome_I1_Part_1.pdf already downloaded\nBxl_1972_Tome_I1_Part_10.pdf already downloaded\nBxl_1972_Tome_I1_Part_11.pdf already downloaded\nBxl_1972_Tome_I1_Part_2.pdf already downloaded\nBxl_1972_Tome_I1_Part_3.pdf already downloaded\nBxl_1972_Tome_I1_Part_4.pdf already downloaded\nBxl_1972_Tome_I1_Part_5.pdf already downloaded\nBxl_1972_Tome_I1_Part_6.pdf already downloaded\nBxl_1972_Tome_I1_Part_7.pdf already downloaded\nBxl_1972_Tome_I1_Part_8.pdf already 
downloaded\nBxl_1972_Tome_I1_Part_9.pdf already downloaded\nBxl_1972_Tome_I2_Part_1.pdf already downloaded\nBxl_1972_Tome_I2_Part_2.pdf already downloaded\nBxl_1972_Tome_I2_Part_3.pdf already downloaded\nBxl_1972_Tome_I2_Part_4.pdf already downloaded\nBxl_1972_Tome_I2_Part_5.pdf already downloaded\nBxl_1972_Tome_I2_Part_6.pdf already downloaded\nBxl_1972_Tome_I2_Part_7.pdf already downloaded\nBxl_1972_Tome_I2_Part_8.pdf already downloaded\nBxl_1972_Tome_I2_Part_9.pdf already downloaded\nBxl_1972_Tome_II1_Part_1.pdf already downloaded\nBxl_1972_Tome_II1_Part_10.pdf already downloaded\nBxl_1972_Tome_II1_Part_11.pdf already downloaded\nBxl_1972_Tome_II1_Part_12.pdf already downloaded\nBxl_1972_Tome_II1_Part_13.pdf already downloaded\nBxl_1972_Tome_II1_Part_2.pdf already downloaded\nBxl_1972_Tome_II1_Part_3.pdf already downloaded\nBxl_1972_Tome_II1_Part_4.pdf already downloaded\nBxl_1972_Tome_II1_Part_5.pdf already downloaded\nBxl_1972_Tome_II1_Part_6.pdf already downloaded\nBxl_1972_Tome_II1_Part_7.pdf already downloaded\nBxl_1972_Tome_II1_Part_8.pdf already downloaded\nBxl_1972_Tome_II1_Part_9.pdf already downloaded\nBxl_1972_Tome_II2_Part_1.pdf already downloaded\nBxl_1972_Tome_II2_Part_10.pdf already downloaded\nBxl_1972_Tome_II2_Part_11.pdf already downloaded\nBxl_1972_Tome_II2_Part_12.pdf already downloaded\nBxl_1972_Tome_II2_Part_13.pdf already downloaded\nBxl_1972_Tome_II2_Part_2.pdf already downloaded\nBxl_1972_Tome_II2_Part_3.pdf already downloaded\nBxl_1972_Tome_II2_Part_4.pdf already downloaded\nBxl_1972_Tome_II2_Part_5.pdf already downloaded\nBxl_1972_Tome_II2_Part_6.pdf already downloaded\nBxl_1972_Tome_II2_Part_7.pdf already downloaded\nBxl_1972_Tome_II2_Part_8.pdf already downloaded\nBxl_1972_Tome_II2_Part_9.pdf already downloaded\nBxl_1972_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1972_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1972_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1972_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1972_Tome_RptAn_Part_5.pdf already downloaded\nBxl_1972_Tome_RptAn_Part_6.pdf already downloaded\nBxl_1972_Tome_RptAn_Part_7.pdf already downloaded\nBxl_1973_Tome_II1_Part_1.pdf already downloaded\nBxl_1973_Tome_II1_Part_10.pdf already downloaded\nBxl_1973_Tome_II1_Part_11.pdf already downloaded\nBxl_1973_Tome_II1_Part_12.pdf already downloaded\nBxl_1973_Tome_II1_Part_13.pdf already downloaded\nBxl_1973_Tome_II1_Part_2.pdf already downloaded\nBxl_1973_Tome_II1_Part_3.pdf already downloaded\nBxl_1973_Tome_II1_Part_4.pdf already downloaded\nBxl_1973_Tome_II1_Part_5.pdf already downloaded\nBxl_1973_Tome_II1_Part_6.pdf already downloaded\nBxl_1973_Tome_II1_Part_7.pdf already downloaded\nBxl_1973_Tome_II1_Part_8.pdf already downloaded\nBxl_1973_Tome_II1_Part_9.pdf already downloaded\nBxl_1973_Tome_II2_Part_1.pdf already downloaded\nBxl_1973_Tome_II2_Part_10.pdf already downloaded\nBxl_1973_Tome_II2_Part_11.pdf already downloaded\nBxl_1973_Tome_II2_Part_12.pdf already downloaded\nBxl_1973_Tome_II2_Part_13.pdf already downloaded\nBxl_1973_Tome_II2_Part_14.pdf already downloaded\nBxl_1973_Tome_II2_Part_15.pdf already downloaded\nBxl_1973_Tome_II2_Part_2.pdf already downloaded\nBxl_1973_Tome_II2_Part_3.pdf already downloaded\nBxl_1973_Tome_II2_Part_4.pdf already downloaded\nBxl_1973_Tome_II2_Part_5.pdf already downloaded\nBxl_1973_Tome_II2_Part_6.pdf already downloaded\nBxl_1973_Tome_II2_Part_7.pdf already downloaded\nBxl_1973_Tome_II2_Part_8.pdf already downloaded\nBxl_1973_Tome_II2_Part_9.pdf already downloaded\nBxl_1973_Tome_I_Part_1.pdf already 
downloaded\nBxl_1973_Tome_I_Part_10.pdf already downloaded\nBxl_1973_Tome_I_Part_11.pdf already downloaded\nBxl_1973_Tome_I_Part_12.pdf already downloaded\nBxl_1973_Tome_I_Part_13.pdf already downloaded\nBxl_1973_Tome_I_Part_14.pdf already downloaded\nBxl_1973_Tome_I_Part_2.pdf already downloaded\nBxl_1973_Tome_I_Part_3.pdf already downloaded\nBxl_1973_Tome_I_Part_4.pdf already downloaded\nBxl_1973_Tome_I_Part_5.pdf already downloaded\nBxl_1973_Tome_I_Part_6.pdf already downloaded\nBxl_1973_Tome_I_Part_7.pdf already downloaded\nBxl_1973_Tome_I_Part_8.pdf already downloaded\nBxl_1973_Tome_I_Part_9.pdf already downloaded\nBxl_1973_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1973_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1973_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1973_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1974_Tome_II1_Part_1.pdf already downloaded\nBxl_1974_Tome_II1_Part_10.pdf already downloaded\nBxl_1974_Tome_II1_Part_11.pdf already downloaded\nBxl_1974_Tome_II1_Part_12.pdf already downloaded\nBxl_1974_Tome_II1_Part_13.pdf already downloaded\nBxl_1974_Tome_II1_Part_2.pdf already downloaded\nBxl_1974_Tome_II1_Part_3.pdf already downloaded\nBxl_1974_Tome_II1_Part_4.pdf already downloaded\nBxl_1974_Tome_II1_Part_5.pdf already downloaded\nBxl_1974_Tome_II1_Part_6.pdf already downloaded\nBxl_1974_Tome_II1_Part_7.pdf already downloaded\nBxl_1974_Tome_II1_Part_8.pdf already downloaded\nBxl_1974_Tome_II1_Part_9.pdf already downloaded\nBxl_1974_Tome_II2_Part_1.pdf already downloaded\nBxl_1974_Tome_II2_Part_10.pdf already downloaded\nBxl_1974_Tome_II2_Part_11.pdf already downloaded\nBxl_1974_Tome_II2_Part_2.pdf already downloaded\nBxl_1974_Tome_II2_Part_3.pdf already downloaded\nBxl_1974_Tome_II2_Part_4.pdf already downloaded\nBxl_1974_Tome_II2_Part_5.pdf already downloaded\nBxl_1974_Tome_II2_Part_6.pdf already downloaded\nBxl_1974_Tome_II2_Part_7.pdf already downloaded\nBxl_1974_Tome_II2_Part_8.pdf already downloaded\nBxl_1974_Tome_II2_Part_9.pdf already downloaded\nBxl_1974_Tome_I_Part_1.pdf already downloaded\nBxl_1974_Tome_I_Part_10.pdf already downloaded\nBxl_1974_Tome_I_Part_11.pdf already downloaded\nBxl_1974_Tome_I_Part_12.pdf already downloaded\nBxl_1974_Tome_I_Part_13.pdf already downloaded\nBxl_1974_Tome_I_Part_14.pdf already downloaded\nBxl_1974_Tome_I_Part_15.pdf already downloaded\nBxl_1974_Tome_I_Part_16.pdf already downloaded\nBxl_1974_Tome_I_Part_2.pdf already downloaded\nBxl_1974_Tome_I_Part_3.pdf already downloaded\nBxl_1974_Tome_I_Part_4.pdf already downloaded\nBxl_1974_Tome_I_Part_5.pdf already downloaded\nBxl_1974_Tome_I_Part_6.pdf already downloaded\nBxl_1974_Tome_I_Part_7.pdf already downloaded\nBxl_1974_Tome_I_Part_8.pdf already downloaded\nBxl_1974_Tome_I_Part_9.pdf already downloaded\nBxl_1974_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1974_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1974_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1974_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1975_Tome_I1_Part_1.pdf already downloaded\nBxl_1975_Tome_I1_Part_2.pdf already downloaded\nBxl_1975_Tome_I1_Part_3.pdf already downloaded\nBxl_1975_Tome_I1_Part_4.pdf already downloaded\nBxl_1975_Tome_I1_Part_5.pdf already downloaded\nBxl_1975_Tome_I1_Part_6.pdf already downloaded\nBxl_1975_Tome_I1_Part_7.pdf already downloaded\nBxl_1975_Tome_I2_Part_1.pdf already downloaded\nBxl_1975_Tome_I2_Part_2.pdf already downloaded\nBxl_1975_Tome_I2_Part_3.pdf already downloaded\nBxl_1975_Tome_I2_Part_4.pdf already downloaded\nBxl_1975_Tome_I2_Part_5.pdf already 
downloaded\nBxl_1975_Tome_I2_Part_6.pdf already downloaded\nBxl_1975_Tome_I2_Part_7.pdf already downloaded\nBxl_1975_Tome_I2_Part_8.pdf already downloaded\nBxl_1975_Tome_I2_Part_9.pdf already downloaded\nBxl_1975_Tome_II1_Part_1.pdf already downloaded\nBxl_1975_Tome_II1_Part_10.pdf already downloaded\nBxl_1975_Tome_II1_Part_2.pdf already downloaded\nBxl_1975_Tome_II1_Part_3.pdf already downloaded\nBxl_1975_Tome_II1_Part_4.pdf already downloaded\nBxl_1975_Tome_II1_Part_5.pdf already downloaded\nBxl_1975_Tome_II1_Part_6.pdf already downloaded\nBxl_1975_Tome_II1_Part_7.pdf already downloaded\nBxl_1975_Tome_II1_Part_8.pdf already downloaded\nBxl_1975_Tome_II1_Part_9.pdf already downloaded\nBxl_1975_Tome_II2_Part_1.pdf already downloaded\nBxl_1975_Tome_II2_Part_10.pdf already downloaded\nBxl_1975_Tome_II2_Part_2.pdf already downloaded\nBxl_1975_Tome_II2_Part_3.pdf already downloaded\nBxl_1975_Tome_II2_Part_4.pdf already downloaded\nBxl_1975_Tome_II2_Part_5.pdf already downloaded\nBxl_1975_Tome_II2_Part_6.pdf already downloaded\nBxl_1975_Tome_II2_Part_7.pdf already downloaded\nBxl_1975_Tome_II2_Part_8.pdf already downloaded\nBxl_1975_Tome_II2_Part_9.pdf already downloaded\nBxl_1975_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1975_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1975_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1975_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1976_Tome_I1_Part_1.pdf already downloaded\nBxl_1976_Tome_I1_Part_2.pdf already downloaded\nBxl_1976_Tome_I1_Part_3.pdf already downloaded\nBxl_1976_Tome_I1_Part_4.pdf already downloaded\nBxl_1976_Tome_I1_Part_5.pdf already downloaded\nBxl_1976_Tome_I1_Part_6.pdf already downloaded\nBxl_1976_Tome_I1_Part_7.pdf already downloaded\nBxl_1976_Tome_I2_Part_1.pdf already downloaded\nBxl_1976_Tome_I2_Part_2.pdf already downloaded\nBxl_1976_Tome_I2_Part_3.pdf already downloaded\nBxl_1976_Tome_I2_Part_4.pdf already downloaded\nBxl_1976_Tome_I2_Part_5.pdf already downloaded\nBxl_1976_Tome_I2_Part_6.pdf already downloaded\nBxl_1976_Tome_I2_Part_7.pdf already downloaded\nBxl_1976_Tome_II1_Part_1.pdf already downloaded\nBxl_1976_Tome_II1_Part_10.pdf already downloaded\nBxl_1976_Tome_II1_Part_11.pdf already downloaded\nBxl_1976_Tome_II1_Part_12.pdf already downloaded\nBxl_1976_Tome_II1_Part_2.pdf already downloaded\nBxl_1976_Tome_II1_Part_3.pdf already downloaded\nBxl_1976_Tome_II1_Part_4.pdf already downloaded\nBxl_1976_Tome_II1_Part_5.pdf already downloaded\nBxl_1976_Tome_II1_Part_6.pdf already downloaded\nBxl_1976_Tome_II1_Part_7.pdf already downloaded\nBxl_1976_Tome_II1_Part_8.pdf already downloaded\nBxl_1976_Tome_II1_Part_9.pdf already downloaded\nBxl_1976_Tome_II2_Part_1.pdf already downloaded\nBxl_1976_Tome_II2_Part_2.pdf already downloaded\nBxl_1976_Tome_II2_Part_3.pdf already downloaded\nBxl_1976_Tome_II2_Part_4.pdf already downloaded\nBxl_1976_Tome_II2_Part_5.pdf already downloaded\nBxl_1976_Tome_II2_Part_6.pdf already downloaded\nBxl_1976_Tome_II2_Part_7.pdf already downloaded\nBxl_1976_Tome_II2_Part_8.pdf already downloaded\nBxl_1976_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1976_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1976_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1976_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1977_Tome_I1_Part_1.pdf already downloaded\nBxl_1977_Tome_I1_Part_2.pdf already downloaded\nBxl_1977_Tome_I1_Part_3.pdf already downloaded\nBxl_1977_Tome_I1_Part_4.pdf already downloaded\nBxl_1977_Tome_I1_Part_5.pdf already downloaded\nBxl_1977_Tome_I1_Part_6.pdf already 
downloaded\nBxl_1977_Tome_I1_Part_7.pdf already downloaded\nBxl_1977_Tome_I1_Part_8.pdf already downloaded\nBxl_1977_Tome_I2_Part_1.pdf already downloaded\nBxl_1977_Tome_I2_Part_2.pdf already downloaded\nBxl_1977_Tome_I2_Part_3.pdf already downloaded\nBxl_1977_Tome_I2_Part_4.pdf already downloaded\nBxl_1977_Tome_I2_Part_5.pdf already downloaded\nBxl_1977_Tome_I2_Part_6.pdf already downloaded\nBxl_1977_Tome_I2_Part_7.pdf already downloaded\nBxl_1977_Tome_I2_Part_8.pdf already downloaded\nBxl_1977_Tome_I2_Part_9.pdf already downloaded\nBxl_1977_Tome_II1_Part_1.pdf already downloaded\nBxl_1977_Tome_II1_Part_2.pdf already downloaded\nBxl_1977_Tome_II1_Part_3.pdf already downloaded\nBxl_1977_Tome_II1_Part_4.pdf already downloaded\nBxl_1977_Tome_II1_Part_5.pdf already downloaded\nBxl_1977_Tome_II1_Part_6.pdf already downloaded\nBxl_1977_Tome_II1_Part_7.pdf already downloaded\nBxl_1977_Tome_II1_Part_8.pdf already downloaded\nBxl_1977_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1977_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1977_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1977_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1978_Tome_I2_Part_1.pdf already downloaded\nBxl_1978_Tome_I2_Part_2.pdf already downloaded\nBxl_1978_Tome_I2_Part_3.pdf already downloaded\nBxl_1978_Tome_I2_Part_4.pdf already downloaded\nBxl_1978_Tome_I2_Part_5.pdf already downloaded\nBxl_1978_Tome_I2_Part_6.pdf already downloaded\nBxl_1978_Tome_I2_Part_7.pdf already downloaded\nBxl_1978_Tome_I2_Part_8.pdf already downloaded\nBxl_1978_Tome_I2_Part_9.pdf already downloaded\nBxl_1978_Tome_II1_Part_1.pdf already downloaded\nBxl_1978_Tome_II1_Part_10.pdf already downloaded\nBxl_1978_Tome_II1_Part_2.pdf already downloaded\nBxl_1978_Tome_II1_Part_3.pdf already downloaded\nBxl_1978_Tome_II1_Part_4.pdf already downloaded\nBxl_1978_Tome_II1_Part_5.pdf already downloaded\nBxl_1978_Tome_II1_Part_6.pdf already downloaded\nBxl_1978_Tome_II1_Part_7.pdf already downloaded\nBxl_1978_Tome_II1_Part_8.pdf already downloaded\nBxl_1978_Tome_II1_Part_9.pdf already downloaded\nBxl_1978_Tome_I_Part_1.pdf already downloaded\nBxl_1978_Tome_I_Part_2.pdf already downloaded\nBxl_1978_Tome_I_Part_3.pdf already downloaded\nBxl_1978_Tome_I_Part_4.pdf already downloaded\nBxl_1978_Tome_I_Part_5.pdf already downloaded\nBxl_1978_Tome_I_Part_6.pdf already downloaded\nBxl_1978_Tome_I_Part_7.pdf already downloaded\nBxl_1978_Tome_I_Part_8.pdf already downloaded\nBxl_1978_Tome_RptAn_Part_1.pdf already downloaded\nBxl_1978_Tome_RptAn_Part_2.pdf already downloaded\nBxl_1978_Tome_RptAn_Part_3.pdf already downloaded\nBxl_1978_Tome_RptAn_Part_4.pdf already downloaded\nBxl_1978_Tome_RptAn_Part_5.pdf already downloaded\nDone\n" ] ], [ [ "## Vérifier que tous les PDFs ont été téléchargés\n", "_____no_output_____" ], [ "Si ce n'est pas le cas, vous pouvez relancer l'étape de téléchargement (elle ignorera les documents déjà téléchargés)", "_____no_output_____" ] ], [ [ "ok_count = 0\nfor url in urls:\n filename = url.split(\"/\")[-1]\n downloads = os.listdir(pdf_path)\n if filename not in downloads:\n print(f\"{filename} is missing!\")\n else:\n ok_count += 1\nprint(f\"{ok_count} PDFs found on {len(urls)}!\")", "2833 PDFs found on 2833!\n" ] ], [ [ "## Pour en savoir plus", "_____no_output_____" ], [ "- Le web scraping avec Python: https://realpython.com/beautiful-soup-web-scraper-python/\n- Tutoriel sur les expressions régulières: https://www.w3schools.com/python/python_regex.asp", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7262d94fe739a14260b114246be7e2c1b0e24e9
70,053
ipynb
Jupyter Notebook
final_model.ipynb
ywigelman/TheCatheterNinjas
1d49bda790a3dabbef9838fa8dd897f443a5e13a
[ "MIT" ]
null
null
null
final_model.ipynb
ywigelman/TheCatheterNinjas
1d49bda790a3dabbef9838fa8dd897f443a5e13a
[ "MIT" ]
null
null
null
final_model.ipynb
ywigelman/TheCatheterNinjas
1d49bda790a3dabbef9838fa8dd897f443a5e13a
[ "MIT" ]
1
2021-01-12T07:19:12.000Z
2021-01-12T07:19:12.000Z
75.569579
44,020
0.799809
[ [ [ "import os\nif 'google.colab' in str(get_ipython()):\n from google.colab import drive\n print('Running on CoLab')\n colab = True\n drive.mount('/content/gdrive')\n ! cp /content/gdrive/MyDrive/TheCatheterNinjas/kaggle.json .\nelse:\n colab = False\n print('Not running on CoLab')", "Running on CoLab\nMounted at /content/gdrive\n" ], [ "if colab:\n if not os.path.exists('./data/train.csv'):\n ! pip install --upgrade --force-reinstall --no-deps kaggle\n ! mkdir /root/.kaggle\n ! cp kaggle.json /root/.kaggle/\n ! chmod 600 /root/.kaggle/kaggle.json\n ! kaggle competitions download -c ranzcr-clip-catheter-line-classification -p ./\n ! unzip -qq ./'*.zip' -d ./\n ! rm ./'ranzcr-clip-catheter-line-classification.zip'\n print('files are ready')\n else:\n print('files are ready')", "Collecting kaggle\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/99/33/365c0d13f07a2a54744d027fe20b60dacdfdfb33bc04746db6ad0b79340b/kaggle-1.5.10.tar.gz (59kB)\n\r\u001b[K |█████▌ | 10kB 23.9MB/s eta 0:00:01\r\u001b[K |███████████ | 20kB 15.3MB/s eta 0:00:01\r\u001b[K |████████████████▋ | 30kB 13.2MB/s eta 0:00:01\r\u001b[K |██████████████████████▏ | 40kB 12.2MB/s eta 0:00:01\r\u001b[K |███████████████████████████▊ | 51kB 9.0MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 61kB 5.4MB/s \n\u001b[?25hBuilding wheels for collected packages: kaggle\n Building wheel for kaggle (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for kaggle: filename=kaggle-1.5.10-cp36-none-any.whl size=73269 sha256=4c1102a715c20d558288385fcaff15daeab6a2c4635579ff8f3c2885e8b18bbe\n Stored in directory: /root/.cache/pip/wheels/3a/d1/7e/6ce09b72b770149802c653a02783821629146983ee5a360f10\nSuccessfully built kaggle\nInstalling collected packages: kaggle\n Found existing installation: kaggle 1.5.10\n Uninstalling kaggle-1.5.10:\n Successfully uninstalled kaggle-1.5.10\nSuccessfully installed kaggle-1.5.10\nDownloading ranzcr-clip-catheter-line-classification.zip to .\n100% 11.7G/11.7G [04:43<00:00, 35.7MB/s]\n100% 11.7G/11.7G [04:43<00:00, 44.4MB/s]\nfiles are ready\n" ] ], [ [ "# Setup\n\n## import libraries and modules", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras.applications import InceptionResNetV2\nfrom tensorflow.keras.metrics import AUC\nimport warnings\nimport numpy as np\nfrom pathlib import Path\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import GroupShuffleSplit\nfrom tensorflow.keras.optimizers import Nadam\nfrom subprocess import run\nimport cv2\nwarnings.filterwarnings(\"ignore\")\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## make sure GPU are used", "_____no_output_____" ] ], [ [ "for device in tf.config.experimental.list_physical_devices('GPU'):\n tf.config.experimental.set_memory_growth(device, True)", "_____no_output_____" ] ], [ [ "## checking GPUs", "_____no_output_____" ] ], [ [ "tf.config.list_physical_devices()", "_____no_output_____" ] ], [ [ "## functions", "_____no_output_____" ] ], [ [ "def add_suffix(df: pd.DataFrame, col: str, suffix: str):\n df[col] = df[col] + suffix\n return df\n\n\ndef plot_model_history(history):\n sns.set_style('whitegrid')\n fig, ax = plt.subplots(figsize=(15,7))\n for metric, values in history.history.items():\n ax.plot(np.arange(1, len(values) + 1), values, 'o-', label=metric)\n ax.set(xlabel='Epochs', 
ylabel='Rate %', \n title='Model Trainning Metrics Per Epoch', \n xlim=(1, len(values) + 0.2))\n sns.despine()\n plt.legend(loc='best')\n plt.show() \n \n\ndef canny_edge_detection(example):\n img = cv2.cvtColor(example.astype(np.uint8), cv2.COLOR_BGR2GRAY)\n return cv2.Canny(img, 50, 100)\n\ndef canny_over_clahe(example):\n img = cv2.cvtColor(example.astype(np.uint8), cv2.COLOR_BGR2GRAY)\n high_contrast = clahe.apply(img)\n return cv2.Canny(high_contrast, 50, 100)\n\ndef enhanced_channels(example):\n img = cv2.cvtColor(example.astype(np.uint8), cv2.COLOR_BGR2GRAY)\n high_contrast = clahe.apply(img)\n edges = cv2.Canny(high_contrast, 50, 100)\n return np.dstack((img, high_contrast, edges))", "_____no_output_____" ] ], [ [ "## Constants", "_____no_output_____" ] ], [ [ "LR = 0.0005\n\n# data frame columns\n\nID: str = 'PatientID'\nCASE: str = 'StudyInstanceUID'\n \n# paths and directories\n\nTRAIN_META: str = 'train.csv'\nTRAIN_IMG_DIR: Path = Path('train')\nMODEL_OUT: str = 'InceptionRN2_12b_5p_nadam_lr0005_499_rot_005'\nPREDICTION_OUTPUT = 'predictions.csv'\nCLASS_MODE: str = 'raw'\nRSCL: float = 1./255.\nGEN_PROPERTIES: dict = {'featurewise_center':False,'samplewise_center':False,\n 'featurewise_std_normalization':False, \n 'samplewise_std_normalization':False,\n 'zca_whitening':False, 'zca_epsilon':1e-06,\n 'rotation_range':0.1, 'width_shift_range':0.0, \n 'height_shift_range':0.0, 'brightness_range':None,\n 'shear_range':0.0, 'zoom_range':0.0, \n 'channel_shift_range':0.0, 'fill_mode':\"constant\", \n 'cval':0.0, 'horizontal_flip':False, \n 'vertical_flip':False, 'rescale':RSCL, \n 'preprocessing_function':None, 'data_format':None, \n 'validation_split':0.0, 'dtype':None}\n# split and generators \n\nSUFFIX: str = '.jpg'\nN_SPLITS: int = 1\nPATIENCE: int = 3\nTARGET_SIZE: tuple = (480,480)\n\n# model parameters \nTRANSFER = InceptionResNetV2\nTRAINABLE = True\nTRANSFER_WEIGHTS = 'imagenet'\nIMG_SIZE: tuple = (480, 480, 3)\nOUT_ACTIVATION = 'sigmoid'\nACTIVATION = 'relu'\nN_CLASS = 11\nDENSE_DIM = 128\nFLT = tf.keras.layers.Flatten(name='flat')\nDNS = tf.keras.layers.Dense(DENSE_DIM, activation=ACTIVATION, name='dense')\nOUT = tf.keras.layers.Dense(N_CLASS, activation=OUT_ACTIVATION, name='output')\nLAYERS = [FLT, DNS, OUT] \n\nTRANSFER_PARAMETERS: dict = {'include_top': False, 'input_shape': IMG_SIZE, \n 'weights': TRANSFER_WEIGHTS}\n\nMONITOR = 'val_loss'\nMETRICS=AUC(multi_label=True)\nOPTIMIZER = Nadam(learning_rate=LR)\nLOSS = 'binary_crossentropy'\nEPOCHS = 100\nBATCH_SIZE = 12\n\nRANDOM_STATE = 42\nVALIDATION_SIZE = 0.2\n\nnp.random.seed(RANDOM_STATE)", "_____no_output_____" ] ], [ [ "# Split", "_____no_output_____" ], [ "## load meta files", "_____no_output_____" ] ], [ [ "train_df = pd.read_csv(TRAIN_META)", "_____no_output_____" ] ], [ [ "## define labels", "_____no_output_____" ] ], [ [ "labels = train_df.select_dtypes(int).columns", "_____no_output_____" ] ], [ [ "## group split", "_____no_output_____" ] ], [ [ "train_inds, val_inds = next(GroupShuffleSplit(test_size=VALIDATION_SIZE, n_splits=N_SPLITS, random_state = RANDOM_STATE).split(train_df, groups=train_df[ID]))\ntrain_df, valid_df = train_df.loc[train_inds], train_df.loc[val_inds]", "_____no_output_____" ] ], [ [ "## generators", "_____no_output_____" ] ], [ [ "datagen=ImageDataGenerator(**GEN_PROPERTIES)\n\ntrain_generator=datagen.flow_from_dataframe(dataframe=add_suffix(train_df, \n col=CASE, \n suffix=SUFFIX), \n directory=str(TRAIN_IMG_DIR),\n x_col=CASE,\n y_col=labels,\n batch_size=BATCH_SIZE,\n 
seed=RANDOM_STATE,\n shuffle=True,\n class_mode=CLASS_MODE,\n target_size=TARGET_SIZE)\n\n\nvalid_generator=datagen.flow_from_dataframe(dataframe=add_suffix(valid_df, \n col=CASE, \n suffix=SUFFIX), \n directory=str(TRAIN_IMG_DIR),\n x_col=CASE,\n y_col=labels,\n batch_size=BATCH_SIZE,\n seed=RANDOM_STATE,\n shuffle=True,\n class_mode=CLASS_MODE,\n target_size=TARGET_SIZE)", "Found 24833 validated image filenames.\nFound 5250 validated image filenames.\n" ] ], [ [ "# setting Model\n\n## get transfer model", "_____no_output_____" ] ], [ [ "transfer_model = TRANSFER(**TRANSFER_PARAMETERS)", "Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/inception_resnet_v2/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5\n219062272/219055592 [==============================] - 1s 0us/step\n" ] ], [ [ "## add layers", "_____no_output_____" ] ], [ [ "model = tf.keras.Sequential([transfer_model] + LAYERS)", "_____no_output_____" ] ], [ [ "## freeze transfer layers", "_____no_output_____" ] ], [ [ "transfer_model.trainable=TRAINABLE", "_____no_output_____" ] ], [ [ "## checkout summary", "_____no_output_____" ] ], [ [ "model.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninception_resnet_v2 (Functio (None, 13, 13, 1536) 54336736 \n_________________________________________________________________\nflat (Flatten) (None, 259584) 0 \n_________________________________________________________________\ndense (Dense) (None, 128) 33226880 \n_________________________________________________________________\noutput (Dense) (None, 11) 1419 \n=================================================================\nTotal params: 87,565,035\nTrainable params: 87,504,491\nNon-trainable params: 60,544\n_________________________________________________________________\n" ] ], [ [ "## compile model", "_____no_output_____" ] ], [ [ "model.compile(optimizer=OPTIMIZER, loss=LOSS, metrics=METRICS)", "_____no_output_____" ] ], [ [ "## callback", "_____no_output_____" ] ], [ [ "callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=PATIENCE, restore_best_weights=True)", "_____no_output_____" ] ], [ [ "# train", "_____no_output_____" ] ], [ [ "history = model.fit(train_generator, validation_data=valid_generator, steps_per_epoch=len(train_generator), validation_steps=len(valid_generator), epochs=EPOCHS, callbacks=callback)", "Epoch 1/100\n2070/2070 [==============================] - 3009s 1s/step - loss: 0.3221 - auc: 0.7383 - val_loss: 0.1905 - val_auc: 0.8405\nEpoch 2/100\n2070/2070 [==============================] - 3019s 1s/step - loss: 0.1790 - auc: 0.8827 - val_loss: 0.3415 - val_auc: 0.7043\nEpoch 3/100\n2070/2070 [==============================] - 3150s 2s/step - loss: 0.1597 - auc: 0.9091 - val_loss: 0.3327 - val_auc: 0.8104\nEpoch 4/100\n2070/2070 [==============================] - 3163s 2s/step - loss: 0.1492 - auc: 0.9166 - val_loss: 0.1645 - val_auc: 0.9032\nEpoch 5/100\n2070/2070 [==============================] - 3140s 2s/step - loss: 0.1412 - auc: 0.9305 - val_loss: 0.1498 - val_auc: 0.9166\nEpoch 6/100\n2070/2070 [==============================] - 3162s 2s/step - loss: 0.1231 - auc: 0.9443 - val_loss: 0.1515 - val_auc: 0.9158\nEpoch 7/100\n2070/2070 [==============================] - 3074s 1s/step - loss: 0.1027 - auc: 0.9628 - val_loss: 0.1654 - val_auc: 0.9071\nEpoch 8/100\n2070/2070 
[==============================] - 3114s 2s/step - loss: 0.0830 - auc: 0.9686 - val_loss: 0.1726 - val_auc: 0.8859\n" ] ], [ [ "### plot model history", "_____no_output_____" ] ], [ [ "plot_model_history(model.history)", "_____no_output_____" ] ], [ [ "# Save model", "_____no_output_____" ] ], [ [ "model.save(f'/content/gdrive/MyDrive/TheCatheterNinjas/saved_model2/{\"InceptionResNetV2_NF_12b_3p_Nadam_0005lr_480\"}')\n", "INFO:tensorflow:Assets written to: /content/gdrive/MyDrive/TheCatheterNinjas/saved_model2/InceptionResNetV2_NF_12b_3p_Nadam_0005lr_480/assets\n" ] ], [ [ "# Zip model and delete", "_____no_output_____" ] ], [ [ "zipped = '.'.join([MODEL_OUT, 'zip'])\nrun(['zip', '-r', zipped, MODEL_OUT])\nrun(['rm', '-r', MODEL_OUT])", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e726326a82526a7e5d30add9f204f053ed0a0d0e
2,104
ipynb
Jupyter Notebook
Chapter1.ipynb
WagdyIshac/Advanced-Analytics-with-Azure-Databricks
c1fb64d3b22bc953fe7d5fd5a47991723e4440af
[ "MIT" ]
null
null
null
Chapter1.ipynb
WagdyIshac/Advanced-Analytics-with-Azure-Databricks
c1fb64d3b22bc953fe7d5fd5a47991723e4440af
[ "MIT" ]
null
null
null
Chapter1.ipynb
WagdyIshac/Advanced-Analytics-with-Azure-Databricks
c1fb64d3b22bc953fe7d5fd5a47991723e4440af
[ "MIT" ]
null
null
null
2,104
2,104
0.653042
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7263e89200a36bdec740e5439a2d93885b7efad
401,295
ipynb
Jupyter Notebook
advanced_examples/850hPa_hgts_wind_temp.ipynb
dlnash/pyclivac
8ba30e5ab883f6f13105494615a433c6a04673c8
[ "MIT" ]
5
2019-11-21T15:24:09.000Z
2020-06-18T18:50:16.000Z
advanced_examples/850hPa_hgts_wind_temp.ipynb
CLIVAC/pyclivac
8ba30e5ab883f6f13105494615a433c6a04673c8
[ "MIT" ]
null
null
null
advanced_examples/850hPa_hgts_wind_temp.ipynb
CLIVAC/pyclivac
8ba30e5ab883f6f13105494615a433c6a04673c8
[ "MIT" ]
1
2021-12-06T23:01:03.000Z
2021-12-06T23:01:03.000Z
863
386,124
0.953588
[ [ [ "# 850-hPa Geopotential Heights, Temperature, & Winds", "_____no_output_____" ] ], [ [ "# Import Python modules\nimport os, sys\nimport numpy as np\nimport pandas as pd\nimport xarray as xr\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nfrom cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter\nfrom cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER\nimport matplotlib.ticker as mticker\nfrom matplotlib.gridspec import GridSpec\nimport metpy.calc as mpcalc\nfrom metpy.units import units\nimport cmocean\nimport cmocean.cm as cmo\nfrom scipy.ndimage import gaussian_filter", "_____no_output_____" ], [ "# Import my modules\nsys.path.append('/Users/tessamontini/repos/montini-phd/modules/')\n\n# Root directories\nsavedir = '/Users/tessamontini/Desktop/res/'\ndatadir = '/Users/tessamontini/Google_Drive/DATA/'", "_____no_output_____" ] ], [ [ "## Data Processing", "_____no_output_____" ], [ "### SALLJ days", "_____no_output_____" ] ], [ [ "# Daily SALLJ time series\ninfile = datadir + 'sallj/erai.llj.day.1979-2016.csv'\ndf = pd.read_csv(infile)\n\n# Create column `llj1` where: 0=No LLJ; 1=LLJ at SC or MA\nidx = (df['llj_sc'] > 0) | (df['llj_ma'] > 0)\ndf['llj1'] = 0\ndf.loc[idx, 'llj1'] = 1\n\n#df.head()", "_____no_output_____" ] ], [ [ "### ERA5 renalysis", "_____no_output_____" ] ], [ [ "# read in data files\nf1 = xr.open_dataset(datadir+'ERA5/day/era5_zg_850_day_1979-2016_spac.nc')\nf2 = xr.open_dataset(datadir+'ERA5/day/era5_ua_850_day_1979-2016_spac.nc')\nf3 = xr.open_dataset(datadir+'ERA5/day/era5_va_850_day_1979-2016_spac.nc')\nf4 = xr.open_dataset(datadir+'ERA5/day/era5_ta_850_day_1979-2016_spac.nc')\n\n# Merge variables into one dataset\nera = xr.merge([f1,f2,f3,f4])\nprint(era, '\\n')\n\n# Add LLJ time series to era dataset\nera['llj1'] = ('time', df.llj1)\nera = era.set_coords('llj1')\n\n# Subset dataset to only DJF LLJ days\nidx = (era.time.dt.season == 'DJF') & (era.llj1 >= 1)\nera_llj = era.sel(time=idx)\nprint(era_llj)", "<xarray.Dataset>\nDimensions: (bnds: 2, latitude: 161, longitude: 307, time: 13880)\nCoordinates:\n * longitude (longitude) float32 -165.0 -164.5 -164.0 ... -13.0 -12.5 -12.0\n * latitude (latitude) float32 20.0 19.5 19.0 18.5 ... -59.0 -59.5 -60.0\n * time (time) datetime64[ns] 1979-01-01T09:00:00 ... 2016-12-31T09:00:00\nDimensions without coordinates: bnds\nData variables:\n time_bnds (time, bnds) datetime64[ns] 1979-01-01 ... 2016-12-31T18:00:00\n zg (time, latitude, longitude) float32 ...\n ua (time, latitude, longitude) float32 ...\n va (time, latitude, longitude) float32 ...\n ta (time, latitude, longitude) float32 ... \n\n<xarray.Dataset>\nDimensions: (bnds: 2, latitude: 161, longitude: 307, time: 1184)\nCoordinates:\n * longitude (longitude) float32 -165.0 -164.5 -164.0 ... -13.0 -12.5 -12.0\n * latitude (latitude) float32 20.0 19.5 19.0 18.5 ... -59.0 -59.5 -60.0\n * time (time) datetime64[ns] 1979-01-01T09:00:00 ... 2016-12-31T09:00:00\n llj1 (time) int64 1 1 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1 1 1\nDimensions without coordinates: bnds\nData variables:\n time_bnds (time, bnds) datetime64[ns] 1979-01-01 ... 
2016-12-31T18:00:00\n zg (time, latitude, longitude) float32 ...\n ua (time, latitude, longitude) float32 ...\n va (time, latitude, longitude) float32 ...\n ta (time, latitude, longitude) float32 ...\n" ] ], [ [ "### Mask elevations > 1500 m", "_____no_output_____" ] ], [ [ "# ERA5 orography data\nfx = xr.open_dataset(datadir+'ERA5/era5_sfc_orog_spac.nc')\nfx = fx.isel(time=0, drop=True)\n\n# Add orog to era_llj dataset\nera_llj['orog'] = (('latitude','longitude'), fx.orog)\n\n# Mask elevations > 1500 m\nera_llj = era_llj.where(era_llj.orog < 1500)\nprint(era_llj)\n", "<xarray.Dataset>\nDimensions: (bnds: 2, latitude: 161, longitude: 307, time: 1184)\nCoordinates:\n * longitude (longitude) float32 -165.0 -164.5 -164.0 ... -13.0 -12.5 -12.0\n * latitude (latitude) float32 20.0 19.5 19.0 18.5 ... -59.0 -59.5 -60.0\n * time (time) datetime64[ns] 1979-01-01T09:00:00 ... 2016-12-31T09:00:00\n llj1 (time) int64 1 1 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1 1 1\nDimensions without coordinates: bnds\nData variables:\n time_bnds (time, bnds, latitude, longitude) datetime64[ns] 1979-01-01 ... 2016-12-31T18:00:00\n zg (time, latitude, longitude) float32 1515.5709 ... 1092.8843\n ua (time, latitude, longitude) float32 -6.8414273 ... -0.26343673\n va (time, latitude, longitude) float32 -5.2097526 ... 4.29512\n ta (time, latitude, longitude) float32 282.72244 ... 265.00314\n orog (latitude, longitude) float64 0.1355 -0.3303 ... -0.0974 -0.2527\n" ] ], [ [ "### SALLJ days by category", "_____no_output_____" ] ], [ [ "# Read in category data\ninfile = savedir + 'sallj_types_exp5.csv'\ndftype = pd.read_csv(infile)\n#dftype.head()\n\n# Add LLJ category labels to era_llj dataset\nera_llj['llj_types'] = ('time', dftype.llj_types)\nera_llj = era_llj.set_coords('llj_types')\nprint(era_llj)\n", "<xarray.Dataset>\nDimensions: (bnds: 2, latitude: 161, longitude: 307, time: 1184)\nCoordinates:\n * longitude (longitude) float32 -165.0 -164.5 -164.0 ... -13.0 -12.5 -12.0\n * latitude (latitude) float32 20.0 19.5 19.0 18.5 ... -59.0 -59.5 -60.0\n * time (time) datetime64[ns] 1979-01-01T09:00:00 ... 2016-12-31T09:00:00\n llj1 (time) int64 1 1 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1 1 1\n llj_types (time) int64 2 2 2 2 2 2 2 1 1 2 2 1 ... 2 2 1 3 3 1 2 3 3 1 3 3\nDimensions without coordinates: bnds\nData variables:\n time_bnds (time, bnds, latitude, longitude) datetime64[ns] 1979-01-01 ... 2016-12-31T18:00:00\n zg (time, latitude, longitude) float32 1515.5709 ... 1092.8843\n ua (time, latitude, longitude) float32 -6.8414273 ... -0.26343673\n va (time, latitude, longitude) float32 -5.2097526 ... 4.29512\n ta (time, latitude, longitude) float32 282.72244 ... 265.00314\n orog (latitude, longitude) float64 0.1355 -0.3303 ... 
-0.0974 -0.2527\n" ] ], [ [ "### Composites of SALLJ types", "_____no_output_____" ] ], [ [ "# calc field average for each LLJ category\nera_llj_comp = era_llj.groupby('llj_types').mean('time')\nprint(era_llj_comp)\n", "/Users/tessamontini/miniconda3/envs/montini-phd/lib/python3.7/site-packages/xarray/core/nanops.py:160: RuntimeWarning: Mean of empty slice\n  return np.nanmean(a, axis=axis, dtype=dtype)\n" ] ], [ [ "### Data for plotting", "_____no_output_____" ] ], [ [ "# Lat/Lon values\nlats = era_llj_comp.latitude.values\nlons = era_llj_comp.longitude.values\n\n# 850-hPa Heights\nhgts = era_llj_comp.zg.values\n\n# 850-hPa Winds\nuwnd = era_llj_comp.ua.values * units('m/s')\nvwnd = era_llj_comp.va.values * units('m/s')\nwspd = mpcalc.wind_speed(uwnd, vwnd)\nwspd_kt = wspd.to('kt')\n\n# 850-hPa Temperature\ntmpk = era_llj_comp.ta.values * units('K')\ntmpc = tmpk.to('degC')\n", "_____no_output_____" ], [ "# Print data ranges (min/max values)\nprint('hgts:', hgts.min(), hgts.max(), '\\n')\nprint('wspd:', wspd.min(), wspd.max(), '\\n')\nprint('wspd (kt):', wspd_kt.min(), wspd_kt.max(), '\\n')\nprint('tmpc:', tmpc.min(), tmpc.max(), '\\n')", "hgts: nan nan \n\nwspd: nan meter / second nan meter / second \n\nwspd (kt): nan knot nan knot \n\ntmpc: nan degC nan degC \n\n" ] ], [ [ "## Map Creation", "_____no_output_____" ] ], [ [ "# Figure set up\nnplots = 3\nnrows = 3\nncols = 1\n\n# Subplot labels\ntype_label = ['LLJ Type 1', 'LLJ Type 2', 'LLJ Type 3']\n\n# Set up projection\nmapcrs = ccrs.PlateCarree()\ndatacrs = ccrs.PlateCarree()\n\n# Create figure\nfig = plt.figure(figsize=(8,11))\ngs = GridSpec(3, 1, figure=fig, hspace=0.02, top=0.97, bottom=0.01)\n\n# Loop through each subplot \nfor k in range(nplots):\n    \n    # Add plot axes and set map extent\n    ax = fig.add_subplot(gs[k,0], projection=mapcrs)\n    ax.set_extent([-165, -12, -60, 20], crs=mapcrs)\n    \n    # Add map features (coastlines, country borders)\n    ax.add_feature(cfeature.COASTLINE, edgecolor='dimgrey')\n    ax.add_feature(cfeature.BORDERS, edgecolor='dimgrey')\n    ax.add_feature(cfeature.LAND, color='dimgrey')\n    \n    # Plot 850-hPa Temperature (color fill)\n    clevs_tmpc = np.arange(-24, 25, 2)\n    cf = ax.contourf(lons, lats, tmpc[k,:,:], transform=datacrs,\n                     levels=clevs_tmpc, cmap='coolwarm')\n    cb = plt.colorbar(cf, orientation='vertical', pad=0.02, shrink=0.8)\n    \n    # Geopotential height lines\n    #clevs_hgts = np.arange(1080,1280,12)\n    clevs_hgts = np.arange(0, 1800, 30)\n    cs = ax.contour(lons, lats, hgts[k,:,:], transform=datacrs,\n                    levels=clevs_hgts, colors='k', linewidths=1.1)\n    plt.clabel(cs, fmt='%d',fontsize=8.5)  \n    \n    # Wind barbs / vectors  \n    ax.quiver(lons, lats, uwnd[k,:,:], vwnd[k,:,:], transform=datacrs, \n              color='k', regrid_shape=20, pivot='middle')\n    \n    # subtitles\n    ax.set_title(type_label[k], loc='right')\n    ax.set_title('850 hPa Heights, Temperature, & Winds', loc='left')\n    \n    # Tick labels (no gridlines)\n    dx = np.arange(-180,31,30)\n    dy = np.arange(-60,61,30)\n    gl = ax.gridlines(crs=mapcrs, draw_labels=True)\n    gl.xlines = False\n    gl.ylines = False\n    gl.xlabels_top = False\n    gl.ylabels_right = False\n    gl.xlocator = mticker.FixedLocator(dx)\n    gl.ylocator = mticker.FixedLocator(dy)\n    gl.xformatter = LONGITUDE_FORMATTER\n    gl.yformatter = LATITUDE_FORMATTER\n\n    \n# Display figure\n#plt.savefig(savedir+'850-hgts-tmpc-wind.png', dpi=350)\nplt.show()\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e7265ae3779d74910d58fdb3814f9f35bb7ed760
48,044
ipynb
Jupyter Notebook
data/ETL_Pipeline_Preparation.ipynb
AntonioBauer/DisasterResponsePipelines
561c99e2f938f41ce857aec2e83846b399d2a8e6
[ "FTL", "CNRI-Python", "RSA-MD" ]
null
null
null
data/ETL_Pipeline_Preparation.ipynb
AntonioBauer/DisasterResponsePipelines
561c99e2f938f41ce857aec2e83846b399d2a8e6
[ "FTL", "CNRI-Python", "RSA-MD" ]
null
null
null
data/ETL_Pipeline_Preparation.ipynb
AntonioBauer/DisasterResponsePipelines
561c99e2f938f41ce857aec2e83846b399d2a8e6
[ "FTL", "CNRI-Python", "RSA-MD" ]
1
2021-01-23T14:53:52.000Z
2021-01-23T14:53:52.000Z
35.404569
292
0.414349
[ [ [ "# ETL Pipeline Preparation\nFollow the instructions below to help you create your ETL pipeline.\n### 1. Import libraries and load datasets.\n- Import Python libraries\n- Load `messages.csv` into a dataframe and inspect the first few lines.\n- Load `categories.csv` into a dataframe and inspect the first few lines.", "_____no_output_____" ] ], [ [ "# import libraries\nimport pandas as pd\nfrom sqlalchemy import create_engine", "_____no_output_____" ], [ "# load messages dataset into DataFrame messages\nmessages = pd.read_csv('messages.csv')\nmessages.head()", "_____no_output_____" ], [ "# load categories dataset into DataFrame categories\ncategories = pd.read_csv('categories.csv')\ncategories.head()", "_____no_output_____" ] ], [ [ "### 2. Merge datasets.\n- Merge the messages and categories datasets using the common id\n- Assign this combined dataset to `df`, which will be cleaned in the following steps", "_____no_output_____" ] ], [ [ "# merge datasets on id using an inner join\ndf = messages.merge(categories, how='inner', on='id')\ndf.head()", "_____no_output_____" ] ], [ [ "### 3. Split `categories` into separate category columns.\n- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.\n- Use the first row of categories dataframe to create column names for the categories data.\n- Rename columns of `categories` with new column names.", "_____no_output_____" ] ], [ [ "# create a DataFrame based on the 36 category columns\ncategories = df['categories'].str.split(';', expand=True)\ncategories.head()", "_____no_output_____" ], [ "# select the first row of the categories DataFrame\nrow = categories.iloc[0, :]\n# remove the last two characters of the values included in the first row\ncategory_colnames = row.str[:-2]\nprint(category_colnames)", "0 related\n1 request\n2 offer\n3 aid_related\n4 medical_help\n5 medical_products\n6 search_and_rescue\n7 security\n8 military\n9 child_alone\n10 water\n11 food\n12 shelter\n13 clothing\n14 money\n15 missing_people\n16 refugees\n17 death\n18 other_aid\n19 infrastructure_related\n20 transport\n21 buildings\n22 electricity\n23 tools\n24 hospitals\n25 shops\n26 aid_centers\n27 other_infrastructure\n28 weather_related\n29 floods\n30 storm\n31 fire\n32 earthquake\n33 cold\n34 other_weather\n35 direct_report\nName: 0, dtype: object\n" ], [ "# rename the headers\ncategories.columns = category_colnames\ncategories.head()", "_____no_output_____" ] ], [ [ "### 4. Convert category values to just numbers 0 or 1.\n- Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.\n- You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`.", "_____no_output_____" ] ], [ [ "for column in categories:\n # set each value according to its last character\n categories[column] = categories[column].apply(lambda x: x.split('-')[1])\n # change datatype of column from string to numeric\n categories[column] = pd.to_numeric(categories[column])", "_____no_output_____" ] ], [ [ "### 5. 
Replace `categories` column in `df` with new category columns.\n- Drop the categories column from the df dataframe since it is no longer needed.\n- Concatenate the df and categories DataFrames.", "_____no_output_____" ] ], [ [ "categories['related'].value_counts()", "_____no_output_____" ], [ "# replace the value 2 in column related with 1\ncategories['related'] = categories['related'].replace(2, 1)", "_____no_output_____" ], [ "categories['related'].value_counts()", "_____no_output_____" ], [ "# drop the categories column from DataFrame\ndf.drop('categories', axis=1, inplace=True)\ndf.head()", "_____no_output_____" ], [ "# concatenate DataFrames df and categories on columns\ndf = pd.concat([df, categories], axis=1)\ndf.head()", "_____no_output_____" ] ], [ [ "### 6. Remove duplicates.\n- Check how many duplicates are in this dataset.\n- Drop the duplicates.\n- Confirm duplicates were removed.", "_____no_output_____" ] ], [ [ "# check for duplicates\ndf.duplicated().sum()", "_____no_output_____" ], [ "# drop duplicates\ndf.drop_duplicates(inplace=True)", "_____no_output_____" ], [ "# check number of duplicates\ndf.duplicated().sum()", "_____no_output_____" ], [ "# create table DisasterData in database DisasterResponse.db\nengine = create_engine('sqlite:///DisasterResponse.db')\ndf.to_sql('DisasterData', engine, if_exists='replace', index=False)", "_____no_output_____" ] ], [ [ "### 8. Use this notebook to complete `etl_pipeline.py`\nUse the template file attached in the Resources folder to write a script that runs the steps above to create a database based on new datasets specified by the user. Alternatively, you can complete `etl_pipeline.py` in the classroom on the `Project Workspace IDE` coming later. (A hedged sketch of such a script follows this record.)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ] ]
e7265afd53835535aaa11e96f3eae1b58bee42e3
5,847
ipynb
Jupyter Notebook
RobustRegression/OutlierDetectionAndRemovalLocalOutlierFactor.ipynb
vladiant/MachineLearningUtils
c6d1f4e928d4a258b8b6e441c93004e2337ca301
[ "MIT" ]
null
null
null
RobustRegression/OutlierDetectionAndRemovalLocalOutlierFactor.ipynb
vladiant/MachineLearningUtils
c6d1f4e928d4a258b8b6e441c93004e2337ca301
[ "MIT" ]
null
null
null
RobustRegression/OutlierDetectionAndRemovalLocalOutlierFactor.ipynb
vladiant/MachineLearningUtils
c6d1f4e928d4a258b8b6e441c93004e2337ca301
[ "MIT" ]
null
null
null
20.882143
148
0.534975
[ [ [ "# Outlier Detection and Removal\nhttps://machinelearningmastery.com/model-based-outlier-detection-and-removal-in-python/", "_____no_output_____" ], [ "## Dataset\n[House Price Dataset(housing.csv)](https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv)\n\n[House Price Dataset Description (housing.names)](https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.names)", "_____no_output_____" ], [ "### Load and summarize the dataset", "_____no_output_____" ] ], [ [ "from pandas import read_csv\n\nfrom sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "# Load the dataset\nurl = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'\ndf = read_csv(url, header=None)", "_____no_output_____" ], [ "# Retrieve the array\ndata = df.values", "_____no_output_____" ], [ "# Split into input and output elements\nX, y = data[:, :-1], data[:, -1]", "_____no_output_____" ], [ "# Summarize the shape of the dataset\nX.shape, y.shape", "_____no_output_____" ], [ "# Split into train and test sets\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)", "_____no_output_____" ], [ "# Summarize the shape of the train and test sets\nX_train.shape, X_test.shape, y_train.shape, y_test.shape", "_____no_output_____" ] ], [ [ "## Local Outlier Factor Performance\nA simple approach to identifying outliers is to locate those examples that are far from the other examples in the feature space.", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression\nfrom sklearn.neighbors import LocalOutlierFactor\nfrom sklearn.metrics import mean_absolute_error", "_____no_output_____" ] ], [ [ "The model provides the “contamination” argument, that is the expected percentage of outliers in the dataset, be indicated and defaults to 0.1.", "_____no_output_____" ] ], [ [ "# Identify outliers in the training dataset\nlof = LocalOutlierFactor()\nyhat = lof.fit_predict(X_train)", "_____no_output_____" ], [ "# Select all rows that are not outliers\nmask = yhat != -1\nX_train, y_train = X_train[mask, :], y_train[mask]", "_____no_output_____" ], [ "# Summarize the shape of the updated training dataset\nX_train.shape, y_train.shape", "_____no_output_____" ], [ "# Fit the model\nmodel = LinearRegression()\nmodel.fit(X_train, y_train)", "_____no_output_____" ], [ "# Evaluate the model\nyhat = model.predict(X_test)", "_____no_output_____" ], [ "# Evaluate predictions\nmae = mean_absolute_error(y_test, yhat)", "_____no_output_____" ], [ "print(f'MAE {mae}')", "MAE 3.3559923292852263\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7266ab5677b966655a01411e17db50b3e0364b6
2,740
ipynb
Jupyter Notebook
thinkful/data_science/my_progress/unit_2_supervised_learning/Unit_2_-_Lesson_2_-_Drill_-_Regression_or_Classification.ipynb
djrgit/coursework
2a91da9b76cb1acbd12f3d8049f15d2e71f475a1
[ "MIT" ]
null
null
null
thinkful/data_science/my_progress/unit_2_supervised_learning/Unit_2_-_Lesson_2_-_Drill_-_Regression_or_Classification.ipynb
djrgit/coursework
2a91da9b76cb1acbd12f3d8049f15d2e71f475a1
[ "MIT" ]
null
null
null
thinkful/data_science/my_progress/unit_2_supervised_learning/Unit_2_-_Lesson_2_-_Drill_-_Regression_or_Classification.ipynb
djrgit/coursework
2a91da9b76cb1acbd12f3d8049f15d2e71f475a1
[ "MIT" ]
3
2018-08-13T23:14:22.000Z
2019-01-11T22:50:07.000Z
18.896552
177
0.522263
[ [ [ "# Drill - Regression or Classification", "_____no_output_____" ], [ "### For each of the following situations, decide if you would model using a regression or classification model. Discuss your reasoning with your mentor next time you meet.", "_____no_output_____" ], [ "###### 1. The amount a person will spend on a given site in the next 24 months.", "_____no_output_____" ], [ ">Regression", "_____no_output_____" ], [ "###### 2. What color car someone is going to buy.", "_____no_output_____" ], [ ">Classification", "_____no_output_____" ], [ "###### 3. How many children a family will have.", "_____no_output_____" ], [ ">Classification or Regression", "_____no_output_____" ], [ "###### 4. If someone will sign up for a service.", "_____no_output_____" ], [ ">Classification", "_____no_output_____" ], [ "###### 5. The number of times someone will get sick in a year.", "_____no_output_____" ], [ ">Classification or Regression", "_____no_output_____" ], [ "###### 6. The probability someone will get sick in the next month.", "_____no_output_____" ], [ ">Regression (if we treat it as a simple, continuous probability b/w 0 and 1); (Classification if ranges are defined)", "_____no_output_____" ], [ "###### 7. Which medicine will work best for a given patient.", "_____no_output_____" ], [ ">Classification", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7268d40e9e358a446a078c9688fc980f4882b57
61,538
ipynb
Jupyter Notebook
Project/lncRNA_05_Kmer_Cooccur.ipynb
ShepherdCode/ShepherdML
fd8d71c63f7bd788ea0052294d93e43246254a12
[ "MIT" ]
null
null
null
Project/lncRNA_05_Kmer_Cooccur.ipynb
ShepherdCode/ShepherdML
fd8d71c63f7bd788ea0052294d93e43246254a12
[ "MIT" ]
4
2020-03-24T18:05:09.000Z
2020-12-22T17:42:54.000Z
Project/lncRNA_05_Kmer_Cooccur.ipynb
ShepherdCode/ShepherdML
fd8d71c63f7bd788ea0052294d93e43246254a12
[ "MIT" ]
null
null
null
33.980121
7,712
0.388069
[ [ [ "# Slice and Dice the DataFrames from csv\nCombine K-mer counts for multiple K.\nMaybe our K-mer counter program should generate these.\nAnyway, this is now scripted as combine_csv.py", "_____no_output_____" ], [ "## Combine 2-mer counts and 3-mer counts to one feature set", "_____no_output_____" ] ], [ [ "import pandas as pd\ndf2 = pd.read_csv ('ncRNA.2mer.features.csv')\ndf3 = pd.read_csv ('ncRNA.3mer.features.csv')", "_____no_output_____" ], [ "df2", "_____no_output_____" ], [ "df3", "_____no_output_____" ], [ "dfm=df3.drop(columns=['seqnum','seqlen'])", "_____no_output_____" ], [ "dfc=pd.concat([df2,dfm],axis='columns')", "_____no_output_____" ], [ "dfc", "_____no_output_____" ], [ "dfc.to_csv('ncRNA.2-3.features.csv')", "_____no_output_____" ], [ "df2 = pd.read_csv ('pcRNA.2mer.features.csv')\ndf3 = pd.read_csv ('pcRNA.3mer.features.csv')\ndfm=df3.drop(columns=['seqnum','seqlen'])\ndfc=pd.concat([df2,dfm],axis='columns')\ndfc.to_csv('pcRNA.2-3.features.csv')", "_____no_output_____" ] ], [ [ "## Count co-occurrence", "_____no_output_____" ] ], [ [ "df_nc = pd.read_csv ('ncRNA.2mer.features.csv')\ndf_pc = pd.read_csv ('pcRNA.2mer.features.csv')\n\nrow = 0 # first sequence\ncol = 3 # second K-mer because 0=seqnum, 1=seqlen, 2='AA', 3=='AC'\nnc_occ_3=df_nc.iloc[row][col]\nnc_occ_2=df_nc.iloc[row][col-1]\n(nc_occ_2,nc_occ_3)", "_____no_output_____" ], [ "pc_occ_3=df_pc.iloc[row][col]\npc_occ_2=df_pc.iloc[row][col-1]\n(pc_occ_2,pc_occ_3)", "_____no_output_____" ], [ "df_nc.columns[2]", "_____no_output_____" ] ], [ [ "Objective is to find a subset of features that seem informative.\nEvery pair of K-mers has a co-occurrence distribution for nc and pc instances.\nChoose K-mer pairs for which the two distributions pass the t-test.\n\nQuick for now, compute the nc-mean and the pc-mean per K-mer pair.\nSort K-mer pairs by R = nc-mean/pc-mean.\nFor now, avoid sequence overlap without knowing position.\nChoose top L pairs such that #pairs = 4^K i.e. same #features as before.\nRestrict to pairs AB CD (16 possible) where B!=C and D!= (8 possible).\nOr just let the ratio wipe this out.\n\nStatistics could get complicated based on chi squared.\nhttps://www.sciencedirect.com/science/article/pii/S1532046406001328\n\nIt is more complicated if they are allowed to overap e.e. 
'AA' and 'AC' are more likely.\n2-mers that share a letter a much more likely to co-occur.\nThey will form a chain in 2 directions: AA begets CA and AC, which beget GC and CG, etc.\nThis is a case where the suffix array could help!\n\nprob of any one 2-mer P2 = 1/(4*4)\nprob of any one 2-mer in N+1 bases P2N = N/(4*4)\nprob of one 2-mer T times = binomial P2B = (NtakeT)*[(P2N)^T]*[(1-P2N)^(N-T)]\nCDF of one 2-mer T times or more = CDF = sum ( P2B ) over i=0toT \n\nprob of any two 2-mer Q2 = 1/[(4*4)^2] if they are independent\nprob of any one 2-mer in N+1 bases Q2N = [N/(4*4)]*[(N-2)/(4*4)] if nonoverlap\nprob of one 2-mer T times = binomial Q2B = (NtakeT)*[(Q2N)^T]*[(1-Q2N)^(N-T)]\nCDF of one 2-mer T times or more = CDF = sum ( Q2B ) over i=0toT \n\nlog odds = log (p/(1-p)) ??\n\nFor now, use a simple formula.\nCounting per sequence, no need to normalize by seq len.\nGiven counts n1,n2 and p1,p2 for seq1: R[1] = min(1,n1,n2)/min(1,p1,p2)", "_____no_output_____" ] ], [ [ "num_kmers=16\nnum_seqs=10000 # reasonable size sample: first 10K of nc and first 10K of pc\nfirst_kmer_column=2\nco_occur=[]\nfor i in range(1,num_kmers):\n I = first_kmer_column+i\n kmer_i = df_nc.columns[I]\n for j in range(0,i):\n J=first_kmer_column+j\n kmer_j = df_pc.columns[J]\n NC_sum = 0\n PC_sum = 0\n for s in range(num_seqs):\n nc_ci = df_nc.iloc[s][I]\n nc_cj = df_nc.iloc[s][J]\n nc_min = min(nc_ci,nc_cj)\n nc_len = df_nc['seqlen'][s]\n NC_sum += nc_min/nc_len\n pc_ci = df_pc.iloc[s][I]\n pc_cj = df_pc.iloc[s][J]\n pc_min = min(pc_ci,pc_cj)\n pc_len = df_pc['seqlen'][s]\n PC_sum += pc_min/pc_len\n R = NC_sum/PC_sum\n print(\"R %s / %s = %10f\"%(kmer_i,kmer_j,R))\n triple=(kmer_i,kmer_j,R)\n co_occur.append(triple)", "R AC / AA = 1.074790\nR AG / AA = 1.093077\nR AG / AC = 1.013515\nR AT / AA = 1.118839\nR AT / AC = 1.068708\nR AT / AG = 1.073068\nR CA / AA = 1.125849\nR CA / AC = 1.018915\nR CA / AG = 1.007975\nR CA / AT = 1.091846\nR CC / AA = 1.060855\nR CC / AC = 0.976139\nR CC / AG = 0.944916\nR CC / AT = 1.045810\nR CC / CA = 0.940748\nR CG / AA = 0.665406\nR CG / AC = 0.626674\nR CG / AG = 0.626202\nR CG / AT = 0.666122\nR CG / CA = 0.618859\nR CG / CC = 0.620823\nR CT / AA = 1.128863\nR CT / AC = 1.009709\nR CT / AG = 1.007551\nR CT / AT = 1.089098\nR CT / CA = 1.007934\nR CT / CC = 0.940427\nR CT / CG = 0.620940\nR GA / AA = 1.044521\nR GA / AC = 1.005922\nR GA / AG = 0.971644\nR GA / AT = 1.047372\nR GA / CA = 0.982358\nR GA / CC = 0.946071\nR GA / CG = 0.628129\nR GA / CT = 0.990019\nR GC / AA = 0.982455\nR GC / AC = 0.933377\nR GC / AG = 0.883732\nR GC / AT = 0.986588\nR GC / CA = 0.877400\nR GC / CC = 0.839302\nR GC / CG = 0.620826\nR GC / CT = 0.874015\nR GC / GA = 0.895655\nR GG / AA = 1.016677\nR GG / AC = 0.951463\nR GG / AG = 0.919562\nR GG / AT = 1.007087\nR GG / CA = 0.911491\nR GG / CC = 0.860518\nR GG / CG = 0.622037\nR GG / CT = 0.902144\nR GG / GA = 0.922250\nR GG / GC = 0.831139\nR GT / AA = 1.017274\nR GT / AC = 0.970613\nR GT / AG = 0.981352\nR GT / AT = 1.016337\nR GT / CA = 0.978204\nR GT / CC = 0.963510\nR GT / CG = 0.643573\nR GT / CT = 0.979185\nR GT / GA = 0.973436\nR GT / GC = 0.934548\nR GT / GG = 0.949429\nR TA / AA = 1.207495\nR TA / AC = 1.150081\nR TA / AG = 1.168585\nR TA / AT = 1.205224\nR TA / CA = 1.176962\nR TA / CC = 1.130406\nR TA / CG = 0.728350\nR TA / CT = 1.173436\nR TA / GA = 1.145844\nR TA / GC = 1.077033\nR TA / GG = 1.095653\nR TA / GT = 1.103391\nR TC / AA = 1.100634\nR TC / AC = 0.998258\nR TC / AG = 1.022176\nR TC / AT = 1.082141\nR TC / CA = 1.023443\nR 
TC / CC = 0.988379\nR TC / CG = 0.634623\nR TC / CT = 1.033455\nR TC / GA = 1.003931\nR TC / GC = 0.928942\nR TC / GG = 0.946992\nR TC / GT = 0.978562\nR TC / TA = 1.158520\nR TG / AA = 1.068363\nR TG / AC = 1.006532\nR TG / AG = 0.977107\nR TG / AT = 1.063553\nR TG / CA = 0.979206\nR TG / CC = 0.918948\nR TG / CG = 0.617899\nR TG / CT = 0.980997\nR TG / GA = 0.961732\nR TG / GC = 0.862850\nR TG / GG = 0.889905\nR TG / GT = 0.985301\nR TG / TA = 1.168697\nR TG / TC = 1.012941\nR TT / AA = 1.126230\nR TT / AC = 1.072714\nR TT / AG = 1.087477\nR TT / AT = 1.102247\nR TT / CA = 1.105553\nR TT / CC = 1.068070\nR TT / CG = 0.675787\nR TT / CT = 1.122865\nR TT / GA = 1.049632\nR TT / GC = 0.995516\nR TT / GG = 1.017752\nR TT / GT = 1.022198\nR TT / TA = 1.197603\nR TT / TC = 1.106431\nR TT / TG = 1.068894\n" ], [ "f=open('triples.txt','w')\nfor ele in co_occur:\n f.write(str(ele))\n f.write('\\n')\nf.close()", "_____no_output_____" ], [ "df=pd.DataFrame(co_occur)\ndf.columns=['mer1','mer2','freq']\ndf['freq'].min(),df['freq'].max()", "_____no_output_____" ], [ "# Is it normally distributed?\ndf.hist(column='freq',bins=20)", "_____no_output_____" ], [ "def dist(x,y,z):\n return abs(1-z)\ndf.sort_values(by=['freq'],inplace=True)\ndf[:16]", "_____no_output_____" ], [ "df[-16:]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "raw", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "raw" ], [ "code", "code", "code", "code", "code", "code" ] ]
e726a693e7ef90562521764ea9e3379fdafc6103
3,129
ipynb
Jupyter Notebook
examples/notebooks/Non-Convex Extensions.ipynb
ashamraeva/cvxpy
9e320f9c96c9232860d83a5756567ac6401dd51e
[ "ECL-2.0", "Apache-2.0" ]
3,285
2015-01-03T04:02:29.000Z
2021-04-19T14:51:29.000Z
examples/notebooks/Non-Convex Extensions.ipynb
ashamraeva/cvxpy
9e320f9c96c9232860d83a5756567ac6401dd51e
[ "ECL-2.0", "Apache-2.0" ]
1,138
2015-01-01T19:40:14.000Z
2021-04-18T23:37:31.000Z
examples/notebooks/Non-Convex Extensions.ipynb
ashamraeva/cvxpy
9e320f9c96c9232860d83a5756567ac6401dd51e
[ "ECL-2.0", "Apache-2.0" ]
765
2015-01-02T19:29:39.000Z
2021-04-20T00:50:43.000Z
31.928571
348
0.528603
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e726b2fc649e98f8597b7b0c01f9ef045f00c5ee
66,989
ipynb
Jupyter Notebook
Latent_factor/PCA.ipynb
artzers/MachineLearning
6c4b08439a9f803609520c71a3450dc53232c9d6
[ "MIT" ]
66
2017-01-10T14:00:10.000Z
2021-12-01T01:24:18.000Z
Latent_factor/PCA.ipynb
ZhidongYang/MachineLearning
6c4b08439a9f803609520c71a3450dc53232c9d6
[ "MIT" ]
null
null
null
Latent_factor/PCA.ipynb
ZhidongYang/MachineLearning
6c4b08439a9f803609520c71a3450dc53232c9d6
[ "MIT" ]
63
2017-02-24T14:16:01.000Z
2020-03-27T13:57:32.000Z
608.990909
41,642
0.938393
[ [ [ "%matplotlib inline\nimport cv2,numpy as np\nfrom matplotlib import pyplot as plt \n\nimg=cv2.imread('test.jpg')\nimg=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\nimg = img.astype(np.float)\ncolmean = img.mean(axis=0)\ncolmax = img.max(axis=0)\ncolmin = img.min(axis=0)\nfor i in range(0,img.shape[1]):\n img[:,i]-=colmean[i]\n maxV = img[:,i].max()\n minV = img[:,i].min()\n leng = maxV-minV\n img[:,i]=(img[:,i]-minV)/leng\n \nU,s,V=np.linalg.svd(img)\nthrev = 50\nindex = s<threv\n#print index\nindex = np.where(index!=False)\nindex = index[0][0]\nprint(index)\nUm = U\nUm[:,index:]=0\nVm = V\nVm[index:,:]=0\nS=s[s>threv]\nSm = np.zeros(img.shape)\nSm[:len(S),:len(S)]= np.diag(S)\nres = np.dot(Um,np.dot(Sm,Vm))\nfig = plt.figure(1)\nplt.imshow(img,cmap='gray')\nfig = plt.figure(2)\nplt.imshow(res,cmap='gray')\nplt.show()", "24\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
e726b5b38ea47d94c0eaeebb779f284661a1b1d6
6,926
ipynb
Jupyter Notebook
00_core.ipynb
korakot/colabdev
5aac549248b9505e073f0ed76e65b2b328806172
[ "Apache-2.0" ]
1
2022-03-15T13:12:48.000Z
2022-03-15T13:12:48.000Z
00_core.ipynb
korakot/colabdev
5aac549248b9505e073f0ed76e65b2b328806172
[ "Apache-2.0" ]
null
null
null
00_core.ipynb
korakot/colabdev
5aac549248b9505e073f0ed76e65b2b328806172
[ "Apache-2.0" ]
null
null
null
37.236559
477
0.59717
[ [ [ "# default_exp core", "_____no_output_____" ], [ "#hide\n!pip install nbdev", "Collecting nbdev\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d5/3c/e7b430d1f698709a79ac548baf5ad1d91e393c04ccaf42e763793eabde07/nbdev-0.2.16-py3-none-any.whl (44kB)\n\r\u001b[K |███████▍ | 10kB 15.7MB/s eta 0:00:01\r\u001b[K |██████████████▊ | 20kB 1.7MB/s eta 0:00:01\r\u001b[K |██████████████████████▏ | 30kB 2.5MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▌ | 40kB 1.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 51kB 1.7MB/s \n\u001b[?25hRequirement already satisfied: nbconvert>=5.6.1 in /usr/local/lib/python3.6/dist-packages (from nbdev) (5.6.1)\nRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from nbdev) (3.13)\nCollecting fastscript\n Downloading https://files.pythonhosted.org/packages/55/0e/ecdc0213646bc82986884121109a38b50bbc2cd2c491bbbfdc7ae39228e3/fastscript-0.1.4-py3-none-any.whl\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from nbdev) (20.3)\nRequirement already satisfied: nbformat>=4.4.0 in /usr/local/lib/python3.6/dist-packages (from nbdev) (5.0.5)\nRequirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (0.6.0)\nRequirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (0.4.4)\nRequirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (4.3.3)\nRequirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (2.1.3)\nRequirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (0.8.4)\nRequirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (3.1.4)\nRequirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (0.3)\nRequirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (1.4.2)\nRequirement already satisfied: jinja2>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (2.11.1)\nRequirement already satisfied: jupyter-core in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev) (4.6.3)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->nbdev) (2.4.6)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from packaging->nbdev) (1.12.0)\nRequirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.4.0->nbdev) (2.6.0)\nRequirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.4.0->nbdev) (0.2.0)\nRequirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->nbconvert>=5.6.1->nbdev) (4.4.2)\nRequirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert>=5.6.1->nbdev) (0.5.1)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2>=2.4->nbconvert>=5.6.1->nbdev) (1.1.1)\nInstalling collected packages: fastscript, nbdev\nSuccessfully installed fastscript-0.1.4 nbdev-0.2.16\n" ], [ "#hide\n# from google.colab import drive\n# drive.mount('/content/drive')", "Go to 
this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n" ] ], [ [ "# module name here\n\n> API details.", "_____no_output_____" ] ], [ [ "#hide\nfrom nbdev.showdoc import *", "_____no_output_____" ], [ "#export\ndef say_hello(to):\n \"Say hello to somebody\"\n return f'Hello {to}!'", "_____no_output_____" ], [ "say_hello(\"Sylvain\")", "_____no_output_____" ], [ "from IPython.display import SVG\nSVG('<svg height=\"100\"><circle cx=\"50\" cy=\"50\" r=\"40\"/></svg>')", "_____no_output_____" ], [ "assert say_hello(\"Jeremy\")==\"Hello Jeremy!\"", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e726c86a76c6fa244c59ce8a69637b83973192bd
68,264
ipynb
Jupyter Notebook
Notebooks/EDA/eda_Host.ipynb
PedroCotovio/TPD-Airbnb-Data
cbd4d58ba838cbbfb36acf8b49dadf9758f6aac0
[ "MIT" ]
null
null
null
Notebooks/EDA/eda_Host.ipynb
PedroCotovio/TPD-Airbnb-Data
cbd4d58ba838cbbfb36acf8b49dadf9758f6aac0
[ "MIT" ]
null
null
null
Notebooks/EDA/eda_Host.ipynb
PedroCotovio/TPD-Airbnb-Data
cbd4d58ba838cbbfb36acf8b49dadf9758f6aac0
[ "MIT" ]
null
null
null
62.800368
23,268
0.698582
[ [ [ "import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nfrom scipy import stats\nfrom scipy.stats import norm", "_____no_output_____" ] ], [ [ "# Read data", "_____no_output_____" ] ], [ [ "listings_file_path = '../data/airbnb/listings.csv'\nal_file_path = '../data/Alojamento_Local.csv'\ndf_al = pd.read_csv(al_file_path)\ndf_listings = pd.read_csv(listings_file_path)", "C:\\Users\\joao_\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3063: DtypeWarning: Columns (61,62) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n" ] ], [ [ "# General schema\n\n![Star schema](../schema.png)", "_____no_output_____" ], [ "# Exploratory data analysis", "_____no_output_____" ] ], [ [ "def missing_data(df, columns):\n \"\"\"Calculates the amount of missing data in given DataFrame's selected columns\"\"\"\n df_graph = df[columns]\n # missing data\n total = df_graph.isnull().sum().sort_values(ascending=False)\n percent = (df_graph.isnull().sum()/df_graph.isnull().count()).sort_values(ascending=False)\n missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])\n print(missing_data)\n plt.figure(figsize=(10,5))\n ax = sns.barplot(x='index',y = 'Percent', data= missing_data.reset_index())\n ax.set_xticklabels(ax.get_xticklabels(), rotation=45)\n return ax\n\ndef feature_dist(df, column_name):\n plt.figure(figsize=(15,4))\n sns.distplot(df[column_name] , fit=norm)\n\n # Get the fitted parameters used by the function\n (mu, sigma) = norm.fit(df[column_name])\n print( '\\n mu = {:.2f} and sigma = {:.2f}\\n'.format(mu, sigma))\n plt.legend(['Normal dist. ($\\mu=$ {:.2f} and $\\sigma=$ {:.2f} )'.format(mu, sigma)],\n loc='best')\n plt.ylabel('Frequency')\n plt.title('{} distribution'.format(column_name))\n plt.show()\n\n#IN BAR CHARTS, SET PERCENTAGES ABOVE EACH BAR\ndef set_bar_percentage(ax, df):\n for p in ax.patches:\n ax.annotate(f\"{p.get_height() * 100 / df.shape[0]:.2f}%\", (p.get_x() + p.get_width() / 2., p.get_height()),\n ha='center', va='center', rotation=0, xytext=(0, 10),\n textcoords='offset points')\n \n#IN BAR CHARTS, SET THE VALUE ABOVE EACH BAR\ndef set_bar_amount(ax):\n for p in ax.patches:\n height = p.get_height()\n ax.annotate(\"{0:.2f}%\".format(p.get_height()*100), (p.get_x() + p.get_width() / 2., p.get_height()),\n ha='center', va='center', rotation=0, xytext=(0, 10),\n textcoords='offset points')\n \n#Simple plot\ndef simple_plot(df,column):\n bx = sns.catplot(x=column,data=df, kind='count')\n (bx.set_axis_labels(column, \"Count\")\n .set_titles(\"{col_name} {col_var}\")\n .despine(left=True))", "_____no_output_____" ] ], [ [ "## Merge _df_listings_ with _alojamento_local.csv_", "_____no_output_____" ] ], [ [ "def intTryParse(value):\n \"\"\"Tries to parse string to an integer\"\"\"\n try:\n a = int(value)\n return True\n except ValueError:\n return False", "_____no_output_____" ], [ "# get only listings where \ndf_listings_with_license = df_listings[(~df_listings['license'].isnull()) #'license' is not null\n & (df_listings['license'] != 'Exempt')] # && != 'Exempt'\n\n# string replace\ndf_listings_with_license['NrRNAL'] = [s.replace('/AL','').replace('.','') # remove '/AL' and '.' 
from code\n for s in df_listings_with_license['license']]\n\n# get only records where license nr can be converted to int \ndf_listings_with_license = df_listings_with_license[[intTryParse(s) # if code can be converted to int\n for s in df_listings_with_license['NrRNAL']]] # keep it\n\n# convert NrRNAL to int before merging the two dataframes\ndf_listings_with_license['NrRNAL'] = df_listings_with_license['NrRNAL'].astype(np.int64) # convert code to int\n\n# inner join two dataframes\ndf_result = pd.merge(df_listings_with_license, df_al, how='inner', on='NrRNAL')", "C:\\Users\\joao_\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n  import sys\n" ] ], [ [ "Save the intersection of the two files to disk", "_____no_output_____" ] ], [ [ "listings__al_file_path = '../data/listings_al.csv'\ndf_result.to_csv(listings__al_file_path,index=False)", "_____no_output_____" ], [ "print('Dataset size: {}'.format(len(df_result)))", "Dataset size: 17168\n" ] ], [ [ "## Missing data", "_____no_output_____" ], [ "Singling out the columns of interest for this dimension, we analyse the distribution of missing values.", "_____no_output_____" ] ], [ [ "columns = [\n'host_name',\n'host_since',\n'host_location',\n'host_response_time',\n'host_is_superhost',\n'host_verifications'\n]\nax = missing_data(df_result,columns)\nset_bar_amount(ax)\nax.set_xlabel('Columns')\nax.set_ylabel('Missing data (%)')\nplt.ylim(0,1)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.set_title('Host dimension missing data')", "                    Total   Percent\nhost_response_time   1691  0.098497\nhost_location          21  0.001223\nhost_verifications      0  0.000000\nhost_is_superhost       0  0.000000\nhost_since              0  0.000000\nhost_name               0  0.000000\n" ] ], [ [ "## Host dimension", "_____no_output_____" ], [ "**host_name**\n\nvarchar(40), no missing data", "_____no_output_____" ] ], [ [ "print(df_result['host_name'])", "0                  Ellie\n1                Bárbara\n2                 Mónica\n3              Francisco\n4              Francisco\n              ...       \n17163    Feels Like Home\n17164             Raquel\n17165             Raquel\n17166             Simone\n17167              Fábio\nName: host_name, Length: 17168, dtype: object\n" ] ], [ [ "**host_since**\n\ndate YYYY-MM-DD", "_____no_output_____" ] ], [ [ "df_result['host_since']", "_____no_output_____" ] ], [ [ "**host_country**\n\n-- Need to extract countries (a hedged extraction sketch follows this record)\n\nAttention -- 21 missing values - probably delete rows", "_____no_output_____" ] ], [ [ "print(df_result['host_location'].value_counts())", "Lisbon, Lisbon, Portugal                                        9020\nPT                                                              2845\nPortugal                                                         744\nCascais, Lisbon, Portugal                                        584\nSintra, Lisbon, Portugal                                         311\n                                                                ... 
\nBudapest, Hungary                                                  1\nVieira de Leiria, Leiria District, Portugal                        1\nAlgés                                                              1\nTravessa do sequeiro 34, 3andar, Esq, 1200-441 Lisboa,portugal     1\nSerris, Île-de-France, France                                      1\nName: host_location, Length: 618, dtype: int64\n" ] ], [ [ "**host_country_gdp**\n\nSource: https://data.worldbank.org/indicator/ny.gdp.pcap.cd", "_____no_output_____" ] ], [ [ "gdp_file_path = '../data/GDP per capita (worldbank).csv'\ndf_gdp = pd.read_csv(gdp_file_path)\n\ndf_gdp", "_____no_output_____" ] ], [ [ "**host_continent**", "_____no_output_____" ] ], [ [ "continents = df_gdp['Continent Name']", "_____no_output_____" ] ], [ [ "**host_response_time**\n\n{'within an hour','within a few hours','within a day','a few days or more'}\n\nAttention -- 1691 missing values - delete rows or replace with 'unknown'", "_____no_output_____" ] ], [ [ "print(df_result['host_response_time'].value_counts())", "within an hour        12943\nwithin a few hours     1506\nwithin a day            884\na few days or more      144\nName: host_response_time, dtype: int64\n" ], [ "simple_plot(df_result, 'host_response_time')", "_____no_output_____" ] ], [ [ "**host_is_superhost**\n\nChange to \"Superhost\" and \"Not Superhost\"", "_____no_output_____" ] ], [ [ "print(df_result['host_is_superhost'].value_counts())\n\nfor index, row in enumerate(df_result['host_is_superhost'], start = 0):\n    if row == 't':\n        df_result.loc[index, 'host_is_superhost'] = \"Superhost\"\n    elif row == 'f':\n        df_result.loc[index, 'host_is_superhost'] = \"Not Superhost\"\n\nprint(df_result['host_is_superhost'].value_counts())", "Not Superhost    11879\nSuperhost         5289\nName: host_is_superhost, dtype: int64\nNot Superhost    11879\nSuperhost         5289\nName: host_is_superhost, dtype: int64\n" ] ], [ [ "**host_identity_verified**\n\n-- Change to \"Verified\" and \"Unverified\"", "_____no_output_____" ] ], [ [ "print(df_result['host_identity_verified'].value_counts())\n\nfor index, row in enumerate(df_result['host_identity_verified'], start = 0):\n    if row == 't':\n        df_result.loc[index, 'host_identity_verified'] = \"Verified\"\n    elif row == 'f':\n        df_result.loc[index, 'host_identity_verified'] = \"Unverified\"\n\nprint(df_result['host_identity_verified'].value_counts())", "f    12200\nt     4968\nName: host_identity_verified, dtype: int64\nUnverified    12200\nVerified       4968\nName: host_identity_verified, dtype: int64\n" ] ], [ [ "## 1. Dimension and fact tables of the data warehouse", "_____no_output_____" ] ], [ [ "class Host:\n    def __init__(self, data):\n        self.name = data[0]\n        self.since = data[1]\n        self.country = data[2]\n        self.continent = data[3]\n        self.country_gdp = data[4]\n        self.response_time = data[5]\n        self.is_superhost = data[6]\n        self.identity_verified = data[7]", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e726ca3e1286c6ca1d25c37bb9ab839abbda7996
166,597
ipynb
Jupyter Notebook
knn-iris.ipynb
mikequaid/GMIT-course-files
350e0dc3d2bc482c69be60b41619fc710a778eb3
[ "Apache-2.0" ]
null
null
null
knn-iris.ipynb
mikequaid/GMIT-course-files
350e0dc3d2bc482c69be60b41619fc710a778eb3
[ "Apache-2.0" ]
null
null
null
knn-iris.ipynb
mikequaid/GMIT-course-files
350e0dc3d2bc482c69be60b41619fc710a778eb3
[ "Apache-2.0" ]
null
null
null
180.299784
139,764
0.857879
[ [ [ "# KNN with Iris", "_____no_output_____" ], [ "## Libraries", "_____no_output_____" ] ], [ [ "import sklearn.neighbors as nei\nimport pandas as pd\nimport sklearn.model_selection as mod", "_____no_output_____" ] ], [ [ "## Data", "_____no_output_____" ] ], [ [ "# Load the iris data set from a URL.\ndf = pd.read_csv(\"https://github.com/ianmcloughlin/datasets/raw/master/iris.csv\")", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "## Visualise", "_____no_output_____" ] ], [ [ "# Load the seaborn package.\nimport seaborn as sns\n\n# Plot the Iris data set with a pair plot.\nsns.pairplot(df, hue=\"class\")", "C:\\Users\\mclou\\Anaconda3\\lib\\site-packages\\scipy\\stats\\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n" ] ], [ [ "## Inputs and outputs", "_____no_output_____" ] ], [ [ "inputs = df[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']]\noutputs = df['class']", "_____no_output_____" ] ], [ [ "## Classifier", "_____no_output_____" ] ], [ [ "knn = nei.KNeighborsClassifier(n_neighbors=5)", "_____no_output_____" ] ], [ [ "## Fit", "_____no_output_____" ] ], [ [ "knn.fit(inputs, outputs)", "_____no_output_____" ] ], [ [ "## Predict", "_____no_output_____" ] ], [ [ "df.loc[121]", "_____no_output_____" ], [ "knn.predict([[5.6, 2.8, 4.9, 2.0]])", "_____no_output_____" ] ], [ [ "## Evaluate", "_____no_output_____" ] ], [ [ "(knn.predict(inputs) == outputs).sum()", "_____no_output_____" ], [ "inputs_train, inputs_test, outputs_train, outputs_test = mod.train_test_split(inputs, outputs, test_size=0.33)", "_____no_output_____" ], [ "knn = nei.KNeighborsClassifier(n_neighbors=5)\nknn.fit(inputs_train, outputs_train)", "_____no_output_____" ], [ "(knn.predict(inputs_test) == outputs_test).sum()", "_____no_output_____" ] ], [ [ "## End", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ] ]
e726cbc505fba34a10406445233545450901912d
84,873
ipynb
Jupyter Notebook
House_price_predict.ipynb
eesilas/Adafruit_CircuitPython_ESP32SPI
04be5f54dbeaaadfb2ac71ef03f8921fd01c7100
[ "MIT" ]
null
null
null
House_price_predict.ipynb
eesilas/Adafruit_CircuitPython_ESP32SPI
04be5f54dbeaaadfb2ac71ef03f8921fd01c7100
[ "MIT" ]
null
null
null
House_price_predict.ipynb
eesilas/Adafruit_CircuitPython_ESP32SPI
04be5f54dbeaaadfb2ac71ef03f8921fd01c7100
[ "MIT" ]
null
null
null
113.164
19,678
0.679721
[ [ [ "<a href=\"https://colab.research.google.com/github/eesilas/Adafruit_CircuitPython_ESP32SPI/blob/master/House_price_predict.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "# 匯入「房屋txt檔」和 『第三方函式庫』到 Colab\n\nfrom google.colab import files \nuploaded = files.upload() # 匯入房屋txt檔\nuploaded = files.upload() # 匯入第三方函式庫 keras_lite_convertor", "_____no_output_____" ], [ "# 讀取 house.txt 檔案, 並得出特徵和標籤\n\nimport keras_lite_convertor as kc\n\npath_name = 'house.txt' # 檔案路徑\nData_reader = kc.Data_reader(path_name, mode='regression') # 指定讀檔模式(regression適用於迴歸預測)\ndata, label = Data_reader.read(random_seed=12) # 將檔案讀到的 5 種資料分為『特徵』和『標籤』, 並設定亂數種子為 12", "_____no_output_____" ], [ "# 資料預處理\n\n# 取資料中的 90% 當作訓練集\nsplit_num = int(len(data)*0.9) \ntrain_data = data[:split_num] # 訓練用資料\ntrain_label = label[:split_num] # 訓練用標籤 \n\n# 正規化\nmean = train_data.mean() # 訓練資料的平均數\ndata -= mean # data 減掉平均值\nstd = train_data.std() # 訓練資料的標準差\ndata /= std # data 除以平均值\n\nnew_label = label/max(label) # 將 label範圍落在 0~1 (label正規化)", "_____no_output_____" ], [ "# 訓練集、驗證集、測試集的資料形狀\n\n# 訓練集\ntrain_data = data[:split_num] # 訓練用資料\nprint(train_data.shape) \ntrain_label = new_label[:split_num] # 訓練用標籤\n\n# 驗證集\nvalidation_data=data[split_num:-30] # 驗證用資料\nprint(validation_data.shape)\nvalidation_label=new_label[split_num:-30] # 驗證用標籤\n\n# 測試集\ntest_data=data[-30:] # 測試用資料, 30筆\nprint(test_data.shape)\ntest_label=new_label[-30:] # 測試用標籤", "(620, 4)\n(39, 4)\n(30, 4)\n" ], [ "# 建立神經網路架構\n\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\n\nmodel = Sequential() # 建構網路模型 \nmodel.add(Dense(20,activation='relu', # 增加一層神經層, 使用 ReLU 激活函數, 輸入層有4個輸入特徵\n input_shape=(4,))) \nmodel.add(Dense(20,activation='relu')) # 增加一層神經層, 使用 ReLU 激活函數\nmodel.add(Dense(1)) # 增加輸出為 1 的輸出層", "_____no_output_____" ], [ "# 編譯及訓練模型\n\n# 編譯模型\nmodel.compile(optimizer='adam',loss='mse',metrics=['mae']) \nhistory=model.fit(train_data,train_label, # 訓練集\n validation_data=(validation_data,validation_label), # 驗證集\n epochs=200) # 訓練週期", "Epoch 1/200\n20/20 [==============================] - 0s 6ms/step - loss: 0.0589 - mae: 0.1825 - val_loss: 0.0150 - val_mae: 0.0977\nEpoch 2/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0125 - mae: 0.0824 - val_loss: 0.0052 - val_mae: 0.0505\nEpoch 3/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0067 - mae: 0.0617 - val_loss: 0.0035 - val_mae: 0.0465\nEpoch 4/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0052 - mae: 0.0522 - val_loss: 0.0030 - val_mae: 0.0438\nEpoch 5/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0047 - mae: 0.0495 - val_loss: 0.0029 - val_mae: 0.0424\nEpoch 6/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0045 - mae: 0.0485 - val_loss: 0.0032 - val_mae: 0.0457\nEpoch 7/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0046 - mae: 0.0488 - val_loss: 0.0029 - val_mae: 0.0412\nEpoch 8/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0043 - mae: 0.0471 - val_loss: 0.0029 - val_mae: 0.0409\nEpoch 9/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0042 - mae: 0.0464 - val_loss: 0.0028 - val_mae: 0.0409\nEpoch 10/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0042 - mae: 0.0462 - val_loss: 0.0029 - val_mae: 0.0413\nEpoch 11/200\n20/20 [==============================] - 0s 3ms/step - loss: 
0.0042 - mae: 0.0458 - val_loss: 0.0029 - val_mae: 0.0409\nEpoch 12/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0041 - mae: 0.0454 - val_loss: 0.0029 - val_mae: 0.0407\nEpoch 13/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0041 - mae: 0.0456 - val_loss: 0.0029 - val_mae: 0.0423\nEpoch 14/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0041 - mae: 0.0451 - val_loss: 0.0029 - val_mae: 0.0399\nEpoch 15/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0452 - val_loss: 0.0029 - val_mae: 0.0413\nEpoch 16/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0447 - val_loss: 0.0029 - val_mae: 0.0412\nEpoch 17/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0451 - val_loss: 0.0029 - val_mae: 0.0398\nEpoch 18/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0445 - val_loss: 0.0029 - val_mae: 0.0394\nEpoch 19/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0450 - val_loss: 0.0030 - val_mae: 0.0430\nEpoch 20/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0445 - val_loss: 0.0028 - val_mae: 0.0397\nEpoch 21/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0451 - val_loss: 0.0028 - val_mae: 0.0391\nEpoch 22/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0452 - val_loss: 0.0028 - val_mae: 0.0398\nEpoch 23/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0041 - mae: 0.0458 - val_loss: 0.0033 - val_mae: 0.0475\nEpoch 24/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0041 - mae: 0.0453 - val_loss: 0.0030 - val_mae: 0.0439\nEpoch 25/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0446 - val_loss: 0.0028 - val_mae: 0.0405\nEpoch 26/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0039 - mae: 0.0444 - val_loss: 0.0028 - val_mae: 0.0395\nEpoch 27/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0039 - mae: 0.0442 - val_loss: 0.0029 - val_mae: 0.0431\nEpoch 28/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0041 - mae: 0.0463 - val_loss: 0.0028 - val_mae: 0.0415\nEpoch 29/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0039 - mae: 0.0447 - val_loss: 0.0028 - val_mae: 0.0395\nEpoch 30/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0039 - mae: 0.0444 - val_loss: 0.0028 - val_mae: 0.0391\nEpoch 31/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0445 - val_loss: 0.0030 - val_mae: 0.0401\nEpoch 32/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0454 - val_loss: 0.0029 - val_mae: 0.0393\nEpoch 33/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0441 - val_loss: 0.0028 - val_mae: 0.0397\nEpoch 34/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0040 - mae: 0.0452 - val_loss: 0.0029 - val_mae: 0.0391\nEpoch 35/200\n20/20 [==============================] - 0s 12ms/step - loss: 0.0040 - mae: 0.0449 - val_loss: 0.0028 - val_mae: 0.0395\nEpoch 36/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0041 - mae: 0.0458 - val_loss: 0.0034 - val_mae: 0.0430\nEpoch 37/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0039 - mae: 0.0455 - val_loss: 0.0028 - val_mae: 0.0389\nEpoch 
38/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0039 - mae: 0.0448 - val_loss: 0.0029 - val_mae: 0.0391\n[... epochs 39-195 condensed: training loss edges down from 0.0039 to 0.0033 (mae 0.0448 -> 0.0422) while val_loss plateaus around 0.0028-0.0032 (val_mae mostly 0.038-0.050) ...]\nEpoch 196/200\n20/20 
[==============================] - 0s 3ms/step - loss: 0.0033 - mae: 0.0421 - val_loss: 0.0027 - val_mae: 0.0400\nEpoch 197/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0034 - mae: 0.0421 - val_loss: 0.0027 - val_mae: 0.0387\nEpoch 198/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0034 - mae: 0.0428 - val_loss: 0.0030 - val_mae: 0.0461\nEpoch 199/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0033 - mae: 0.0426 - val_loss: 0.0029 - val_mae: 0.0443\nEpoch 200/200\n20/20 [==============================] - 0s 3ms/step - loss: 0.0033 - mae: 0.0418 - val_loss: 0.0028 - val_mae: 0.0393\n" ], [ "# Plot the loss values\n\nimport matplotlib.pyplot as plt\n\nplt.plot(history.history['loss'],\"r\", # training-data loss\n label='loss') \nplt.plot(history.history['val_loss'],\"b\", # validation-data loss \n label='val loss') \nplt.legend() # show the legend\nplt.show() # display the figure", "_____no_output_____" ], [ "# Actual-vs-predicted comparison chart\n\nimport numpy as np\n\nplt.figure(figsize=(10,8)) # create a figure (10, 8 is the figure size)\nplt.subplots_adjust(hspace = 0.3) # adjust the spacing between the two subplots\n\n# actual - predicted (multiplying by max(label) restores the original scale)\nerror=test_label.reshape(30,1)*max(label)-model.predict(test_data)*max(label)\n# split the error range into 15 bins and compute the bin width\nstep=(max(error)-min(error))/15\n# list the boundary value of each bin \ninterval=[i for i in range(int(min(error)),\n int(max(error))+int(step),\n int(step))]\n\n# actual-vs-predicted bar chart\nwidth = 0.3\nplt.subplot(2,1,1) # first subplot in the figure (2 rows x 1 column, first position - top)\nplt.xlabel(\"test data\") # x-axis label\nplt.ylabel(\"money\") # y-axis label\nplt.bar(np.linspace(1,30,30)-width/2, # actual values\n (test_label*max(label)).reshape(30),\n width=width, label='actual') \nplt.bar(np.linspace(1,30,30)+width/2, # predicted values\n (model.predict(test_data)*max(label)).reshape(30),\n width=width, label='predict') \n\nplt.legend()\n\n# error distribution histogram\nplt.subplot(2,1,2) # second subplot in the figure (2 rows x 1 column, second position - bottom)\nplt.xlabel(\"error\") # x-axis label\nplt.ylabel(\"quantity\") # y-axis label\nplt.hist(error,interval,linewidth=1,edgecolor='black') # plot the histogram\n\nplt.show() # display the figure", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
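A side note on the Keras record above: it trains for a fixed 200 epochs even though val_loss flattens after roughly epoch 40. A minimal sketch of cutting the run short with Keras's EarlyStopping callback; the model, train_data, and train_label names are assumed to match the notebook's earlier cells, and the patience value and validation_split are illustrative choices, not settings taken from the notebook:

from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for `patience` consecutive epochs,
# then roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)

history = model.fit(train_data, train_label,   # assumed variable names
                    validation_split=0.2,      # placeholder; the notebook's split is not shown here
                    epochs=200,
                    callbacks=[early_stop])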
e726d90ebd5cd9fbeba3e810d443c99133a64f1d
9,700
ipynb
Jupyter Notebook
lecture02-conda-python/my-first-notebook.ipynb
chrisschuerz/lecture-scientific-computing
26eeaa7dd1374065a02b55b7544633774be0f89f
[ "MIT" ]
9
2020-03-06T15:31:20.000Z
2020-03-26T07:43:17.000Z
lecture02-conda-python/my-first-notebook.ipynb
chrisschuerz/lecture-scientific-computing
26eeaa7dd1374065a02b55b7544633774be0f89f
[ "MIT" ]
19
2020-03-09T05:04:46.000Z
2020-06-02T13:18:31.000Z
lecture02-conda-python/my-first-notebook.ipynb
chrisschuerz/lecture-scientific-computing
26eeaa7dd1374065a02b55b7544633774be0f89f
[ "MIT" ]
18
2020-03-05T20:20:22.000Z
2020-05-28T15:10:50.000Z
21.748879
393
0.539175
[ [ [ "print(\"Hello World!\")", "_____no_output_____" ] ], [ [ "## This is text shown in the notebook with markdown\n\n", "_____no_output_____" ], [ "Python as a calculator:", "_____no_output_____" ] ], [ [ "2 + 3 * 5", "_____no_output_____" ], [ "# This is a comment, no markdown here!\nx = 1", "_____no_output_____" ] ], [ [ "x is a variable. It can change its value by assignment.", "_____no_output_____" ] ], [ [ "print(x)", "_____no_output_____" ], [ "x = x + 1", "_____no_output_____" ], [ "x", "_____no_output_____" ] ], [ [ "You can have as many variables as you want...", "_____no_output_____" ] ], [ [ "y = 7", "_____no_output_____" ], [ "z = x * y", "_____no_output_____" ] ], [ [ "The output of the last statement is shown in the console if there is no assignment.", "_____no_output_____" ] ], [ [ "z = z", "_____no_output_____" ], [ "z\n\ny", "_____no_output_____" ] ], [ [ "print() can be used to show the contents of variables. It always works, but it is messy. Better to use logging (later in class?).", "_____no_output_____" ] ], [ [ "print(z)\n\nprint(y)", "_____no_output_____" ] ], [ [ "# If statement\n\nAn if statement evaluates a boolean expression (TRUE or FALSE). If the expression is TRUE, the first command block is executed.\nIf the expression is FALSE, the second command block is executed.", "_____no_output_____" ] ], [ [ "if z > 3:\n print(\"z is larger than 3\")\nelse:\n print(\"z is smaller than or equal to 3\")", "_____no_output_____" ] ], [ [ "# Correct indentation in Python is syntax, i.e. obligatory\n", "_____no_output_____" ] ], [ [ "if z > 3:\nprint(\"z is larger than 3\")", "_____no_output_____" ] ], [ [ "# Exercise 1\nCreate two variables named `infected_03_17` and `infected_03_21`.\nAssign the values `1332` and `2785`, respectively.\n\nAssign `4` to the variable `diff_days`.\n\nCalculate the average daily discrete growth rate given by $\\left(\\frac{\\textrm{infected_03_21}}{\\textrm{infected_03_17}}\\right)^{\\frac{1}{\\textrm{diff_days}}} - 1$ and assign it to the variable `growth_rate`. Note: the power function $x^2$ is written as `x**2` in Python code.\n\nDisplay the `growth_rate`.", "_____no_output_____" ], [ "# Functions\nA function is used to put code into one container that can be reused. You will write some functions, but you will use even more functions.\n\nMost functions you are going to use are part of packages, i.e. containers of functions (more on that later!). For now you have to know that if you want to use a function, you first have to import the package. The package has to be available on your computer, so sometimes it has to be installed with conda first. Many important packages come in the conda standard installation already.\n", "_____no_output_____" ], [ "Calculate the square root of a number:", "_____no_output_____" ] ], [ [ "import math\n\nmath.sqrt(7)", "_____no_output_____" ] ], [ [ "Of course, you can also pass variables to functions:", "_____no_output_____" ] ], [ [ "x = 7\n\nmath.sqrt(x)", "_____no_output_____" ] ], [ [ "For the moment, we do not write our own functions, but only use them. This will also be important for your homework, where you use functions to encrypt a text.", "_____no_output_____" ], [ "# Exercise 2\n\nCalculate the logarithm of `infected_03_17` and `infected_03_21` with the Python library `math`. Use Google to find the correct function name in Python. Or guess ;-)\n\nDisplay the result.", "_____no_output_____" ], [ "# Exercise 3\n\n\n", "_____no_output_____" ], [ "The code below assigns a random value in the range $[0, 50]$ to $x$. 
Add code to the cell that prints \"Above 25\" if the value of $x$ is above $25$ and \"Below 25\" if the value of $x$ is below $25$. Use an if statement for that purpose.", "_____no_output_____" ] ], [ [ "import random\n\nx = random.randrange(0, 50)", "_____no_output_____" ] ], [ [ "# Exercise 4", "_____no_output_____" ], [ "The following code should calculate the doubling time of an exponential process from the growth_rate calculated before. However, something is wrong. Find the error!", "_____no_output_____" ] ], [ [ "doubling_time = log(2) / log(1 + growth_rate)\n\ndoubling_time", "_____no_output_____" ] ], [ [ "# Writing your own functions\n\nSomewhere up here, we have calculated the growth rate of a process for two timesteps. It would be handy to reuse this code without having to retype the formula each time. This can be accomplished with your own user-defined function.", "_____no_output_____" ] ], [ [ "def growth_rate(value_t1, value_t2, time_diff):\n result = (value_t2 / value_t1)**(1 / time_diff) - 1\n \n return result", "_____no_output_____" ], [ "growth_rate", "_____no_output_____" ], [ "growth_rate(100, 200, 1)", "_____no_output_____" ], [ "t1 = 100\nt2 = 200\ndiff = 1\n\ngrowth_rate(t1, t2, diff)", "_____no_output_____" ], [ "value_t1 = 100\nvalue_t2 = 200\ntime_diff = 1\n\ngrowth_rate(value_t1, value_t2, time_diff)", "_____no_output_____" ] ], [ [ "# Exercise 5\n\nWrite a function to determine the doubling time of an exponential process, given the growth rate.", "_____no_output_____" ], [ "# Exercise 6\n\nThis function should calculate the relative growth rate of a process given two discrete variables. The formula is given by $\\textrm{relative_growth} = \\frac{\\textrm{value_t2} - \\textrm{value_t1}}{\\textrm{value_t2}}$. However, the function does not work. Find the problem!", "_____no_output_____" ] ], [ [ "def relative_growth(value_t1, value_t2):\n result = (x_t2 - x_t1) / x_t1\n return result", "_____no_output_____" ], [ "relative_growth(2, 1)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
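The exercises in the tutorial record above ask for a growth rate, logarithms, and a doubling time. A minimal sketch of one possible solution; the infection counts come from Exercise 1 itself, but the function bodies are just one way to solve it, not the course's official answers:

import math

infected_03_17 = 1332
infected_03_21 = 2785
diff_days = 4

# Exercise 1: average daily discrete growth rate
growth_rate = (infected_03_21 / infected_03_17) ** (1 / diff_days) - 1
print(growth_rate)                      # ~0.2025

# Exercise 2: logarithms via the math library
print(math.log(infected_03_17), math.log(infected_03_21))

# Exercise 5: doubling time of an exponential process
def doubling_time(rate):
    return math.log(2) / math.log(1 + rate)

print(doubling_time(growth_rate))       # ~3.76 days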
e726ddeff3fc7e465b63a3e816b7121707d846a6
420,309
ipynb
Jupyter Notebook
notebooks/learned-prior/vae-prior.ipynb
act65/mri-reconstruction
2dcf30e10c37a482f1aab2524c5966d03eb72085
[ "MIT" ]
8
2018-12-30T10:33:44.000Z
2021-07-16T09:59:09.000Z
notebooks/learned-prior/vae-prior.ipynb
gongjizhang/mri-reconstruction
2dcf30e10c37a482f1aab2524c5966d03eb72085
[ "MIT" ]
1
2018-06-01T03:51:35.000Z
2018-08-27T03:39:34.000Z
notebooks/learned-prior/vae-prior.ipynb
gongjizhang/mri-reconstruction
2dcf30e10c37a482f1aab2524c5966d03eb72085
[ "MIT" ]
5
2018-12-30T10:33:45.000Z
2021-11-15T11:19:56.000Z
898.096154
104,220
0.952097
[ [ [ "import tensorflow as tf\n# tf.enable_eager_execution()\nimport src.infovae as infovae\nimport src.mri as mri\n\nimport numpy as np\nimport matplotlib.pyplot as plt", "/home/act65/anaconda3/envs/mri/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n" ], [ "from tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"/tmp/MNIST_data/\", one_hot=False)", "WARNING:tensorflow:From <ipython-input-2-4fc822d0e34a>:2: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use alternatives such as official/mnist/dataset.py from tensorflow/models.\nWARNING:tensorflow:From /home/act65/anaconda3/envs/mri/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease write your own downloading logic.\nWARNING:tensorflow:From /home/act65/anaconda3/envs/mri/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use tf.data to implement this functionality.\nExtracting /tmp/MNIST_data/train-images-idx3-ubyte.gz\nWARNING:tensorflow:From /home/act65/anaconda3/envs/mri/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use tf.data to implement this functionality.\nExtracting /tmp/MNIST_data/train-labels-idx1-ubyte.gz\nExtracting /tmp/MNIST_data/t10k-images-idx3-ubyte.gz\nExtracting /tmp/MNIST_data/t10k-labels-idx1-ubyte.gz\nWARNING:tensorflow:From /home/act65/anaconda3/envs/mri/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use alternatives such as official/mnist/dataset.py from tensorflow/models.\n" ], [ "with tf.variable_scope('s'):\n\n nn = infovae.InfoVAE(8, 64, 4)\n inputs = tf.placeholder(shape=[None, 32, 32, 1], dtype=tf.float32)\n\n y = nn(inputs)\n d = int(0.5*32*32*1)\n f = mri.MRI(N=d, stddev=0.0001)", "_____no_output_____" ], [ "def construct_loss(r, y, coeffs):\n p = nn.estimate_density(r)\n recon = nn(r)\n h = f(r)\n recon_loss = tf.reduce_mean(tf.reduce_sum(tf.square(h - y), axis=1))\n prior_loss = tf.reduce_mean(tf.reduce_sum(tf.square(nn.h[:,:,:,:8]), axis=1)) \n# prior_loss = tf.reduce_mean(1 - p)\n tv_loss = tf.reduce_mean(tf.image.total_variation(r))\n\n agg_loss = (coeffs[0]*recon_loss+\n coeffs[1]*prior_loss+\n coeffs[2]*tv_loss)\n return agg_loss", "_____no_output_____" ], [ "saver = tf.train.Saver(nn.encoder.variables + nn.decoder.variables)\nmodel_path = '/media/act65/640a4914-fb27-4b69-acc6-2a2d569c900b/infovae/17'\n\ndef reconstruct(x, N, coeffs, lr=0.1):\n iters = N//10\n global_step = 
tf.train.get_or_create_global_step()\n stddev = tf.train.exponential_decay(0.1,global_step,iters//5,0.5)\n \n y = f(inputs)\n r = tf.Variable(tf.zeros_like(x)) #tf.nn.sigmoid(tf.random_normal(tf.shape(x))))\n loss = construct_loss(r, y, coeffs)\n train_step = tf.train.AdamOptimizer(lr).minimize(loss, var_list=[r], global_step=global_step)\n op = r.assign_add(tf.random_normal(shape=tf.shape(r), stddev=stddev))\n train_step = tf.group(*[train_step, op])\n \n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n saver.restore(sess, tf.train.latest_checkpoint(model_path))\n \n plt.figure(figsize=(16,8))\n for i in range(N):\n sess.run(train_step, feed_dict={inputs: x})\n \n if i % iters == 0:\n recon, L = sess.run([r, loss], feed_dict={inputs: x})\n print('\\r{}'.format(L),end='',flush=True)\n plt.subplot(2,5,i//iters+1)\n plt.imshow(recon[0,:,:,0].reshape((32,32)), cmap='gray', interpolation='nearest')\n plt.axis('off')\n R = sess.run(r)\n return R", "_____no_output_____" ], [ "# check the restored params\nwith tf.variable_scope('scope'):\n r = nn.decoder(tf.random_normal(shape=tf.shape(nn.z)))\n with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint(model_path))\n recon = sess.run(r, feed_dict={inputs: infovae.InfoVAE.preprocess(mnist.train.images[0:25, ...])})\n\nplt.figure(figsize=(16,16)) \nfor i in range(len(recon)):\n plt.subplot(5,5,i+1)\n plt.imshow(recon[i,:,:,0].reshape((32,32)), cmap='gray', interpolation='nearest')", "INFO:tensorflow:Restoring parameters from /media/act65/640a4914-fb27-4b69-acc6-2a2d569c900b/infovae/17/infovae_saver.ckpt\n" ], [ "x = infovae.InfoVAE.preprocess(mnist.train.images[0:3, ...])\nrecon = reconstruct(x, 100, coeffs=[1,0,0,0]) # no regularisation, just the data\n\nfor i in range(3):\n plt.figure()\n plt.subplot(1,2,1)\n plt.imshow(recon[i, :, :, 0], interpolation='nearest', cmap='gray')\n plt.subplot(1,2,2)\n plt.imshow(mnist.train.images[i].reshape((28,28)), interpolation='nearest', cmap='gray')", "INFO:tensorflow:Restoring parameters from /media/act65/640a4914-fb27-4b69-acc6-2a2d569c900b/infovae/17/infovae_saver.ckpt\n0.05640988424420357" ], [ "x = infovae.InfoVAE.preprocess(mnist.train.images[0:3, ...])\nrecon = reconstruct(x, 100, coeffs=[0,1,0], lr=0.1) # learned prior\n\nfor i in range(3):\n plt.figure()\n plt.subplot(1,2,1)\n plt.imshow(recon[i, :, :, 0], interpolation='nearest', cmap='gray')\n plt.subplot(1,2,2)\n plt.imshow(mnist.train.images[i].reshape((28,28)), interpolation='nearest', cmap='gray')", "INFO:tensorflow:Restoring parameters from /media/act65/640a4914-fb27-4b69-acc6-2a2d569c900b/infovae/17/infovae_saver.ckpt\n2.5869674118439434e-06" ], [ "recon = reconstruct(x, 100, coeffs=[1,1,0]) # learned prior and recon\n\nfor i in range(3):\n plt.figure()\n plt.subplot(1,2,1)\n plt.imshow(recon[i, :, :, 0], interpolation='nearest', cmap='gray')\n plt.subplot(1,2,2)\n plt.imshow(mnist.train.images[i].reshape((28,28)), interpolation='nearest', cmap='gray')", "INFO:tensorflow:Restoring parameters from /media/act65/640a4914-fb27-4b69-acc6-2a2d569c900b/infovae/15/infovae_saver.ckpt\n0.34958934783935547" ], [ "recon = reconstruct(x, 500, coeffs=[1,0,1]) # CS with TV prior\n\nfor i in range(3):\n plt.figure()\n plt.subplot(1,2,1)\n plt.imshow(recon[i, :, :, 0], interpolation='nearest', cmap='gray')\n plt.subplot(1,2,2)\n plt.imshow(mnist.train.images[i].reshape((28,28)), interpolation='nearest', cmap='gray')", "INFO:tensorflow:Restoring parameters from 
/media/act65/640a4914-fb27-4b69-acc6-2a2d569c900b/infovae/15/infovae_saver.ckpt\n124.24583435058594" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
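The reconstruct routine in the vae-prior record above minimizes coeffs[0]*||f(r) - y||^2 + coeffs[1]*prior + coeffs[2]*TV(r) with Adam while injecting Gaussian noise of exponentially decaying stddev into r. A minimal NumPy sketch of the same idea for a plain linear forward model; A, y, the step size, and the noise schedule are illustrative stand-ins, not the notebook's MRI operator or its learned decoder prior:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(512, 1024))      # stand-in linear forward model
x_true = rng.normal(size=1024)
y = A @ x_true                        # simulated measurements

r = np.zeros(1024)                    # start from zeros, as the notebook does
lr, stddev = 1e-4, 0.1
for _ in range(500):
    grad = A.T @ (A @ r - y)          # gradient of 0.5 * ||A r - y||^2
    r -= lr * grad                    # plain gradient step (the notebook uses Adam)
    r += rng.normal(scale=stddev, size=r.shape)   # noise injection
    stddev *= 0.99                    # decay, mirroring tf.train.exponential_decay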
e726e5a13d91d39b18ea80a36c623f4bdd98449a
376,232
ipynb
Jupyter Notebook
examples/cloudml-support-routing/Notebooks/EDA.ipynb
gmogr/professional-services
de342506127ec66af53ebefa560ecf1d827167c2
[ "Apache-2.0" ]
2,116
2017-05-18T19:33:05.000Z
2022-03-31T13:34:48.000Z
examples/cloudml-support-routing/Notebooks/EDA.ipynb
gmogr/professional-services
de342506127ec66af53ebefa560ecf1d827167c2
[ "Apache-2.0" ]
548
2017-05-20T05:05:35.000Z
2022-03-28T16:38:12.000Z
examples/cloudml-support-routing/Notebooks/EDA.ipynb
gmogr/professional-services
de342506127ec66af53ebefa560ecf1d827167c2
[ "Apache-2.0" ]
1,095
2017-05-19T00:02:36.000Z
2022-03-31T05:21:39.000Z
81.736259
41,188
0.745104
[ [ [ "from google.cloud import bigquery\nimport pandas", "_____no_output_____" ], [ "client = bigquery.Client(location=\"US\")\nprint(\"Client creating using default project: {}\".format(client.project))", "Client creating using default project: ut-goog\n" ] ], [ [ "# Looking at the first half of the columns from the dataset.", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n date_received, \n product, subproduct, \n issue, subissue, \n consumer_complaint_narrative, \n company_public_response, \n company_name, \n state \n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\ncomplaints_df = query_job.to_dataframe()\ncomplaints_df", "_____no_output_____" ] ], [ [ "# Looking at null values by column\nPercentage of missing values by columns", "_____no_output_____" ] ], [ [ "(complaints_df.isnull().sum()/len(complaints_df)*100).sort_values(ascending=False)", "_____no_output_____" ] ], [ [ "### Can see that the narrative is the highest missing. ", "_____no_output_____" ] ], [ [ "(len(complaints_df)-complaints_df.isnull().sum()).sort_values()", "_____no_output_____" ] ], [ [ "### We have half a million complaint narratives that we can use the NLP API on.", "_____no_output_____" ] ], [ [ "complaints_df.dtypes", "_____no_output_____" ] ], [ [ "### Looking at Products and Subproducts", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n product, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n product\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nproducts_df = query_job.to_dataframe()\nproducts_df", "_____no_output_____" ], [ "products_df.plot(kind=\"bar\",x='product')", "_____no_output_____" ], [ "products_df", "_____no_output_____" ] ], [ [ "Most have to do with credit and debt. Also, notice that a lot of the labels are very similar (i.e. \"credit card\", \"prepaid card\", \"credit card or prepaid card\"). Might have to find a way to wrap these up.", "_____no_output_____" ], [ "### Look at and maybe combine some of the products?\n### Could the different product labels be due to different companies? or maybe they changed over time with the form?", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n company_name, product, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n company_name, product\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nproducts_company_df = query_job.to_dataframe()", "_____no_output_____" ], [ "products_company_df[:10]", "_____no_output_____" ], [ "products_company_df[products_company_df['company_name']=='EQUIFAX, INC.']", "_____no_output_____" ], [ "products_company_df[:10].plot(kind='bar',x='product')", "_____no_output_____" ] ], [ [ "Most complaints for the companies seem to be about credit cards or credit. Looks like the differing labels for the categories are not company related. 
This makes sense since the complaints go through the CFPB and thus are probably standardized. The differing labels may be due to the complaint form changing over time. ", "_____no_output_____" ], [ "### Let's look at subproducts to see if there is a reason for the different product types", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n subproduct, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n subproduct\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nsubproducts_df = query_job.to_dataframe()\nsubproducts_df", "_____no_output_____" ], [ "query = \"\"\"\n SELECT \n product, subproduct, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n product, subproduct\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nproducts3_df = query_job.to_dataframe()\nproducts3_df[:20]", "_____no_output_____" ] ], [ [ "Notice row 3, \"I do not know\". What does this mean? \n\nAlso, looks like we can combine some subproducts, e.g. \"credit card debt\", \"credit card or medical debt\", and \"medical\". Or at least look into why they are all separated.", "_____no_output_____" ] ], [ [ "products3_df[products3_df['product'] == 'Credit card or prepaid card']", "_____no_output_____" ] ], [ [ "The plain credit card product has no subproduct, but the one that combines credit card and prepaid card does. 
Do they mean different things?", "_____no_output_____" ], [ "# Looking at issues and subissues", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n issue, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n issue\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nissue_df = query_job.to_dataframe()\nissue_df[:10]", "_____no_output_____" ] ], [ [ "Can see that some of these look repetitive.", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n company_name, issue, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n company_name, issue\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nissue2_df = query_job.to_dataframe()\nissue2_df[:10]", "_____no_output_____" ], [ "query = \"\"\"\n SELECT \n subissue, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n subissue\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nsubissue_df = query_job.to_dataframe()\nsubissue_df[:10]", "_____no_output_____" ] ], [ [ "\"Debt is not mine\" and \"debt is not yours\" seem to be the same thing?", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n issue, subissue, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n issue, subissue\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nsubissue2_df = query_job.to_dataframe()\nsubissue2_df[:15]", "_____no_output_____" ] ], [ [ "# Looking At Which Companies Had The Most Complaints", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n company_name, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n company_name\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\ncompanies_df = query_job.to_dataframe()\ncompanies_df", "_____no_output_____" ], [ "query = \"\"\"\n SELECT \n company_name, COUNT(complaint_id) as total, (COUNT(complaint_id)/1498059)*100 as percentage\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n company_name\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\ncompanies_df2 = query_job.to_dataframe()\ncompanies_df2", "_____no_output_____" ], [ "companies_df2[:10].plot(kind='bar', x='company_name',y='total')", "_____no_output_____" ] ], [ [ "# Checking if companies have 
different spellings", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n DISTINCT(company_name)\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n company_name LIKE \"C%\"\n ORDER BY company_name\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\ncompanies_df3 = query_job.to_dataframe()\ncompanies_df3[:30]", "_____no_output_____" ] ], [ [ "Doesn't look like there are differently spelled company names, but more analysis could be performed.", "_____no_output_____" ], [ "# Looking at the states sorted by most complaints", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n state, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n state\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nstate_df = query_job.to_dataframe()\nstate_df", "_____no_output_____" ] ], [ [ "Looks like the data includes US territories as well. For example, \"AS\" means American Samoa.", "_____no_output_____" ] ], [ [ "state_df[:10].plot(kind='bar', x='state')", "_____no_output_____" ] ], [ [ "Complaints by state seem to be correlated with population size.", "_____no_output_____" ], [ "# Resolution\n\nHere we will look at how the company publicly responded to the complaint.", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n company_public_response, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n company_public_response\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nresponse_df = query_job.to_dataframe()\nresponse_df[:10]", "_____no_output_____" ], [ "response_df['company_public_response'][2]", "_____no_output_____" ] ], [ [ "Most companies chose to have no response. We might also need to look into the difference between \"None\" and \"Company chooses not to provide a public response\".", "_____no_output_____" ], [ "### Maybe there is a trend or pattern that indicates some companies are more prone to certain public responses over others?", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n company_name, company_public_response, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n company_name, company_public_response\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nresponse2_df = query_job.to_dataframe()\nresponse2_df[:10]", "_____no_output_____" ] ], [ [ "Two response types seem to be the most common. Also, big companies like Equifax, Chase, and Capital One seem to prefer the \"None\" response. 
Why is that?", "_____no_output_____" ] ], [ [ "response2_df[response2_df['company_name']=='EQUIFAX, INC.']", "_____no_output_____" ], [ "response2_df[response2_df['company_name']=='JPMORGAN CHASE & CO.']", "_____no_output_____" ] ], [ [ "Why do JP Morgan and Equifax mainly have \"None\" as their response?", "_____no_output_____" ] ], [ [ "response2_df[response2_df['company_public_response']=='Company believes it acted appropriately as authorized by contract or law'][:10]", "_____no_output_____" ] ], [ [ "Looks like many of the \"smaller\" companies believe that they have acted appropriately.", "_____no_output_____" ], [ "# Looking at date", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n extract(year from date_received) as year, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n year\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nyear_df = query_job.to_dataframe()", "_____no_output_____" ], [ "year_df.sort_values(by='year').plot(x='year')", "_____no_output_____" ], [ "year_df.sort_values(by='year')", "_____no_output_____" ] ], [ [ "Data goes all the way back to 2011, but only 2015 onward has consumer complaint narratives. 2020 drops due to not being a full year yet.", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n extract(month from date_received) as month, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n month\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nmonth_df = query_job.to_dataframe()", "_____no_output_____" ], [ "month_df.sort_values(by='month').plot(kind='bar',x='month')", "_____no_output_____" ], [ "query = \"\"\"\n SELECT \n date_received, COUNT(complaint_id) as total\n FROM \n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n GROUP BY\n date_received\n ORDER BY\n total DESC\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\ntime_df = query_job.to_dataframe()", "_____no_output_____" ], [ "from datetime import datetime\ntime_df['date_received'] = pandas.to_datetime(time_df['date_received'])\ntime_df.set_index('date_received', inplace=True)\ntime_df.head()", "_____no_output_____" ], [ "time_df['total'].plot()", "_____no_output_____" ], [ "dates = pandas.date_range(start='1/1/2017', end='1/1/2018', freq='D')\ntime_df.loc[dates]['total'].plot()", "/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:2: FutureWarning: \nPassing list-likes to .loc or [] with any missing label will raise\nKeyError in the future, you can use .reindex() as an alternative.\n\nSee the documentation here:\nhttps://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike\n \n" ] ], [ [ "Why those spikes?", "_____no_output_____" ] ], [ [ "dates2 = pandas.date_range(start='1/10/2017', end='1/20/2017', freq='D')\ntime_df.loc[dates2]['total'].plot()", "_____no_output_____" ] ], [ [ "Low on the weekends, high on Thursday for this week.\n# Now let's look at the 
other half of the columns", "_____no_output_____" ], [ "These columns look at zipcode, tags, and other variables that describe how the complaint resolved.", "_____no_output_____" ] ], [ [ "query = \"\"\"\n SELECT \n zip_code,\n tags,\n consumer_consent_provided,\n submitted_via,\n date_sent_to_company,\n company_response_to_consumer,\n timely_response,\n consumer_disputed,\n complaint_id\n FROM\n `bigquery-public-data.cfpb_complaints.complaint_database`\n WHERE\n consumer_complaint_narrative IS NOT NULL\n\"\"\"\nquery_job = client.query(\n query,\n # Location must match that of the dataset(s) referenced in the query.\n location=\"US\",\n) # API request - starts the query\n\nsecond_half_df = query_job.to_dataframe()", "_____no_output_____" ], [ "second_half_df.describe()", "_____no_output_____" ] ], [ [ "Looking at null values for the second half of the dataset columns.", "_____no_output_____" ] ], [ [ "(second_half_df.isnull().sum()/len(second_half_df)*100).sort_values(ascending=False)", "_____no_output_____" ] ], [ [ "Tags seems to be missing in a majority of the rows. Same with the consumer_disputed variable.", "_____no_output_____" ] ], [ [ "second_half_df['zip_code'].value_counts().head(n=20)", "_____no_output_____" ] ], [ [ "Looks like part of the zipcode is hidden.", "_____no_output_____" ], [ "Tag seems to be a classifier of some sort.", "_____no_output_____" ] ], [ [ "second_half_df['tags'].value_counts()", "_____no_output_____" ] ], [ [ "Only three groups. What does servicemember mean? What is the age cutoff for older American?", "_____no_output_____" ] ], [ [ "second_half_df['tags'].value_counts().plot(kind='bar')", "_____no_output_____" ] ], [ [ "Look at consumer consent", "_____no_output_____" ] ], [ [ "second_half_df['consumer_consent_provided'].value_counts().plot(kind='bar')", "_____no_output_____" ] ], [ [ "Only one label for this variable.", "_____no_output_____" ] ], [ [ "second_half_df['submitted_via'].value_counts().plot(kind='bar')", "_____no_output_____" ] ], [ [ "All complaints sent through the web.", "_____no_output_____" ], [ "### Possible company responses", "_____no_output_____" ] ], [ [ "second_half_df['company_response_to_consumer'].value_counts()", "_____no_output_____" ] ], [ [ "Probably want to put more emphasis on untimely response and closed with monetary relief. Focusing on these resolutions will save the company money and face-value.", "_____no_output_____" ] ], [ [ "second_half_df['timely_response'].value_counts().plot(kind='bar')", "_____no_output_____" ], [ "second_half_df['consumer_disputed'].value_counts().plot(kind='bar')", "_____no_output_____" ] ], [ [ "Most complaints are resolved in a timely manner and are not disputed.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e726e873fbf65577499e252793cea63da1567360
84,545
ipynb
Jupyter Notebook
Assignment_3/Part_2_retail_dataset/Code.ipynb
KyleLeePiupiupiu/CS677_Assignment
c38278e81f4e58cc6ef020fade2c075e9fc09bf7
[ "MIT" ]
null
null
null
Assignment_3/Part_2_retail_dataset/Code.ipynb
KyleLeePiupiupiu/CS677_Assignment
c38278e81f4e58cc6ef020fade2c075e9fc09bf7
[ "MIT" ]
null
null
null
Assignment_3/Part_2_retail_dataset/Code.ipynb
KyleLeePiupiupiu/CS677_Assignment
c38278e81f4e58cc6ef020fade2c075e9fc09bf7
[ "MIT" ]
null
null
null
219.028497
47,136
0.913821
[ [ [ "import math\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nfrom matplotlib.ticker import PercentFormatter\nfrom collections import Counter\n\nimport numpy as np\n\n\ndef bernford(digit):\n temp = 1 + (1 / digit)\n return math.log(temp, 10)", "_____no_output_____" ], [ "# Extract the data and extract the first digits\ndf = pd.read_csv('online_retail.csv')\ndf = df[['UnitPrice', 'Country']]\ntemp = []\nfor p in df.UnitPrice:\n p = str(p)\n temp.append(int(p[0]))\ndf['PriceDigit'] = temp\n\n# Drop data with PriceDigit == 0\ndf.drop(df[df.PriceDigit == 0].index, inplace=True)", "_____no_output_____" ], [ "# Create the figure size\nf = plt.figure()\nf.set_size_inches(12,24)\n\n# Subplot of real distribution\ndata = df.PriceDigit\nreaHis = f.add_subplot(3,1,1)\n\nn, bins, patches = reaHis.hist(data, bins = 9, histtype='bar', ec='black', rwidth=0.8)\n\nreaHis.set_title('Real Distribution', fontsize=20)\nreaHis.set_xlabel('Digits', fontsize=20)\nreaHis.set_ylabel('Probability', fontsize=20)\n\n# Subplot of equal weight distribution\nx = [ i+1 for i in range(9)]\ny = [ 1/9 * len(data) for i in x]\newHis = f.add_subplot(3,1,2)\n\newHis.bar(x, y, width = 0.8)\n\newHis.set_title('Equal Weight Distribution', fontsize=20)\newHis.set_xlabel('Digits', fontsize=20)\newHis.set_ylabel('Probability', fontsize=20)\n\n# Subplot of Benford distribution\nx = [ i+1 for i in range(9)]\ny = [bernford(i) * len(data) for i in x]\nbfHis = f.add_subplot(3,1,3)\n\nbfHis.bar(x, y, width = 0.8)\n\nbfHis.set_title('Benford Distribution', fontsize=20)\nbfHis.set_xlabel('Digits', fontsize=20)\nbfHis.set_ylabel('Probability', fontsize=20)", "_____no_output_____" ], [ "# Relative error\ndef relativeErr(true, predict):\n ans = abs((true-predict) * 100) / true\n return ans\n\nif __name__ == \"__main__\":\n reD = np.array(n) \n ewD = np.array([ 1/9 * len(df.PriceDigit) for i in range(9)])\n bfD = np.array([bernford(i+1) * len(df.PriceDigit) for i in range(9)])\n\n rErr_ewD = relativeErr(ewD, reD)\n rErr_bfD = relativeErr(bfD, reD)\n x = [i+1 for i in range(9)]\n\n\n # Plotting\n f = plt.figure()\n f.set_size_inches(12,24)\n\n ## Plot equal weight \n a = f.add_subplot(2,1,1)\n a.bar(x, rErr_ewD, width = 0.8)\n\n a.set_title('Equal Weight Error', fontsize=20)\n a.set_xlabel('Digits', fontsize=20)\n a.set_ylabel('Error % ', fontsize=20)\n \n\n ## Plot Benford \n a = f.add_subplot(2,1,2)\n a.bar(x, rErr_bfD, width = 0.8)\n\n a.set_title('Benford Error', fontsize=20)\n a.set_xlabel('Digits', fontsize=20)\n a.set_ylabel('Error % ', fontsize=20)\n ", "_____no_output_____" ], [ "# RMSE\ndef rmse(true, predict):\n temp = (true-predict)\n n = len(true)\n ans = sum(temp*temp/n)**0.5\n return ans\n\nif __name__ == \"__main__\":\n rmse_ewD = rmse(ewD, reD)\n rmse_bfD = rmse(bfD, reD)\n \n print(\"rmse of the equal weight model to the real distribution is {:.1f}\".format(rmse_ewD))\n print(\"rmse of the Benford model to the real distribution is {:.1f}\".format(rmse_bfD))\n\n ", "rmse of the equal weight model to the real distribution is 22256.6\nrmse of the Benford model to the real distribution is 8120.7\n" ] ], [ [ "# Take 3 countries of my choice", "_____no_output_____" ] ], [ [ "# Check how many countries we have\ncountry = set(df.Country)\ncountry", "_____no_output_____" ], [ "# Choose Israel, Finland, Japan\ndfIsr = df[df.Country == 'Israel']\ndfFin = df[df.Country == 'Finland']\ndfJap = df[df.Country == 'Japan']", "_____no_output_____" ], [ "def countFrequency(input):\n countList = [0 for i in range(9)]\n for d in input:\n 
countList[d-1] = countList[d-1] + 1\n return countList\n\nif __name__ == \"__main__\":\n # Compute F P pi\n ## Compute F, real distribution\n fIsr = np.array(countFrequency(dfIsr.PriceDigit))\n fFin = np.array(countFrequency(dfFin.PriceDigit))\n fJap = np.array(countFrequency(dfJap.PriceDigit))\n\n\n ## Compute P, equal weight distribution\n pIsr = np.array([1/9 * len(dfIsr) for i in range(9)])\n pFin = np.array([1/9 * len(dfFin) for i in range(9)])\n pJap = np.array([1/9 * len(dfJap) for i in range(9)])\n ## Compute pi, Benford distribution\n piIsr = np.array([bernford(i+1) * len(dfIsr) for i in range(9)])\n piFin = np.array([bernford(i+1) * len(dfFin) for i in range(9)])\n piJap = np.array([bernford(i+1) * len(dfJap) for i in range(9)])\n", "_____no_output_____" ], [ "# Using RMSE as a 'distance' metric \nprint(\"RMSE of fIsr with pIsr is {:.1f}\".format(rmse(pIsr, fIsr)))\nprint(\"RMSE of fFin with pFin is {:.1f}\".format(rmse(pFin, fFin)))\nprint(\"RMSE of fJap with pJap is {:.1f}\".format(rmse(pJap, fJap)))\n\n# Closest to equal weight\nprint('Israel is the closest to the equal weight P')\n", "RMSE of fIsr with pIsr is 3.6\nRMSE of fFin with pFin is 22.7\nRMSE of fJap with pJap is 25.4\nIsrael is the closest to the equal weight P\n" ], [ "print(len(dfIsr))\nprint(len(dfFin))\nprint(len(dfJap))", "34\n215\n171\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e726e994e1366d03a8f6a9472c62a76f83a14e49
43,781
ipynb
Jupyter Notebook
tomolab/Examples/Example_PET_load_listmode_cyclic.ipynb
TomographyLab/TomoLab
86b9a5894ef1660d7f4de39f560f1f92024b40f8
[ "Apache-2.0" ]
5
2019-06-01T13:16:00.000Z
2022-03-02T10:21:59.000Z
tomolab/Examples/Example_PET_load_listmode_cyclic.ipynb
TomographyLab/TomoLab
86b9a5894ef1660d7f4de39f560f1f92024b40f8
[ "Apache-2.0" ]
null
null
null
tomolab/Examples/Example_PET_load_listmode_cyclic.ipynb
TomographyLab/TomoLab
86b9a5894ef1660d7f4de39f560f1f92024b40f8
[ "Apache-2.0" ]
1
2019-06-01T13:19:18.000Z
2019-06-01T13:19:18.000Z
127.641399
31,564
0.846029
[ [ [ "<img align=\"center\" src=\"./images/occiput_logo3.png\" width=\"400\">", "_____no_output_____" ], [ "# Load listmode PET data with CYCLIC sinogram binning\n### This if useful for PET imaging with cyclic movement - e.g. cardiac, respiratory or both. ", "_____no_output_____" ] ], [ [ "import occiput\nfrom occiput.Reconstruction.PET.PET_scanners import Biograph_mMR", "_____no_output_____" ], [ "pet = occiput.Reconstruction.PET.PET_Cyclic_Scan()\npet.set_scanner(Biograph_mMR)\npet.use_compression(False)", "_____no_output_____" ] ], [ [ "### Create N sinograms, each containing counts from K time frames\n#### All it takes is to define a matrix of size (NxKx2) that contains the start and end times in [ms] of each frame. ", "_____no_output_____" ] ], [ [ "time_matrix = zeros((3,4,2))\ntime_matrix[0,0,0] = 0\ntime_matrix[0,0,1] = 1000\ntime_matrix[0,1,0] = 10000\ntime_matrix[0,1,1] = 11000\ntime_matrix[0,2,0] = 20000\ntime_matrix[0,2,1] = 21000\ntime_matrix[0,3,0] = 30000\ntime_matrix[0,3,1] = 31000\n\ntime_matrix[1,0,0] = 1000\ntime_matrix[1,0,1] = 2000\ntime_matrix[1,1,0] = 11000\ntime_matrix[1,1,1] = 12000\ntime_matrix[1,2,0] = 21000\ntime_matrix[1,2,1] = 22000\ntime_matrix[1,3,0] = 31000\ntime_matrix[1,3,1] = 32000\n\ntime_matrix[2,0,0] = 2000\ntime_matrix[2,0,1] = 3000\ntime_matrix[2,1,0] = 12000\ntime_matrix[2,1,1] = 13000\ntime_matrix[2,2,0] = 22000\ntime_matrix[2,2,1] = 23000\ntime_matrix[2,3,0] = 32000\ntime_matrix[2,3,1] = 33000\n\nprint time_matrix", "[[[ 0. 1000.]\n [ 10000. 11000.]\n [ 20000. 21000.]\n [ 30000. 31000.]]\n\n [[ 1000. 2000.]\n [ 11000. 12000.]\n [ 21000. 22000.]\n [ 31000. 32000.]]\n\n [[ 2000. 3000.]\n [ 12000. 13000.]\n [ 22000. 23000.]\n [ 32000. 33000.]]]\n" ] ], [ [ "#### Load the listmode data and create the sinograms: ", "_____no_output_____" ] ], [ [ "filename = '../data/PET_data/scans/2014_12_03_cylinder/listmode/PET_ACQ_342_20141212155111-0.l.hdr'\npet.import_listmode(filename, time_matrix)", "_____no_output_____" ], [ "pet.frame0", "_____no_output_____" ] ], [ [ "#### Now the object pet contains variables named frameX; each is an instance of PET_Static_Scan. \n#### Objects of type PET_Static_Scan have .promots, .randoms variables and methods to reconstruct. ", "_____no_output_____" ] ], [ [ "prompts0 = pet.frame0.prompts.to_nd_array()\nrandoms0 = pet.frame0.randoms.to_nd_array()\n\nprompts1 = pet.frame1.prompts.to_nd_array()\nrandoms1 = pet.frame1.randoms.to_nd_array()\n\nprompts2 = pet.frame2.prompts.to_nd_array()\nrandoms2 = pet.frame2.randoms.to_nd_array()\n\nprompts_global = pet.static.prompts.to_nd_array()\nrandoms_global = pet.static.randoms.to_nd_array()\n\nprint prompts0.sum()\nprint randoms0.sum()\n\nprint prompts1.sum()\nprint randoms1.sum()\n\nprint prompts2.sum()\nprint randoms2.sum()\n\nprint prompts_global.sum()\nprint randoms_global.sum()", "975811.0\n193805.0\n977308.0\n194648.0\n976703.0\n194740.0\n8.05176e+06\n1.60544e+06\n" ] ], [ [ "Under the hood, the sinograms (prompts and randoms) are stored in C structures that reside in the global scope of the \nlibrary listmode_c.so. The Python wrapper exposes methods to obtain additional information about these C structures, \nsuch as compression ratio. This information can be useful for debugging. 
Let's print the information for the first sinogram (index 0): ", "_____no_output_____" ] ], [ [ "print pet.scanner.listmode.get_measurement_prompt_info(0)\nprint pet.scanner.listmode.get_measurement_delay_info(0)", "{'angles_axial': 0.0, 'time_start': 0L, 'N_u': 344L, 'N_v': 127L, 'N_counts': 975811L, 'compression_ratio': 0.007887070998549461, 'N_locations': 955149L, 'time_bin': 0L, 'listmode_loss': 1.021632194519043, 'angles_azimuthal': 0.0, 'N_azimuthal': 11L, 'size_u': 717.6734619140625, 'size_v': 257.96875, 'N_axial': 252L, 'time_end': 0L}\n{'angles_axial': 0.0, 'time_start': 0L, 'N_u': 344L, 'N_v': 127L, 'N_counts': 193805L, 'compression_ratio': 0.0015973987756296992, 'N_locations': 193450L, 'time_bin': 0L, 'listmode_loss': 1.0018351078033447, 'angles_azimuthal': 0.0, 'N_azimuthal': 11L, 'size_u': 717.6734619140625, 'size_v': 257.96875, 'N_axial': 252L, 'time_end': 0L}\n" ], [ "imshow(pet.frame0.prompts.to_nd_array()[0,5,:,:].T)", "_____no_output_____" ] ], [ [ "#### Note that the pet object can be indexed and iterated. We can use pet[0] instead of pet.frame0 ", "_____no_output_____" ] ], [ [ "imshow(pet[0].prompts.to_nd_array()[0,5,:,:].T)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e726f606ccc3b1c3f43d477543fa91c37232c6be
583,365
ipynb
Jupyter Notebook
Model backlog/Training/Segmentation/Kaggle/73-unet-efficientnetb3-onecycle-batch-norm-warmup.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
38e71eb423f020fd07153e644705e068225e0902
[ "MIT" ]
3
2019-11-19T03:43:43.000Z
2020-02-27T05:38:24.000Z
Model backlog/Training/Segmentation/Kaggle/73-unet-efficientnetb3-onecycle-batch-norm-warmup.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
38e71eb423f020fd07153e644705e068225e0902
[ "MIT" ]
null
null
null
Model backlog/Training/Segmentation/Kaggle/73-unet-efficientnetb3-onecycle-batch-norm-warmup.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
38e71eb423f020fd07153e644705e068225e0902
[ "MIT" ]
2
2019-11-19T04:01:06.000Z
2019-11-19T08:24:12.000Z
192.402704
110,772
0.834875
[ [ [ "### Dependencies", "_____no_output_____" ] ], [ [ "from utillity_script_cloud_segmentation import *\n\nseed = 0\nseed_everything(seed)\nwarnings.filterwarnings(\"ignore\")", "Using TensorFlow backend.\n" ], [ "from keras.callbacks import Callback\n\nclass LRFinder(Callback):\n def __init__(self,\n num_samples,\n batch_size,\n minimum_lr=1e-5,\n maximum_lr=10.,\n lr_scale='exp',\n validation_data=None,\n validation_sample_rate=5,\n stopping_criterion_factor=4.,\n loss_smoothing_beta=0.98,\n save_dir=None,\n verbose=True):\n \"\"\"\n This class uses the Cyclic Learning Rate history to find a\n set of learning rates that can be good initializations for the\n One-Cycle training proposed by Leslie Smith in the paper referenced\n below.\n A port of the Fast.ai implementation for Keras.\n # Note\n This requires that the model be trained for exactly 1 epoch. If the model\n is trained for more epochs, then the metric calculations are only done for\n the first epoch.\n # Interpretation\n Upon visualizing the loss plot, check where the loss starts to increase\n rapidly. Choose a learning rate at somewhat prior to the corresponding\n position in the plot for faster convergence. This will be the maximum_lr lr.\n Choose the max value as this value when passing the `max_val` argument\n to OneCycleLR callback.\n Since the plot is in log-scale, you need to compute 10 ^ (-k) of the x-axis\n # Arguments:\n num_samples: Integer. Number of samples in the dataset.\n batch_size: Integer. Batch size during training.\n minimum_lr: Float. Initial learning rate (and the minimum).\n maximum_lr: Float. Final learning rate (and the maximum).\n lr_scale: Can be one of ['exp', 'linear']. Chooses the type of\n scaling for each update to the learning rate during subsequent\n batches. Choose 'exp' for large range and 'linear' for small range.\n validation_data: Requires the validation dataset as a tuple of\n (X, y) belonging to the validation set. If provided, will use the\n validation set to compute the loss metrics. Else uses the training\n batch loss. Will warn if not provided to alert the user.\n validation_sample_rate: Positive or Negative Integer. Number of batches to sample from the\n validation set per iteration of the LRFinder. Larger number of\n samples will reduce the variance but will take longer time to execute\n per batch.\n If Positive > 0, will sample from the validation dataset\n If Megative, will use the entire dataset\n stopping_criterion_factor: Integer or None. A factor which is used\n to measure large increase in the loss value during training.\n Since callbacks cannot stop training of a model, it will simply\n stop logging the additional values from the epochs after this\n stopping criterion has been met.\n If None, this check will not be performed.\n loss_smoothing_beta: Float. The smoothing factor for the moving\n average of the loss function.\n save_dir: Optional, String. If passed a directory path, the callback\n will save the running loss and learning rates to two separate numpy\n arrays inside this directory. 
If the directory in this path does not\n exist, they will be created.\n verbose: Whether to print the learning rate after every batch of training.\n # References:\n - [A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, weight_decay, and weight decay](https://arxiv.org/abs/1803.09820)\n \"\"\"\n super(LRFinder, self).__init__()\n\n if lr_scale not in ['exp', 'linear']:\n raise ValueError(\"`lr_scale` must be one of ['exp', 'linear']\")\n\n if validation_data is not None:\n self.validation_data = validation_data\n self.use_validation_set = True\n\n if validation_sample_rate > 0 or validation_sample_rate < 0:\n self.validation_sample_rate = validation_sample_rate\n else:\n raise ValueError(\"`validation_sample_rate` must be a positive or negative integer other than o\")\n else:\n self.use_validation_set = False\n self.validation_sample_rate = 0\n\n self.num_samples = num_samples\n self.batch_size = batch_size\n self.initial_lr = minimum_lr\n self.final_lr = maximum_lr\n self.lr_scale = lr_scale\n self.stopping_criterion_factor = stopping_criterion_factor\n self.loss_smoothing_beta = loss_smoothing_beta\n self.save_dir = save_dir\n self.verbose = verbose\n\n self.num_batches_ = num_samples // batch_size\n self.current_lr_ = minimum_lr\n\n if lr_scale == 'exp':\n self.lr_multiplier_ = (maximum_lr / float(minimum_lr)) ** (\n 1. / float(self.num_batches_))\n else:\n extra_batch = int((num_samples % batch_size) != 0)\n self.lr_multiplier_ = np.linspace(\n minimum_lr, maximum_lr, num=self.num_batches_ + extra_batch)\n\n # If negative, use entire validation set\n if self.validation_sample_rate < 0:\n self.validation_sample_rate = self.validation_data[0].shape[0] // batch_size\n\n self.current_batch_ = 0\n self.current_epoch_ = 0\n self.best_loss_ = 1e6\n self.running_loss_ = 0.\n\n self.history = {}\n\n def on_train_begin(self, logs=None):\n\n self.current_epoch_ = 1\n K.set_value(self.model.optimizer.lr, self.initial_lr)\n\n warnings.simplefilter(\"ignore\")\n\n def on_epoch_begin(self, epoch, logs=None):\n self.current_batch_ = 0\n\n if self.current_epoch_ > 1:\n warnings.warn(\n \"\\n\\nLearning rate finder should be used only with a single epoch. \"\n \"Hereafter, the callback will not measure the losses.\\n\\n\")\n\n def on_batch_begin(self, batch, logs=None):\n self.current_batch_ += 1\n\n def on_batch_end(self, batch, logs=None):\n if self.current_epoch_ > 1:\n return\n\n if self.use_validation_set:\n X, Y = self.validation_data[0], self.validation_data[1]\n\n # use 5 random batches from test set for fast approximate of loss\n num_samples = self.batch_size * self.validation_sample_rate\n\n if num_samples > X.shape[0]:\n num_samples = X.shape[0]\n\n idx = np.random.choice(X.shape[0], num_samples, replace=False)\n x = X[idx]\n y = Y[idx]\n\n values = self.model.evaluate(x, y, batch_size=self.batch_size, verbose=False)\n loss = values[0]\n else:\n loss = logs['loss']\n\n # smooth the loss value and bias correct\n running_loss = self.loss_smoothing_beta * loss + (\n 1. - self.loss_smoothing_beta) * loss\n running_loss = running_loss / (\n 1. 
- self.loss_smoothing_beta**self.current_batch_)\n\n # stop logging if loss is too large\n if self.current_batch_ > 1 and self.stopping_criterion_factor is not None and (\n running_loss >\n self.stopping_criterion_factor * self.best_loss_):\n\n if self.verbose:\n print(\" - LRFinder: Skipping iteration since loss is %d times as large as best loss (%0.4f)\"\n % (self.stopping_criterion_factor, self.best_loss_))\n return\n\n if running_loss < self.best_loss_ or self.current_batch_ == 1:\n self.best_loss_ = running_loss\n\n current_lr = K.get_value(self.model.optimizer.lr)\n\n self.history.setdefault('running_loss_', []).append(running_loss)\n if self.lr_scale == 'exp':\n self.history.setdefault('log_lrs', []).append(np.log10(current_lr))\n else:\n self.history.setdefault('log_lrs', []).append(current_lr)\n\n # compute the lr for the next batch and update the optimizer lr\n if self.lr_scale == 'exp':\n current_lr *= self.lr_multiplier_\n else:\n current_lr = self.lr_multiplier_[self.current_batch_ - 1]\n\n K.set_value(self.model.optimizer.lr, current_lr)\n\n # save the other metrics as well\n for k, v in logs.items():\n self.history.setdefault(k, []).append(v)\n\n if self.verbose:\n if self.use_validation_set:\n print(\" - LRFinder: val_loss: %1.4f - lr = %1.8f \" %\n (values[0], current_lr))\n else:\n print(\" - LRFinder: lr = %1.8f \" % current_lr)\n\n def on_epoch_end(self, epoch, logs=None):\n if self.save_dir is not None and self.current_epoch_ <= 1:\n if not os.path.exists(self.save_dir):\n os.makedirs(self.save_dir)\n\n losses_path = os.path.join(self.save_dir, 'losses.npy')\n lrs_path = os.path.join(self.save_dir, 'lrs.npy')\n\n np.save(losses_path, self.losses)\n np.save(lrs_path, self.lrs)\n\n if self.verbose:\n print(\"\\tLR Finder : Saved the losses and learning rate values in path : {%s}\"\n % (self.save_dir))\n\n self.current_epoch_ += 1\n\n warnings.simplefilter(\"default\")\n\n def plot_schedule(self, clip_beginning=None, clip_endding=None):\n \"\"\"\n Plots the schedule from the callback itself.\n # Arguments:\n clip_beginning: Integer or None. If positive integer, it will\n remove the specified portion of the loss graph to remove the large\n loss values in the beginning of the graph.\n clip_endding: Integer or None. If negative integer, it will\n remove the specified portion of the ending of the loss graph to\n remove the sharp increase in the loss values at high learning rates.\n \"\"\"\n try:\n import matplotlib.pyplot as plt\n plt.style.use('seaborn-white')\n except ImportError:\n print(\n \"Matplotlib not found. Please use `pip install matplotlib` first.\"\n )\n return\n\n if clip_beginning is not None and clip_beginning < 0:\n clip_beginning = -clip_beginning\n\n if clip_endding is not None and clip_endding > 0:\n clip_endding = -clip_endding\n\n losses = self.losses\n lrs = self.lrs\n\n if clip_beginning:\n losses = losses[clip_beginning:]\n lrs = lrs[clip_beginning:]\n\n if clip_endding:\n losses = losses[:clip_endding]\n lrs = lrs[:clip_endding]\n\n plt.plot(lrs, losses)\n plt.title('Learning rate vs Loss')\n plt.xlabel('learning rate')\n plt.ylabel('loss')\n plt.show()\n\n @classmethod\n def restore_schedule_from_dir(cls,\n directory,\n clip_beginning=None,\n clip_endding=None):\n \"\"\"\n Loads the training history from the saved numpy files in the given directory.\n # Arguments:\n directory: String. Path to the directory where the serialized numpy\n arrays of the loss and learning rates are saved.\n clip_beginning: Integer or None. 
If positive integer, it will\n remove the specified portion of the loss graph to remove the large\n loss values in the beginning of the graph.\n clip_endding: Integer or None. If negative integer, it will\n remove the specified portion of the ending of the loss graph to\n remove the sharp increase in the loss values at high learning rates.\n Returns:\n tuple of (losses, learning rates)\n \"\"\"\n if clip_beginning is not None and clip_beginning < 0:\n clip_beginning = -clip_beginning\n\n if clip_endding is not None and clip_endding > 0:\n clip_endding = -clip_endding\n\n losses_path = os.path.join(directory, 'losses.npy')\n lrs_path = os.path.join(directory, 'lrs.npy')\n\n if not os.path.exists(losses_path) or not os.path.exists(lrs_path):\n print(\"%s and %s could not be found at directory : {%s}\" %\n (losses_path, lrs_path, directory))\n\n losses = None\n lrs = None\n\n else:\n losses = np.load(losses_path)\n lrs = np.load(lrs_path)\n\n if clip_beginning:\n losses = losses[clip_beginning:]\n lrs = lrs[clip_beginning:]\n\n if clip_endding:\n losses = losses[:clip_endding]\n lrs = lrs[:clip_endding]\n\n return losses, lrs\n\n @classmethod\n def plot_schedule_from_file(cls,\n directory,\n clip_beginning=None,\n clip_endding=None):\n \"\"\"\n Plots the schedule from the saved numpy arrays of the loss and learning\n rate values in the specified directory.\n # Arguments:\n directory: String. Path to the directory where the serialized numpy\n arrays of the loss and learning rates are saved.\n clip_beginning: Integer or None. If positive integer, it will\n remove the specified portion of the loss graph to remove the large\n loss values in the beginning of the graph.\n clip_endding: Integer or None. If negative integer, it will\n remove the specified portion of the ending of the loss graph to\n remove the sharp increase in the loss values at high learning rates.\n \"\"\"\n try:\n import matplotlib.pyplot as plt\n plt.style.use('seaborn-white')\n except ImportError:\n print(\"Matplotlib not found. 
Please use `pip install matplotlib` first.\")\n return\n\n losses, lrs = cls.restore_schedule_from_dir(\n directory,\n clip_beginning=clip_beginning,\n clip_endding=clip_endding)\n\n if losses is None or lrs is None:\n return\n else:\n plt.plot(lrs, losses)\n plt.title('Learning rate vs Loss')\n plt.xlabel('learning rate')\n plt.ylabel('loss')\n plt.show()\n\n @property\n def lrs(self):\n return np.array(self.history['log_lrs'])\n\n @property\n def losses(self):\n return np.array(self.history['running_loss_'])", "_____no_output_____" ] ], [ [ "### Load data", "_____no_output_____" ] ], [ [ "train = pd.read_csv('../input/understanding_cloud_organization/train.csv')\nhold_out_set = pd.read_csv('../input/cloud-data-split-v2/hold-out.csv')\n\nX_train = hold_out_set[hold_out_set['set'] == 'train']\nX_val = hold_out_set[hold_out_set['set'] == 'validation']\n\nprint('Complete set samples:', len(train))\nprint('Train samples: ', len(X_train))\nprint('Validation samples: ', len(X_val))\n\n# Preprocess data\ntrain['image'] = train['Image_Label'].apply(lambda x: x.split('_')[0])\n\ndisplay(X_train.head())", "Complete set samples: 22184\nTrain samples: 4420\nValidation samples: 1105\n" ] ], [ [ "# Model parameters", "_____no_output_____" ] ], [ [ "BACKBONE = 'efficientnetb3'\nBATCH_SIZE = 8\nEPOCHS_WARMUP = 3\nEPOCHS = 15\nLEARNING_RATE = 10**(-2)\nHEIGHT = 320\nWIDTH = 480\nCHANNELS = 3\nN_CLASSES = 4\nSTEP_SIZE_TRAIN = len(X_train)//BATCH_SIZE\nSTEP_SIZE_VALID = len(X_val)//BATCH_SIZE\n\nmodel_path = '70-unet_%s_%sx%s.h5' % (BACKBONE, HEIGHT, WIDTH)\ntrain_images_path = '../input/cloud-images-resized-320x480/train_images320x480/train_images/'", "_____no_output_____" ], [ "class OneCycleLR(Callback):\n def __init__(self,\n max_lr,\n end_percentage=0.1,\n scale_percentage=None,\n maximum_momentum=0.95,\n minimum_momentum=0.85,\n verbose=True):\n \"\"\" This callback implements a cyclical learning rate policy (CLR).\n This is a special case of Cyclic Learning Rates, where we have only 1 cycle.\n After the completion of 1 cycle, the learning rate will decrease rapidly to\n 1/100th of its initial lowest value.\n # Arguments:\n max_lr: Float. Initial learning rate. This also sets the\n starting learning rate (which will be 10x smaller than\n this), and will increase to this value during the first cycle.\n end_percentage: Float. The percentage of all the epochs of training\n that will be dedicated to sharply decreasing the learning\n rate after the completion of 1 cycle. Must be between 0 and 1.\n scale_percentage: Float or None. If float, must be between 0 and 1.\n If None, it will compute the scale_percentage automatically\n based on the `end_percentage`.\n maximum_momentum: Optional. Sets the maximum momentum (initial)\n value, which gradually drops to its lowest value in half-cycle,\n then gradually increases again to stay constant at this max value.\n Can only be used with SGD Optimizer.\n minimum_momentum: Optional. Sets the minimum momentum at the end of\n the half-cycle. Can only be used with SGD Optimizer.\n verbose: Bool. Whether to print the current learning rate after every\n epoch.\n # Reference\n - [A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay](https://arxiv.org/abs/1803.09820)\n - [Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120)\n \"\"\"\n super(OneCycleLR, self).__init__()\n\n if end_percentage < 0.
or end_percentage > 1.:\n raise ValueError(\"`end_percentage` must be between 0 and 1\")\n\n if scale_percentage is not None and (scale_percentage < 0. or scale_percentage > 1.):\n raise ValueError(\"`scale_percentage` must be between 0 and 1\")\n\n self.initial_lr = max_lr\n self.end_percentage = end_percentage\n self.scale = float(scale_percentage) if scale_percentage is not None else float(end_percentage)\n self.max_momentum = maximum_momentum\n self.min_momentum = minimum_momentum\n self.verbose = verbose\n\n if self.max_momentum is not None and self.min_momentum is not None:\n self._update_momentum = True\n else:\n self._update_momentum = False\n\n self.clr_iterations = 0.\n self.history = {}\n\n self.epochs = None\n self.batch_size = None\n self.samples = None\n self.steps = None\n self.num_iterations = None\n self.mid_cycle_id = None\n\n def _reset(self):\n \"\"\"\n Reset the callback.\n \"\"\"\n self.clr_iterations = 0.\n self.history = {}\n\n def compute_lr(self):\n \"\"\"\n Compute the learning rate based on which phase of the cycle it is in.\n - If in the first half of training, the learning rate gradually increases.\n - If in the second half of training, the learning rate gradually decreases.\n - If in the final `end_percentage` portion of training, the learning rate\n is quickly reduced to near 100th of the original min learning rate.\n # Returns:\n the new learning rate\n \"\"\"\n if self.clr_iterations > 2 * self.mid_cycle_id:\n current_percentage = (self.clr_iterations - 2 * self.mid_cycle_id)\n current_percentage /= float((self.num_iterations - 2 * self.mid_cycle_id))\n new_lr = self.initial_lr * (1. + (current_percentage *\n (1. - 100.) / 100.)) * self.scale\n\n elif self.clr_iterations > self.mid_cycle_id:\n current_percentage = 1. - (\n self.clr_iterations - self.mid_cycle_id) / self.mid_cycle_id\n new_lr = self.initial_lr * (1. + current_percentage *\n (self.scale * 100 - 1.)) * self.scale\n\n else:\n current_percentage = self.clr_iterations / self.mid_cycle_id\n new_lr = self.initial_lr * (1. + current_percentage *\n (self.scale * 100 - 1.)) * self.scale\n\n if self.clr_iterations == self.num_iterations:\n self.clr_iterations = 0\n\n return new_lr\n\n def compute_momentum(self):\n \"\"\"\n Compute the momentum based on which phase of the cycle it is in.\n - If in the first half of training, the momentum gradually decreases.\n - If in the second half of training, the momentum gradually increases.\n - If in the final `end_percentage` portion of training, the momentum value\n is kept constant at the maximum initial value.\n # Returns:\n the new momentum value\n \"\"\"\n if self.clr_iterations > 2 * self.mid_cycle_id:\n new_momentum = self.max_momentum\n\n elif self.clr_iterations > self.mid_cycle_id:\n current_percentage = 1. 
- ((self.clr_iterations - self.mid_cycle_id) / float(\n self.mid_cycle_id))\n new_momentum = self.max_momentum - current_percentage * (\n self.max_momentum - self.min_momentum)\n\n else:\n current_percentage = self.clr_iterations / float(self.mid_cycle_id)\n new_momentum = self.max_momentum - current_percentage * (\n self.max_momentum - self.min_momentum)\n\n return new_momentum\n\n def on_train_begin(self, logs={}):\n logs = logs or {}\n\n# self.epochs = self.params['epochs']\n# self.batch_size = self.params['batch_size']\n# self.samples = self.params['samples']\n# self.steps = self.params['steps']\n self.epochs = EPOCHS\n self.batch_size = BATCH_SIZE\n self.samples = len(X_train)\n self.steps = len(X_train)//BATCH_SIZE\n \n\n if self.steps is not None:\n self.num_iterations = self.epochs * self.steps\n else:\n if (self.samples % self.batch_size) == 0:\n remainder = 0\n else:\n remainder = 1\n self.num_iterations = (self.epochs + remainder) * self.samples // self.batch_size\n\n self.mid_cycle_id = int(self.num_iterations * ((1. - self.end_percentage)) / float(2))\n\n self._reset()\n K.set_value(self.model.optimizer.lr, self.compute_lr())\n\n if self._update_momentum:\n if not hasattr(self.model.optimizer, 'momentum'):\n raise ValueError(\"Momentum can be updated only on SGD optimizer !\")\n\n new_momentum = self.compute_momentum()\n K.set_value(self.model.optimizer.momentum, new_momentum)\n\n def on_batch_end(self, epoch, logs=None):\n logs = logs or {}\n\n self.clr_iterations += 1\n new_lr = self.compute_lr()\n\n self.history.setdefault('lr', []).append(\n K.get_value(self.model.optimizer.lr))\n K.set_value(self.model.optimizer.lr, new_lr)\n\n if self._update_momentum:\n if not hasattr(self.model.optimizer, 'momentum'):\n raise ValueError(\"Momentum can be updated only on SGD optimizer !\")\n\n new_momentum = self.compute_momentum()\n\n self.history.setdefault('momentum', []).append(\n K.get_value(self.model.optimizer.momentum))\n K.set_value(self.model.optimizer.momentum, new_momentum)\n\n for k, v in logs.items():\n self.history.setdefault(k, []).append(v)\n\n def on_epoch_end(self, epoch, logs=None):\n if self.verbose:\n if self._update_momentum:\n print(\" - lr: %0.5f - momentum: %0.2f \" %\n (self.history['lr'][-1], self.history['momentum'][-1]))\n\n else:\n print(\" - lr: %0.5f \" % (self.history['lr'][-1]))", "_____no_output_____" ], [ "preprocessing = sm.get_preprocessing(BACKBONE)\n\naugmentation = albu.Compose([albu.HorizontalFlip(p=0.5),\n albu.VerticalFlip(p=0.5),\n albu.ShiftScaleRotate(scale_limit=0.5, rotate_limit=0, \n shift_limit=0.1, border_mode=0, p=0.5)\n ])", "_____no_output_____" ] ], [ [ "### Data generator", "_____no_output_____" ] ], [ [ "train_generator = DataGenerator(\n directory=train_images_path,\n dataframe=X_train,\n target_df=train,\n batch_size=BATCH_SIZE,\n target_size=(HEIGHT, WIDTH),\n n_channels=CHANNELS,\n n_classes=N_CLASSES,\n preprocessing=preprocessing,\n augmentation=augmentation,\n seed=seed)\n\nvalid_generator = DataGenerator(\n directory=train_images_path,\n dataframe=X_val,\n target_df=train,\n batch_size=BATCH_SIZE, \n target_size=(HEIGHT, WIDTH),\n n_channels=CHANNELS,\n n_classes=N_CLASSES,\n preprocessing=preprocessing,\n seed=seed)", "_____no_output_____" ] ], [ [ "# Warm Up (Batch Normalization)", "_____no_output_____" ] ], [ [ "model = sm.Unet(backbone_name=BACKBONE, \n encoder_weights='imagenet',\n classes=N_CLASSES,\n activation='sigmoid',\n input_shape=(None, None, 
CHANNELS))\n\nfreeze_segmentation_model(model)\nmodel.layers[-1].trainable = True\n\nmetric_list = [dice_coef, sm.metrics.iou_score, sm.metrics.f1_score]\noptimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.9, nesterov=True)\n\nmodel.compile(optimizer=optimizer, loss=sm.losses.bce_dice_loss, metrics=metric_list)\nmodel.summary()", "Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b3_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5\n44113920/44107200 [==============================] - 1s 0us/step\nModel: \"model_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, None, None, 3 0 \n__________________________________________________________________________________________________\nstem_conv (Conv2D) (None, None, None, 4 1080 input_1[0][0] \n__________________________________________________________________________________________________\nstem_bn (BatchNormalization) (None, None, None, 4 160 stem_conv[0][0] \n__________________________________________________________________________________________________\nstem_activation (Activation) (None, None, None, 4 0 stem_bn[0][0] \n__________________________________________________________________________________________________\nblock1a_dwconv (DepthwiseConv2D (None, None, None, 4 360 stem_activation[0][0] \n__________________________________________________________________________________________________\nblock1a_bn (BatchNormalization) (None, None, None, 4 160 block1a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock1a_activation (Activation) (None, None, None, 4 0 block1a_bn[0][0] \n__________________________________________________________________________________________________\nblock1a_se_squeeze (GlobalAvera (None, 40) 0 block1a_activation[0][0] \n__________________________________________________________________________________________________\nblock1a_se_reshape (Reshape) (None, 1, 1, 40) 0 block1a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock1a_se_reduce (Conv2D) (None, 1, 1, 10) 410 block1a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock1a_se_expand (Conv2D) (None, 1, 1, 40) 440 block1a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock1a_se_excite (Multiply) (None, None, None, 4 0 block1a_activation[0][0] \n block1a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock1a_project_conv (Conv2D) (None, None, None, 2 960 block1a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock1a_project_bn (BatchNormal (None, None, None, 2 96 block1a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock1b_dwconv (DepthwiseConv2D (None, None, None, 2 216 block1a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock1b_bn (BatchNormalization) (None, None, None, 2 96 
block1b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock1b_activation (Activation) (None, None, None, 2 0 block1b_bn[0][0] \n__________________________________________________________________________________________________\nblock1b_se_squeeze (GlobalAvera (None, 24) 0 block1b_activation[0][0] \n__________________________________________________________________________________________________\nblock1b_se_reshape (Reshape) (None, 1, 1, 24) 0 block1b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock1b_se_reduce (Conv2D) (None, 1, 1, 6) 150 block1b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock1b_se_expand (Conv2D) (None, 1, 1, 24) 168 block1b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock1b_se_excite (Multiply) (None, None, None, 2 0 block1b_activation[0][0] \n block1b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock1b_project_conv (Conv2D) (None, None, None, 2 576 block1b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock1b_project_bn (BatchNormal (None, None, None, 2 96 block1b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock1b_drop (FixedDropout) (None, None, None, 2 0 block1b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock1b_add (Add) (None, None, None, 2 0 block1b_drop[0][0] \n block1a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2a_expand_conv (Conv2D) (None, None, None, 1 3456 block1b_add[0][0] \n__________________________________________________________________________________________________\nblock2a_expand_bn (BatchNormali (None, None, None, 1 576 block2a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock2a_expand_activation (Acti (None, None, None, 1 0 block2a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock2a_dwconv (DepthwiseConv2D (None, None, None, 1 1296 block2a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock2a_bn (BatchNormalization) (None, None, None, 1 576 block2a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock2a_activation (Activation) (None, None, None, 1 0 block2a_bn[0][0] \n__________________________________________________________________________________________________\nblock2a_se_squeeze (GlobalAvera (None, 144) 0 block2a_activation[0][0] \n__________________________________________________________________________________________________\nblock2a_se_reshape (Reshape) (None, 1, 1, 144) 0 block2a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock2a_se_reduce (Conv2D) (None, 1, 1, 6) 870 block2a_se_reshape[0][0] 
\n__________________________________________________________________________________________________\nblock2a_se_expand (Conv2D) (None, 1, 1, 144) 1008 block2a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock2a_se_excite (Multiply) (None, None, None, 1 0 block2a_activation[0][0] \n block2a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock2a_project_conv (Conv2D) (None, None, None, 3 4608 block2a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock2a_project_bn (BatchNormal (None, None, None, 3 128 block2a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock2b_expand_conv (Conv2D) (None, None, None, 1 6144 block2a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2b_expand_bn (BatchNormali (None, None, None, 1 768 block2b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock2b_expand_activation (Acti (None, None, None, 1 0 block2b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock2b_dwconv (DepthwiseConv2D (None, None, None, 1 1728 block2b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock2b_bn (BatchNormalization) (None, None, None, 1 768 block2b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock2b_activation (Activation) (None, None, None, 1 0 block2b_bn[0][0] \n__________________________________________________________________________________________________\nblock2b_se_squeeze (GlobalAvera (None, 192) 0 block2b_activation[0][0] \n__________________________________________________________________________________________________\nblock2b_se_reshape (Reshape) (None, 1, 1, 192) 0 block2b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock2b_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block2b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock2b_se_expand (Conv2D) (None, 1, 1, 192) 1728 block2b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock2b_se_excite (Multiply) (None, None, None, 1 0 block2b_activation[0][0] \n block2b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock2b_project_conv (Conv2D) (None, None, None, 3 6144 block2b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock2b_project_bn (BatchNormal (None, None, None, 3 128 block2b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock2b_drop (FixedDropout) (None, None, None, 3 0 block2b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2b_add (Add) (None, None, None, 3 0 block2b_drop[0][0] \n block2a_project_bn[0][0] 
\n__________________________________________________________________________________________________\nblock2c_expand_conv (Conv2D) (None, None, None, 1 6144 block2b_add[0][0] \n__________________________________________________________________________________________________\nblock2c_expand_bn (BatchNormali (None, None, None, 1 768 block2c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock2c_expand_activation (Acti (None, None, None, 1 0 block2c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock2c_dwconv (DepthwiseConv2D (None, None, None, 1 1728 block2c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock2c_bn (BatchNormalization) (None, None, None, 1 768 block2c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock2c_activation (Activation) (None, None, None, 1 0 block2c_bn[0][0] \n__________________________________________________________________________________________________\nblock2c_se_squeeze (GlobalAvera (None, 192) 0 block2c_activation[0][0] \n__________________________________________________________________________________________________\nblock2c_se_reshape (Reshape) (None, 1, 1, 192) 0 block2c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock2c_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block2c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock2c_se_expand (Conv2D) (None, 1, 1, 192) 1728 block2c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock2c_se_excite (Multiply) (None, None, None, 1 0 block2c_activation[0][0] \n block2c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock2c_project_conv (Conv2D) (None, None, None, 3 6144 block2c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock2c_project_bn (BatchNormal (None, None, None, 3 128 block2c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock2c_drop (FixedDropout) (None, None, None, 3 0 block2c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2c_add (Add) (None, None, None, 3 0 block2c_drop[0][0] \n block2b_add[0][0] \n__________________________________________________________________________________________________\nblock3a_expand_conv (Conv2D) (None, None, None, 1 6144 block2c_add[0][0] \n__________________________________________________________________________________________________\nblock3a_expand_bn (BatchNormali (None, None, None, 1 768 block3a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock3a_expand_activation (Acti (None, None, None, 1 0 block3a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock3a_dwconv (DepthwiseConv2D (None, None, None, 1 4800 block3a_expand_activation[0][0] 
\n__________________________________________________________________________________________________\nblock3a_bn (BatchNormalization) (None, None, None, 1 768 block3a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock3a_activation (Activation) (None, None, None, 1 0 block3a_bn[0][0] \n__________________________________________________________________________________________________\nblock3a_se_squeeze (GlobalAvera (None, 192) 0 block3a_activation[0][0] \n__________________________________________________________________________________________________\nblock3a_se_reshape (Reshape) (None, 1, 1, 192) 0 block3a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock3a_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block3a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock3a_se_expand (Conv2D) (None, 1, 1, 192) 1728 block3a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock3a_se_excite (Multiply) (None, None, None, 1 0 block3a_activation[0][0] \n block3a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock3a_project_conv (Conv2D) (None, None, None, 4 9216 block3a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock3a_project_bn (BatchNormal (None, None, None, 4 192 block3a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock3b_expand_conv (Conv2D) (None, None, None, 2 13824 block3a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3b_expand_bn (BatchNormali (None, None, None, 2 1152 block3b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock3b_expand_activation (Acti (None, None, None, 2 0 block3b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock3b_dwconv (DepthwiseConv2D (None, None, None, 2 7200 block3b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock3b_bn (BatchNormalization) (None, None, None, 2 1152 block3b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock3b_activation (Activation) (None, None, None, 2 0 block3b_bn[0][0] \n__________________________________________________________________________________________________\nblock3b_se_squeeze (GlobalAvera (None, 288) 0 block3b_activation[0][0] \n__________________________________________________________________________________________________\nblock3b_se_reshape (Reshape) (None, 1, 1, 288) 0 block3b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock3b_se_reduce (Conv2D) (None, 1, 1, 12) 3468 block3b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock3b_se_expand (Conv2D) (None, 1, 1, 288) 3744 block3b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock3b_se_excite 
(Multiply) (None, None, None, 2 0 block3b_activation[0][0] \n block3b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock3b_project_conv (Conv2D) (None, None, None, 4 13824 block3b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock3b_project_bn (BatchNormal (None, None, None, 4 192 block3b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock3b_drop (FixedDropout) (None, None, None, 4 0 block3b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3b_add (Add) (None, None, None, 4 0 block3b_drop[0][0] \n block3a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_expand_conv (Conv2D) (None, None, None, 2 13824 block3b_add[0][0] \n__________________________________________________________________________________________________\nblock3c_expand_bn (BatchNormali (None, None, None, 2 1152 block3c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock3c_expand_activation (Acti (None, None, None, 2 0 block3c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_dwconv (DepthwiseConv2D (None, None, None, 2 7200 block3c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock3c_bn (BatchNormalization) (None, None, None, 2 1152 block3c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock3c_activation (Activation) (None, None, None, 2 0 block3c_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_se_squeeze (GlobalAvera (None, 288) 0 block3c_activation[0][0] \n__________________________________________________________________________________________________\nblock3c_se_reshape (Reshape) (None, 1, 1, 288) 0 block3c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock3c_se_reduce (Conv2D) (None, 1, 1, 12) 3468 block3c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock3c_se_expand (Conv2D) (None, 1, 1, 288) 3744 block3c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock3c_se_excite (Multiply) (None, None, None, 2 0 block3c_activation[0][0] \n block3c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock3c_project_conv (Conv2D) (None, None, None, 4 13824 block3c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock3c_project_bn (BatchNormal (None, None, None, 4 192 block3c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock3c_drop (FixedDropout) (None, None, None, 4 0 block3c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_add (Add) (None, None, None, 4 0 block3c_drop[0][0] \n block3b_add[0][0] 
\n__________________________________________________________________________________________________\nblock4a_expand_conv (Conv2D) (None, None, None, 2 13824 block3c_add[0][0] \n__________________________________________________________________________________________________\nblock4a_expand_bn (BatchNormali (None, None, None, 2 1152 block4a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4a_expand_activation (Acti (None, None, None, 2 0 block4a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4a_dwconv (DepthwiseConv2D (None, None, None, 2 2592 block4a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4a_bn (BatchNormalization) (None, None, None, 2 1152 block4a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4a_activation (Activation) (None, None, None, 2 0 block4a_bn[0][0] \n__________________________________________________________________________________________________\nblock4a_se_squeeze (GlobalAvera (None, 288) 0 block4a_activation[0][0] \n__________________________________________________________________________________________________\nblock4a_se_reshape (Reshape) (None, 1, 1, 288) 0 block4a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4a_se_reduce (Conv2D) (None, 1, 1, 12) 3468 block4a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4a_se_expand (Conv2D) (None, 1, 1, 288) 3744 block4a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4a_se_excite (Multiply) (None, None, None, 2 0 block4a_activation[0][0] \n block4a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4a_project_conv (Conv2D) (None, None, None, 9 27648 block4a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4a_project_bn (BatchNormal (None, None, None, 9 384 block4a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_conv (Conv2D) (None, None, None, 5 55296 block4a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_bn (BatchNormali (None, None, None, 5 2304 block4b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_activation (Acti (None, None, None, 5 0 block4b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_dwconv (DepthwiseConv2D (None, None, None, 5 5184 block4b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4b_bn (BatchNormalization) (None, None, None, 5 2304 block4b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4b_activation (Activation) (None, None, None, 5 0 block4b_bn[0][0] 
\n__________________________________________________________________________________________________\n... (per-layer rows for the remaining EfficientNet encoder blocks (4b-7b and top_conv) and the U-Net decoder stages elided for brevity; each encoder block repeats the expand conv -> depthwise conv -> squeeze-and-excite -> project pattern shown above, and each decoder stage is upsampling -> concat -> two conv-BN-ReLU blocks. The table resumes with the network's final layers and parameter totals below) ...
\n__________________________________________________________________________________________________\ndecoder_stage4b_relu (Activatio (None, None, None, 1 0           decoder_stage4b_bn[0][0]         \n__________________________________________________________________________________________________\nfinal_conv (Conv2D)             (None, None, None, 4 580         decoder_stage4b_relu[0][0]       \n__________________________________________________________________________________________________\nsigmoid (Activation)            (None, None, None, 4 0           final_conv[0][0]                 \n==================================================================================================\nTotal params: 17,868,268\nTrainable params: 89,280\nNon-trainable params: 17,778,988\n__________________________________________________________________________________________________\n" ], [ "# Warm-up phase: the pretrained encoder is frozen, so only the decoder weights are updated\nhistory = model.fit_generator(generator=train_generator,\n                              steps_per_epoch=STEP_SIZE_TRAIN,\n                              validation_data=valid_generator,\n                              validation_steps=STEP_SIZE_VALID,\n                              epochs=EPOCHS_WARMUP,\n                              verbose=1).history", "Epoch 1/3\n552/552 [==============================] - 400s 724ms/step - loss: 1.3368 - dice_coef: 0.2186 - iou_score: 0.1182 - f1-score: 0.2084 - val_loss: 1.2482 - val_dice_coef: 0.2153 - val_iou_score: 0.1153 - val_f1-score: 0.2043\nEpoch 2/3\n552/552 [==============================] - 337s 611ms/step - loss: 1.2717 - dice_coef: 0.2122 - iou_score: 0.1134 - f1-score: 0.2012 - val_loss: 1.2446 - val_dice_coef: 0.2114 - val_iou_score: 0.1134 - val_f1-score: 0.2015\nEpoch 3/3\n552/552 [==============================] - 332s 601ms/step - loss: 1.2447 - dice_coef: 0.2088 - iou_score: 0.1116 - f1-score: 0.1985 - val_loss: 1.2312 - val_dice_coef: 0.2087 - val_iou_score: 0.1119 - val_f1-score: 0.1994\n" ] ], [ [ "## Model loss graph\n\nTraining and validation curves (loss, Dice coefficient, IoU and F1) over the warm-up epochs." ] ], [ [ "plot_metrics(history, metric_list=['loss', 'dice_coef', 'iou_score', 'f1-score'])" ] ], [ [ "# Learning rate finder\n\nAn LR range test: with every layer unfrozen, the learning rate is swept from 1e-5 to 10 over a single epoch while the loss is recorded. A good fine-tuning rate sits just before the point where the loss begins to diverge." ] ], [ [ "# Unfreeze the whole network for the range test and keep a copy of the warmed-up weights\nfor layer in model.layers:\n    layer.trainable = True\n\nwarm_weights = model.get_weights()" ], [ "# Sweep the learning rate over one epoch and plot loss vs. learning rate;\n# the red vertical line marks the LEARNING_RATE chosen for fine-tuning\nlr_finder = LRFinder(num_samples=len(X_train), batch_size=BATCH_SIZE, minimum_lr=1e-5, maximum_lr=10, verbose=0)\n\noptimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.9, nesterov=True)\nmodel.compile(optimizer=optimizer, loss=sm.losses.bce_dice_loss)\n\nhistory = model.fit_generator(generator=train_generator, \n                              steps_per_epoch=STEP_SIZE_TRAIN, \n                              epochs=1, \n                              callbacks=[lr_finder])\n\nplt.figure(figsize=(30, 10))\nplt.axvline(x=np.log10(LEARNING_RATE), color='red')\nlr_finder.plot_schedule(clip_beginning=15)", "Epoch 1/1\n552/552 [==============================] - 329s 596ms/step - loss: 1.1711\n" ] ], [ [ "# Model\n\nFine-tune the full network end-to-end: restore the warmed-up weights and train with SGD under a One Cycle learning-rate policy, checkpointing on the best validation loss. A few illustrative sketches of this recipe follow the summary below." ] ], [ [ "# Restore the warmed-up weights and fine-tune everything with a One Cycle schedule,\n# saving the weights whenever the validation loss improves\nmodel.set_weights(warm_weights)\n\ncheckpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True)\noneCycleLR = OneCycleLR(max_lr=LEARNING_RATE, maximum_momentum=0.9, minimum_momentum=0.9)\n\ncallback_list = [checkpoint, oneCycleLR]\noptimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.9, nesterov=True)\n\nmodel.compile(optimizer=optimizer, loss=sm.losses.bce_dice_loss, metrics=metric_list)\nmodel.summary()", "Model: \"model_1\"\n__________________________________________________________________________________________________\n... (per-layer table identical to the architecture summary above; every layer is now trainable) ...\n" ] ]
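, [ [ "### Sketch: the two-phase fine-tuning recipe\n\nFor reference, the warm-up / fine-tune pattern used above can be summarised in isolation. This is a minimal sketch rather than the notebook's exact code: the backbone name, classes=4, the learning rate and the generators are stand-ins for whatever is configured earlier in the notebook." ] ], [ [ "# Minimal sketch of the two-phase training pattern used above (illustrative only;\n# assumes the segmentation_models package and placeholder constants rather than\n# this notebook's exact setup)\nimport segmentation_models as sm\nfrom keras import optimizers\n\n# Phase 1 (warm-up): encoder_freeze=True keeps the pretrained EfficientNet weights\n# fixed while the randomly initialised decoder adapts to the task\nsketch_model = sm.Unet('efficientnetb2', encoder_weights='imagenet',\n                       classes=4, activation='sigmoid', encoder_freeze=True)\nsketch_model.compile(optimizer=optimizers.SGD(lr=1e-3, momentum=0.9, nesterov=True),\n                     loss=sm.losses.bce_dice_loss)\n# sketch_model.fit_generator(train_generator, ...)  # a few warm-up epochs\n\n# Phase 2 (fine-tuning): unfreeze every layer and recompile so the change takes\n# effect, then train end-to-end at a carefully chosen learning rate\nfor layer in sketch_model.layers:\n    layer.trainable = True\nsketch_model.compile(optimizer=optimizers.SGD(lr=1e-3, momentum=0.9, nesterov=True),\n                     loss=sm.losses.bce_dice_loss)" ] ]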
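, [ [ "### Sketch: the One Cycle learning-rate schedule\n\nThe OneCycleLR callback above ramps the learning rate up to max_lr and back down again over the course of training (typically while momentum does the opposite). A simplified, framework-free version of the schedule looks like this; it is illustrative only, and the actual callback may shape the curve differently." ] ], [ [ "# Simplified triangular One Cycle schedule (illustrative; the OneCycleLR callback\n# used above may shape the curve differently)\ndef one_cycle_lr(step, total_steps, max_lr, div_factor=10.0):\n    \"\"\"Ramp the LR linearly from max_lr/div_factor up to max_lr over the first\n    half of training, then back down over the second half.\"\"\"\n    low = max_lr / div_factor\n    half = total_steps / 2.0\n    if step <= half:\n        return low + (max_lr - low) * (step / half)\n    return max_lr - (max_lr - low) * ((step - half) / half)\n\n# Example: the rate peaks at max_lr in the middle of a 100-step run\nprint([round(one_cycle_lr(s, 100, 0.1), 3) for s in (0, 50, 100)])  # [0.01, 0.1, 0.01]" ] ]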
\n__________________________________________________________________________________________________\nblock3c_expand_bn (BatchNormali (None, None, None, 2 1152 block3c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock3c_expand_activation (Acti (None, None, None, 2 0 block3c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_dwconv (DepthwiseConv2D (None, None, None, 2 7200 block3c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock3c_bn (BatchNormalization) (None, None, None, 2 1152 block3c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock3c_activation (Activation) (None, None, None, 2 0 block3c_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_se_squeeze (GlobalAvera (None, 288) 0 block3c_activation[0][0] \n__________________________________________________________________________________________________\nblock3c_se_reshape (Reshape) (None, 1, 1, 288) 0 block3c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock3c_se_reduce (Conv2D) (None, 1, 1, 12) 3468 block3c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock3c_se_expand (Conv2D) (None, 1, 1, 288) 3744 block3c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock3c_se_excite (Multiply) (None, None, None, 2 0 block3c_activation[0][0] \n block3c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock3c_project_conv (Conv2D) (None, None, None, 4 13824 block3c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock3c_project_bn (BatchNormal (None, None, None, 4 192 block3c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock3c_drop (FixedDropout) (None, None, None, 4 0 block3c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_add (Add) (None, None, None, 4 0 block3c_drop[0][0] \n block3b_add[0][0] \n__________________________________________________________________________________________________\nblock4a_expand_conv (Conv2D) (None, None, None, 2 13824 block3c_add[0][0] \n__________________________________________________________________________________________________\nblock4a_expand_bn (BatchNormali (None, None, None, 2 1152 block4a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4a_expand_activation (Acti (None, None, None, 2 0 block4a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4a_dwconv (DepthwiseConv2D (None, None, None, 2 2592 block4a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4a_bn (BatchNormalization) (None, None, None, 2 1152 block4a_dwconv[0][0] 
\n__________________________________________________________________________________________________\nblock4a_activation (Activation) (None, None, None, 2 0 block4a_bn[0][0] \n__________________________________________________________________________________________________\nblock4a_se_squeeze (GlobalAvera (None, 288) 0 block4a_activation[0][0] \n__________________________________________________________________________________________________\nblock4a_se_reshape (Reshape) (None, 1, 1, 288) 0 block4a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4a_se_reduce (Conv2D) (None, 1, 1, 12) 3468 block4a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4a_se_expand (Conv2D) (None, 1, 1, 288) 3744 block4a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4a_se_excite (Multiply) (None, None, None, 2 0 block4a_activation[0][0] \n block4a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4a_project_conv (Conv2D) (None, None, None, 9 27648 block4a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4a_project_bn (BatchNormal (None, None, None, 9 384 block4a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_conv (Conv2D) (None, None, None, 5 55296 block4a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_bn (BatchNormali (None, None, None, 5 2304 block4b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_activation (Acti (None, None, None, 5 0 block4b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_dwconv (DepthwiseConv2D (None, None, None, 5 5184 block4b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4b_bn (BatchNormalization) (None, None, None, 5 2304 block4b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4b_activation (Activation) (None, None, None, 5 0 block4b_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_se_squeeze (GlobalAvera (None, 576) 0 block4b_activation[0][0] \n__________________________________________________________________________________________________\nblock4b_se_reshape (Reshape) (None, 1, 1, 576) 0 block4b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4b_se_reduce (Conv2D) (None, 1, 1, 24) 13848 block4b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4b_se_expand (Conv2D) (None, 1, 1, 576) 14400 block4b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4b_se_excite (Multiply) (None, None, None, 5 0 block4b_activation[0][0] \n block4b_se_expand[0][0] 
\n__________________________________________________________________________________________________\nblock4b_project_conv (Conv2D) (None, None, None, 9 55296 block4b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4b_project_bn (BatchNormal (None, None, None, 9 384 block4b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4b_drop (FixedDropout) (None, None, None, 9 0 block4b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_add (Add) (None, None, None, 9 0 block4b_drop[0][0] \n block4a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4c_expand_conv (Conv2D) (None, None, None, 5 55296 block4b_add[0][0] \n__________________________________________________________________________________________________\nblock4c_expand_bn (BatchNormali (None, None, None, 5 2304 block4c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4c_expand_activation (Acti (None, None, None, 5 0 block4c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4c_dwconv (DepthwiseConv2D (None, None, None, 5 5184 block4c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4c_bn (BatchNormalization) (None, None, None, 5 2304 block4c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4c_activation (Activation) (None, None, None, 5 0 block4c_bn[0][0] \n__________________________________________________________________________________________________\nblock4c_se_squeeze (GlobalAvera (None, 576) 0 block4c_activation[0][0] \n__________________________________________________________________________________________________\nblock4c_se_reshape (Reshape) (None, 1, 1, 576) 0 block4c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4c_se_reduce (Conv2D) (None, 1, 1, 24) 13848 block4c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4c_se_expand (Conv2D) (None, 1, 1, 576) 14400 block4c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4c_se_excite (Multiply) (None, None, None, 5 0 block4c_activation[0][0] \n block4c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4c_project_conv (Conv2D) (None, None, None, 9 55296 block4c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4c_project_bn (BatchNormal (None, None, None, 9 384 block4c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4c_drop (FixedDropout) (None, None, None, 9 0 block4c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4c_add (Add) (None, None, None, 9 0 block4c_drop[0][0] \n block4b_add[0][0] 
\n__________________________________________________________________________________________________\nblock4d_expand_conv (Conv2D) (None, None, None, 5 55296 block4c_add[0][0] \n__________________________________________________________________________________________________\nblock4d_expand_bn (BatchNormali (None, None, None, 5 2304 block4d_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4d_expand_activation (Acti (None, None, None, 5 0 block4d_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4d_dwconv (DepthwiseConv2D (None, None, None, 5 5184 block4d_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4d_bn (BatchNormalization) (None, None, None, 5 2304 block4d_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4d_activation (Activation) (None, None, None, 5 0 block4d_bn[0][0] \n__________________________________________________________________________________________________\nblock4d_se_squeeze (GlobalAvera (None, 576) 0 block4d_activation[0][0] \n__________________________________________________________________________________________________\nblock4d_se_reshape (Reshape) (None, 1, 1, 576) 0 block4d_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4d_se_reduce (Conv2D) (None, 1, 1, 24) 13848 block4d_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4d_se_expand (Conv2D) (None, 1, 1, 576) 14400 block4d_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4d_se_excite (Multiply) (None, None, None, 5 0 block4d_activation[0][0] \n block4d_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4d_project_conv (Conv2D) (None, None, None, 9 55296 block4d_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4d_project_bn (BatchNormal (None, None, None, 9 384 block4d_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4d_drop (FixedDropout) (None, None, None, 9 0 block4d_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4d_add (Add) (None, None, None, 9 0 block4d_drop[0][0] \n block4c_add[0][0] \n__________________________________________________________________________________________________\nblock4e_expand_conv (Conv2D) (None, None, None, 5 55296 block4d_add[0][0] \n__________________________________________________________________________________________________\nblock4e_expand_bn (BatchNormali (None, None, None, 5 2304 block4e_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4e_expand_activation (Acti (None, None, None, 5 0 block4e_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4e_dwconv (DepthwiseConv2D (None, None, None, 5 5184 block4e_expand_activation[0][0] 
\n__________________________________________________________________________________________________\nblock4e_bn (BatchNormalization) (None, None, None, 5 2304 block4e_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4e_activation (Activation) (None, None, None, 5 0 block4e_bn[0][0] \n__________________________________________________________________________________________________\nblock4e_se_squeeze (GlobalAvera (None, 576) 0 block4e_activation[0][0] \n__________________________________________________________________________________________________\nblock4e_se_reshape (Reshape) (None, 1, 1, 576) 0 block4e_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4e_se_reduce (Conv2D) (None, 1, 1, 24) 13848 block4e_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4e_se_expand (Conv2D) (None, 1, 1, 576) 14400 block4e_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4e_se_excite (Multiply) (None, None, None, 5 0 block4e_activation[0][0] \n block4e_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4e_project_conv (Conv2D) (None, None, None, 9 55296 block4e_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4e_project_bn (BatchNormal (None, None, None, 9 384 block4e_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4e_drop (FixedDropout) (None, None, None, 9 0 block4e_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4e_add (Add) (None, None, None, 9 0 block4e_drop[0][0] \n block4d_add[0][0] \n__________________________________________________________________________________________________\nblock5a_expand_conv (Conv2D) (None, None, None, 5 55296 block4e_add[0][0] \n__________________________________________________________________________________________________\nblock5a_expand_bn (BatchNormali (None, None, None, 5 2304 block5a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5a_expand_activation (Acti (None, None, None, 5 0 block5a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5a_dwconv (DepthwiseConv2D (None, None, None, 5 14400 block5a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5a_bn (BatchNormalization) (None, None, None, 5 2304 block5a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock5a_activation (Activation) (None, None, None, 5 0 block5a_bn[0][0] \n__________________________________________________________________________________________________\nblock5a_se_squeeze (GlobalAvera (None, 576) 0 block5a_activation[0][0] \n__________________________________________________________________________________________________\nblock5a_se_reshape (Reshape) (None, 1, 1, 576) 0 block5a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5a_se_reduce 
(Conv2D) (None, 1, 1, 24) 13848 block5a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5a_se_expand (Conv2D) (None, 1, 1, 576) 14400 block5a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5a_se_excite (Multiply) (None, None, None, 5 0 block5a_activation[0][0] \n block5a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5a_project_conv (Conv2D) (None, None, None, 1 78336 block5a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5a_project_bn (BatchNormal (None, None, None, 1 544 block5a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5b_expand_conv (Conv2D) (None, None, None, 8 110976 block5a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock5b_expand_bn (BatchNormali (None, None, None, 8 3264 block5b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5b_expand_activation (Acti (None, None, None, 8 0 block5b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5b_dwconv (DepthwiseConv2D (None, None, None, 8 20400 block5b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5b_bn (BatchNormalization) (None, None, None, 8 3264 block5b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock5b_activation (Activation) (None, None, None, 8 0 block5b_bn[0][0] \n__________________________________________________________________________________________________\nblock5b_se_squeeze (GlobalAvera (None, 816) 0 block5b_activation[0][0] \n__________________________________________________________________________________________________\nblock5b_se_reshape (Reshape) (None, 1, 1, 816) 0 block5b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5b_se_reduce (Conv2D) (None, 1, 1, 34) 27778 block5b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5b_se_expand (Conv2D) (None, 1, 1, 816) 28560 block5b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5b_se_excite (Multiply) (None, None, None, 8 0 block5b_activation[0][0] \n block5b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5b_project_conv (Conv2D) (None, None, None, 1 110976 block5b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5b_project_bn (BatchNormal (None, None, None, 1 544 block5b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5b_drop (FixedDropout) (None, None, None, 1 0 block5b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock5b_add (Add) (None, None, None, 1 0 block5b_drop[0][0] \n block5a_project_bn[0][0] 
\n__________________________________________________________________________________________________\nblock5c_expand_conv (Conv2D) (None, None, None, 8 110976 block5b_add[0][0] \n__________________________________________________________________________________________________\nblock5c_expand_bn (BatchNormali (None, None, None, 8 3264 block5c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5c_expand_activation (Acti (None, None, None, 8 0 block5c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5c_dwconv (DepthwiseConv2D (None, None, None, 8 20400 block5c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5c_bn (BatchNormalization) (None, None, None, 8 3264 block5c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock5c_activation (Activation) (None, None, None, 8 0 block5c_bn[0][0] \n__________________________________________________________________________________________________\nblock5c_se_squeeze (GlobalAvera (None, 816) 0 block5c_activation[0][0] \n__________________________________________________________________________________________________\nblock5c_se_reshape (Reshape) (None, 1, 1, 816) 0 block5c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5c_se_reduce (Conv2D) (None, 1, 1, 34) 27778 block5c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5c_se_expand (Conv2D) (None, 1, 1, 816) 28560 block5c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5c_se_excite (Multiply) (None, None, None, 8 0 block5c_activation[0][0] \n block5c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5c_project_conv (Conv2D) (None, None, None, 1 110976 block5c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5c_project_bn (BatchNormal (None, None, None, 1 544 block5c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5c_drop (FixedDropout) (None, None, None, 1 0 block5c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock5c_add (Add) (None, None, None, 1 0 block5c_drop[0][0] \n block5b_add[0][0] \n__________________________________________________________________________________________________\nblock5d_expand_conv (Conv2D) (None, None, None, 8 110976 block5c_add[0][0] \n__________________________________________________________________________________________________\nblock5d_expand_bn (BatchNormali (None, None, None, 8 3264 block5d_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5d_expand_activation (Acti (None, None, None, 8 0 block5d_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5d_dwconv (DepthwiseConv2D (None, None, None, 8 20400 block5d_expand_activation[0][0] 
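The repeated `*_se_squeeze → *_se_reshape → *_se_reduce → *_se_expand → *_se_excite` groups in the printout are squeeze-and-excitation. A minimal Keras sketch of that sub-block — inferred from the layer names, not taken from this notebook, and assuming TF ≥ 2.2 for the `'swish'` activation string:

```python
from tensorflow.keras import layers

def squeeze_excite(x, channels, reduced_channels):
    """Scale each channel of `x` by a learned, input-dependent weight."""
    # "Squeeze": collapse the spatial dims to one value per channel,
    # then restore a 1x1 spatial shape so the result broadcasts over x.
    se = layers.GlobalAveragePooling2D()(x)        # *_se_squeeze
    se = layers.Reshape((1, 1, channels))(se)      # *_se_reshape
    # "Excite": a bottleneck of two 1x1 convolutions. In EfficientNet the
    # bottleneck width is a quarter of the block's *input* width (before
    # the 6x expansion) -- e.g. block5*_se_reduce has 136 / 4 = 34 filters.
    se = layers.Conv2D(reduced_channels, 1, activation='swish')(se)   # *_se_reduce
    se = layers.Conv2D(channels, 1, activation='sigmoid')(se)         # *_se_expand
    return layers.Multiply()([x, se])              # *_se_excite
```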
The stages repeat (2, 3, 3, 5, 5, 6, 2) times with output widths of 24, 32, 48, 96, 136, 232, and 384 channels — the EfficientNetB3 configuration — and the encoder closes with top_conv → top_bn → top_activation (1536 channels). The U-Net decoder then runs five stages (decoder_stage0–4), each an UpSampling2D followed by a Concatenate with an encoder skip (block6a/block4a/block3a/block2a_expand_activation; the last, full-resolution stage has no skip) and two Conv2D → BatchNormalization → ReLU pairs with 256, 128, 64, 32, and 16 filters. A 3×3 final_conv with 4 output channels and a sigmoid Activation produce the masks.
Total params: 17,868,268
Trainable params: 17,778,988
Non-trainable params: 89,280]
None, 2 0 decoder_stage3_upsampling[0][0] \n block2a_expand_activation[0][0] \n__________________________________________________________________________________________________\ndecoder_stage3a_conv (Conv2D) (None, None, None, 3 59904 decoder_stage3_concat[0][0] \n__________________________________________________________________________________________________\ndecoder_stage3a_bn (BatchNormal (None, None, None, 3 128 decoder_stage3a_conv[0][0] \n__________________________________________________________________________________________________\ndecoder_stage3a_relu (Activatio (None, None, None, 3 0 decoder_stage3a_bn[0][0] \n__________________________________________________________________________________________________\ndecoder_stage3b_conv (Conv2D) (None, None, None, 3 9216 decoder_stage3a_relu[0][0] \n__________________________________________________________________________________________________\ndecoder_stage3b_bn (BatchNormal (None, None, None, 3 128 decoder_stage3b_conv[0][0] \n__________________________________________________________________________________________________\ndecoder_stage3b_relu (Activatio (None, None, None, 3 0 decoder_stage3b_bn[0][0] \n__________________________________________________________________________________________________\ndecoder_stage4_upsampling (UpSa (None, None, None, 3 0 decoder_stage3b_relu[0][0] \n__________________________________________________________________________________________________\ndecoder_stage4a_conv (Conv2D) (None, None, None, 1 4608 decoder_stage4_upsampling[0][0] \n__________________________________________________________________________________________________\ndecoder_stage4a_bn (BatchNormal (None, None, None, 1 64 decoder_stage4a_conv[0][0] \n__________________________________________________________________________________________________\ndecoder_stage4a_relu (Activatio (None, None, None, 1 0 decoder_stage4a_bn[0][0] \n__________________________________________________________________________________________________\ndecoder_stage4b_conv (Conv2D) (None, None, None, 1 2304 decoder_stage4a_relu[0][0] \n__________________________________________________________________________________________________\ndecoder_stage4b_bn (BatchNormal (None, None, None, 1 64 decoder_stage4b_conv[0][0] \n__________________________________________________________________________________________________\ndecoder_stage4b_relu (Activatio (None, None, None, 1 0 decoder_stage4b_bn[0][0] \n__________________________________________________________________________________________________\nfinal_conv (Conv2D) (None, None, None, 4 580 decoder_stage4b_relu[0][0] \n__________________________________________________________________________________________________\nsigmoid (Activation) (None, None, None, 4 0 final_conv[0][0] \n==================================================================================================\nTotal params: 17,868,268\nTrainable params: 17,778,988\nNon-trainable params: 89,280\n__________________________________________________________________________________________________\n" ], [ "history = model.fit_generator(generator=train_generator,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=valid_generator,\n validation_steps=STEP_SIZE_VALID,\n callbacks=callback_list,\n epochs=EPOCHS,\n verbose=2).history", "Epoch 1/15\n" ] ], [ [ "## Model loss graph", "_____no_output_____" ] ], [ [ "plot_metrics(history, metric_list=['loss', 'dice_coef', 'iou_score', 'f1-score'])", "_____no_output_____" ] ], [ [ "## Scheduler 
learning rates", "_____no_output_____" ] ], [ [ "fig, ax1 = plt.subplots(1, 1, figsize=(20, 6))\n\nplt.xlabel('Training Iterations')\nplt.ylabel('Learning Rate')\nplt.title(\"CLR\")\nplt.plot(oneCycleLR.history['lr'])\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e726f7fd14e0713b56e9591340188b0c96ab9052
9,471
ipynb
Jupyter Notebook
.ipynb_checkpoints/minedunews-checkpoint.ipynb
wcmckee/ece-display
43ded4219ba34f51370e82c742eed0fc2cba7330
[ "MIT" ]
null
null
null
.ipynb_checkpoints/minedunews-checkpoint.ipynb
wcmckee/ece-display
43ded4219ba34f51370e82c742eed0fc2cba7330
[ "MIT" ]
null
null
null
.ipynb_checkpoints/minedunews-checkpoint.ipynb
wcmckee/ece-display
43ded4219ba34f51370e82c742eed0fc2cba7330
[ "MIT" ]
null
null
null
27.372832
263
0.54461
[ [ [ "Ministry of Education News\n\nScript to parse news on education.govt.nz", "_____no_output_____" ] ], [ [ "import bs4\nimport requests\nimport os\nimport mammoth", "_____no_output_____" ], [ "opnewsfo = ('/media/removable/lemonyellow/educ/www.education.govt.nz/news')", "_____no_output_____" ], [ "osliz = os.listdir(opnewsfo)", "_____no_output_____" ], [ "osliz", "_____no_output_____" ], [ "osrem = osliz.remove('index.html')", "_____no_output_____" ], [ "osrem", "_____no_output_____" ], [ "for repoz in osliz:\n #print repoz\n indef = os.listdir(opnewsfo + '/' + repoz + '/')\n for ind in indef:\n opso = open(opnewsfo + '/' + repoz + '/' + ind, 'r')\n\n souprep = bs4.BeautifulSoup(opso)\n #print souprep\n #Instead of just getting class I need it to return specific \n #class - intro./\n \n for link in souprep.find_all('p', class_=\"intro\"):\n print link\n #print(link.get('class')):\n # print (link.get('class'))\n #link = souprep.find_all('p')\n #print(link.('class=\"intro\"'))\n \n #print souprep.prettify\n #print souprep.text", "<p class=\"intro\">\n 1815 schools are now connected to the government-funded internet Managed Network. \n </p>\n<p class=\"intro\">\n The Ministry of Education is working with our partners to implement new Vulnerable Children Act safety checking regulations, in force from 1 July 2015.\n </p>\n<p class=\"intro\">\n The latest Better Public Service (BPS) target results show that child participation in quality early childhood education (ECE) and NCEA level 2 achievement rates are at record highs.\n </p>\n<p class=\"intro\">\nTeacher enthusiasm for the Teacher-led Innovation Fund is huge. 40 projects, worth $2.7 million, involving 78 schools, have been funded from the first application round.\n</p>\n<p class=\"intro\">\n Modern, multi-purpose and transportable classrooms will soon be in use in many New Zealand schools.\n </p>\n<p class=\"intro\">\n The experience, skill and dedication of New Zealand teachers have been highlighted in new international data. \n </p>\n<p class=\"intro\">\n There has been some reporting about the condition of some of the buildings at Bay of Islands College. The full facts haven’t been reported, so it’s important to put them on the table. \n </p>\n<p class=\"intro\">\n Finalists in the 2015 Prime Minister’s Education Excellence Awards have been announced, and judges are currently visiting them, to determine the eventual winners.\n </p>\n" ], [ "with open(\"/home/wcmckee/Downloads/test.docx\", \"r\") as docx_file:\n result = mammoth.extract_raw_text(docx_file)\n text = result.value # The raw text\n messages = result.messages # Any messages", "_____no_output_____" ], [ "import bs4", "_____no_output_____" ], [ "soudocx = bs4.BeautifulSoup(html)", "_____no_output_____" ], [ "soupnop = soudocx.findAll('p')[1:]", "_____no_output_____" ], [ "for sonp in soupnop:\n print sonp.text", "Senior Data Analyst – Data Quality (ELI)\nEvidence Data and Knowledge\nDevelop, implement and run processes for checking and correcting early childhood service information in the Early Learning Information system (ELI). 
Improve data quality processes to ensure they reflect changes to Ministry policy and data needs.\nReports to Manager: Data Collection Unit\nStaff: no staff\nWhat \nOur Purpose\n \n \nLift aspiration, raise educational achievement for every New Zealander\nWhy \n \nOur Vision\n \n \nEvery New Zealander:\n \n \n•Is strong in their national and cultural identity\n \n \n•Aspires for themselves and their children to achieve more\n \n \n•Has the choice and opportunity to be the best they can be\n \n \n•Is an active participant and citizen in creating a strong civil society\n \n \n•Is productive, valued and competitive in the world\n \n \nNew Zealand and New Zealanders lead globally\nHow \n \nOur Behaviours:\n \n \n•We get the job done\n \n \n• We are respectful, we listen, we learn\n \n \n• We back ourselves and others to win\n \n \n• We work together for maximum impact\n \n \nGreat results are our bottom line\nSenior Data Analyst – Data Quality ELIEvidence Data and Knowledge\nThe Senior Data Analyst needs to have strong working relationships with members of the Collection Team, ECE Analysis Team; Early Years, Parents and Whānau group; and ministry employees working with ELI. As well as the following external relationships:\n. \nSenior Data Analyst – Data Quality ELIEvidence Data and Knowledge\n Action oriented\nWritten communication\nInterpersonal Savvy\nProblem solving\nPerspective\nManaging and measuring work\nTātai Pou \nDemonstration of Tātai Pou competencies at least a ‘developing’ level: \nCustomer focus \n" ] ] ]
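[ [ "The scraping above relies on `find_all('p', class_=\"intro\")`. As a small, self-contained sketch of the same idea, a CSS selector via `select()` does the same job; the HTML string here is illustrative, not taken from education.govt.nz.", "_____no_output_____" ] ], [ [ "import bs4\n\nsample_html = '<html><body><p class=\"intro\">Example intro text.</p><p>Other text.</p></body></html>'\nsoup = bs4.BeautifulSoup(sample_html, 'html.parser')\nfor para in soup.select('p.intro'):\n    print(para.get_text(strip=True))", "_____no_output_____" ] ]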
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e72703e96e2ae66b26f4e254a43a3d78e5cee379
96,816
ipynb
Jupyter Notebook
getting-started/02_Working_with_Python.ipynb
UConn-Cooper/engineering-dynamics
d89a591321634905c1c0e3522a3a9f7aab5abbd3
[ "BSD-3-Clause" ]
1
2021-02-16T23:51:30.000Z
2021-02-16T23:51:30.000Z
getting-started/02_Working_with_Python.ipynb
UConn-Cooper/engineering-dynamics
d89a591321634905c1c0e3522a3a9f7aab5abbd3
[ "BSD-3-Clause" ]
2
2021-02-16T01:25:06.000Z
2021-06-24T20:29:00.000Z
getting-started/02_Working_with_Python.ipynb
UConn-Cooper/engineering-dynamics
d89a591321634905c1c0e3522a3a9f7aab5abbd3
[ "BSD-3-Clause" ]
3
2021-02-16T01:20:32.000Z
2021-09-02T19:19:58.000Z
53.667406
27,696
0.756084
[ [ [ "# Working with Python\n__Content modified under Creative Commons Attribution license CC-BY 4.0,\ncode under BSD 3-Clause License © 2020 R.C. Cooper__\n\n", "_____no_output_____" ], [ "## Good coding habits\n### naming folders and files", "_____no_output_____" ], [ "## [Stanford file naming best practices](https://library.stanford.edu/research/data-management-services/data-best-practices/best-practices-file-naming)\n\n1. Include information to distinguish file name e.g. project name, objective of function, name/initials, type of data, conditions, version of file, \n2. if using dates, use YYYYMMDD, so the computer organizes by year, then month, then day\n3. avoid special characters e.g. !, #, \\$, ...\n4. avoid using spaces if not necessary, some programs consider a space as a break in code use dashes `-` or underscores `_` or CamelCase", "_____no_output_____" ], [ "## Commenting your code\n\nIts important to comment your code \n\n- what are variable's units,\n\n- what the is the function supposed to do, \n\n- etc. \n", "_____no_output_____" ] ], [ [ "def code(i):\n '''Example of bad variable names and bad function name'''\n m=1\n for j in range(1,i+1):\n m*=j;\n return m", "_____no_output_____" ], [ "code(10)", "_____no_output_____" ] ], [ [ "## Choose variable names that describe the variable\n\nYou might not have recognized that `code(i)` is meant to calculate the [factorial of a number](https://en.wikipedia.org/wiki/Factorial), \n\n$N!= N*(N-1)*(N-2)*(N-3)*...3*2*1$. \n\nFor example, \n\n- 4! = 24\n\n- 5! = 120\n\n- 10! = 3,628,800\n\nIn the next block, we have rewritten `code` so the output is unchanged, but another user can read the code *and* help debug if there is an issue. \n\nA function is a compact collection of code that executes some action on its arguments. \n\nOnce *defined*, you can *call* a function as many times as you want. When we *call* a function, we execute all the code inside the function. The result of the execution depends on the *definition* of the function and on the values that are *passed* into it as *arguments*. Functions might or might not *return* values in their last operation. \n\nThe syntax for defining custom Python functions is:\n\n```python\ndef function_name(arg_1, arg_2, ...):\n '''\n docstring: description of the function\n '''\n <body of the function>\n```\n\nThe **docstring** of a function is a message from the programmer documenting what he or she built. Docstrings should be descriptive and concise. They are important because they explain (or remind) the intended use of the function to the users. You can later access the docstring of a function using the function `help()` and passing the name of the function. If you are in a notebook, you can also prepend a question mark `'?'` before the name of the function and run the cell to display the information of a function. \n\nTry it!", "_____no_output_____" ] ], [ [ "def factorial_function(input_value):\n '''Good variable names and better help documentation\n \n factorial_function(input_number): calculates the factorial of the input_number\n where the factorial is defined as N*(N-1)*(N-2)*...*3*2*1\n \n Arguments\n ---------\n input_value: an integer >= 0\n \n Returns\n -------\n factorial_output: the factorial of input_value'''\n \n factorial_output=1 # define 0! 
= 1\n for factor in range(1,input_value+1):\n factorial_output*=factor; # mutliply factorial_output by 1*2*3*...*N (factor)\n return factorial_output\n ", "_____no_output_____" ], [ "factorial_function(4)", "_____no_output_____" ] ], [ [ "Defining the function with descriptive variable names and inputs helps to make the function much more useable. \n\nConsider the structure of a Python function:\n\n```python\ndef factorial_function(input_value):\n```\nThis first line declares that we are `def`-ining a function that is named `factorial_function`. The inputs to the line are given inside the parantheses, `(input_value)`. We can define as many inputs as we want and even assign default values. \n\n```python\n '''Good variable names and better help documentation\n \n factorial_function(input_number): calculates the factorial of the input_number\n where the factorial is defined as N*(N-1)*(N-2)*...*3*2*1'''\n```\nThe next 4 lines define a help documentation that can be accessed with in a couple ways:\n\n1. `?factorial_function`\n\n2. `factorial_function?`\n\n3. `help(factorial_function)`\n\n\n", "_____no_output_____" ] ], [ [ "factorial_function?", "_____no_output_____" ] ], [ [ "```python\n factorial_output=1 # define 0! = 1\n```\n\nThis line sets the variable `factorial_output` to 1. In the next 2 lines we update this value based upon the mathematical formula we want to use. In this case, its $1*1*2*3*...*(N-1)*N$\n\n```python\n for factor in range(1,input_value+1):\n factorial_output*=factor; # mutliply m by 1*2*3*...*N (factor)\n``` \n\nThese two lines perform the computation that we set out to do. The `for`-loop is going to start at 1 and end at our input value. For each step in the `for`-loop, we will mulitply the factorial_output by the factor. So when we calculate 4!, the loop updates factorial_output 4 times:\n\n1. i=1: factorial_output = $1*1=1$\n\n2. i=2: factorial_output = $1*1*2=2$\n\n3. i=3: factorial_output = $1*1*2*3=6$\n\n4. i=4: factorial_output = $1*1*2*3*4=24$\n\n\n\n```python\n return factorial_output\n```\n\nThis final line in our function returns the calculated value, `factorial_output`. We can also return as many values as necessary on this line, \n\nfor example, if we had variables: `value_1`, `value_2`, and `value_3` we could return all three as such,\n\n```python\n return value_1,value_2,value_3\n```", "_____no_output_____" ], [ "## Play with NumPy Arrays\n\n\nIn engineering applications, most computing situations benefit from using *arrays*: they are sequences of data all of the _same type_. They behave a lot like lists, except for the constraint in the type of their elements. There is a huge efficiency advantage when you know that all elements of a sequence are of the same type—so equivalent methods for arrays execute a lot faster than those for lists.\n\nThe Python language is expanded for special applications, like scientific computing, with **libraries**. The most important library in science and engineering is **NumPy**, providing the _n-dimensional array_ data structure (a.k.a, `ndarray`) and a wealth of functions, operations and algorithms for efficient linear-algebra computations.\n\nIn this lesson, you'll start playing with NumPy arrays and discover their power. You'll also meet another widely loved library: **Matplotlib**, for creating two-dimensional plots of data.", "_____no_output_____" ], [ "## Importing libraries\n\nFirst, a word on importing libraries to expand your running Python session. 
Because libraries are large collections of code and are for special purposes, they are not loaded automatically when you launch Python (or IPython, or Jupyter). You have to import a library using the `import` command. For example, to import **NumPy**, with all its linear-algebra goodness, we enter:\n\n```python\nimport numpy as np\n```\n\nOnce you execute that command in a code cell, you can call any NumPy function using the dot notation, prepending the library name. For example, some commonly used functions are:\n\n* [`np.linspace()`](https://docs.scipy.org/doc/numpy/reference/generated/np.linspace.html)\n* [`np.ones()`](https://docs.scipy.org/doc/numpy/reference/generated/np.ones.html#np.ones)\n* [`np.zeros()`](https://docs.scipy.org/doc/numpy/reference/generated/np.zeros.html#np.zeros)\n* [`np.empty()`](https://docs.scipy.org/doc/numpy/reference/generated/np.empty.html#np.empty)\n* [`np.copy()`](https://docs.scipy.org/doc/numpy/reference/generated/np.copy.html#np.copy)\n\nFollow the links to explore the documentation for these very useful NumPy functions!", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "## Creating arrays\n\nTo create a NumPy array from an existing list of (homogeneous) numbers, we call **`np.array()`**, like this:", "_____no_output_____" ] ], [ [ "np.array([3, 5, 8, 17])", "_____no_output_____" ] ], [ [ "NumPy offers many [ways to create arrays](https://docs.scipy.org/doc/numpy/reference/routines.array-creation.html#routines-array-creation) in addition to this. We already mentioned some of them above. \n\nPlay with `np.ones()` and `np.zeros()`: they create arrays full of ones and zeros, respectively. We pass as an argument the number of array elements we want. ", "_____no_output_____" ] ], [ [ "np.ones(5)", "_____no_output_____" ], [ "np.zeros(3)", "_____no_output_____" ] ], [ [ "Another useful one: `np.arange()` gives an array of evenly spaced values in a defined interval. \n\n*Syntax:*\n\n`np.arange(start, stop, step)`\n\nwhere `start` by default is zero, `stop` is not inclusive, and the default\nfor `step` is one. Play with it!\n", "_____no_output_____" ] ], [ [ "np.arange(4)", "_____no_output_____" ], [ "np.arange(2, 6)", "_____no_output_____" ], [ "np.arange(2, 6, 2)", "_____no_output_____" ], [ "np.arange(2, 6, 0.5)", "_____no_output_____" ] ], [ [ "`np.linspace()` is similar to `np.arange()`, but uses number of samples instead of a step size. It returns an array with evenly spaced numbers over the specified interval. \n\n*Syntax:*\n\n`np.linspace(start, stop, num)`\n\n`stop` is included by default (it can be removed, read the docs), and `num` by default is 50. ", "_____no_output_____" ] ], [ [ "np.linspace(2.0, 3.0)", "_____no_output_____" ], [ "len(np.linspace(2.0, 3.0))", "_____no_output_____" ], [ "np.linspace(2.0, 3.0, 6)", "_____no_output_____" ], [ "np.linspace(-1, 1, 9)", "_____no_output_____" ] ], [ [ "## Array operations\n\nLet's assign some arrays to variable names and perform some operations with them.", "_____no_output_____" ] ], [ [ "x_array = np.linspace(-1, 1, 9)", "_____no_output_____" ] ], [ [ "Now that we've saved it with a variable name, we can do some computations with the array. E.g., take the square of every element of the array, in one go:", "_____no_output_____" ] ], [ [ "y_array = x_array**2\nprint(y_array)", "[1. 0.5625 0.25 0.0625 0. 0.0625 0.25 0.5625 1. 
]\n" ] ], [ [ "We can also take the square root of a positive array, using the `np.sqrt()` function:", "_____no_output_____" ] ], [ [ "z_array = np.sqrt(y_array)\nprint(z_array)", "[1. 0.75 0.5 0.25 0. 0.25 0.5 0.75 1. ]\n" ] ], [ [ "Now that we have different arrays `x_array`, `y_array` and `z_array`, we can do more computations, like add or multiply them. For example:", "_____no_output_____" ] ], [ [ "add_array = x_array + y_array \nprint(add_array)", "[ 0. -0.1875 -0.25 -0.1875 0. 0.3125 0.75 1.3125 2. ]\n" ] ], [ [ "Array addition is defined element-wise, like when adding two vectors (or matrices). Array multiplication is also element-wise:", "_____no_output_____" ] ], [ [ "mult_array = x_array * z_array\nprint(mult_array)", "[-1. -0.5625 -0.25 -0.0625 0. 0.0625 0.25 0.5625 1. ]\n" ] ], [ [ "We can also divide arrays, but you have to be careful not to divide by zero. This operation will result in a **`nan`** which stands for *Not a Number*. Python will still perform the division, but will tell us about the problem. \n\nLet's see how this might look:", "_____no_output_____" ] ], [ [ "x_array / y_array", "/opt/miniconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: RuntimeWarning: invalid value encountered in true_divide\n \"\"\"Entry point for launching an IPython kernel.\n" ] ], [ [ "## Multidimensional arrays\n\n### 2D arrays \n\nNumPy can create arrays of N dimensions. For example, a 2D array is like a matrix, and is created from a nested list as follows:", "_____no_output_____" ] ], [ [ "array_2d = np.array([[1, 2], [3, 4]])\nprint(array_2d)", "[[1 2]\n [3 4]]\n" ] ], [ [ "2D arrays can be added, subtracted, and multiplied:", "_____no_output_____" ] ], [ [ "X = np.array([[1, 2], [3, 4]])\nY = np.array([[1, -1], [0, 1]])", "_____no_output_____" ] ], [ [ "The addition of these two matrices works exactly as you would expect:", "_____no_output_____" ] ], [ [ "X + Y", "_____no_output_____" ] ], [ [ "What if we try to multiply arrays using the `'*'`operator?", "_____no_output_____" ] ], [ [ "X * Y", "_____no_output_____" ] ], [ [ "The multiplication using the `'*'` operator is element-wise. If we want to do matrix multiplication we use the `'@'` operator:", "_____no_output_____" ] ], [ [ "X @ Y", "_____no_output_____" ] ], [ [ "Or equivalently we can use `np.dot()`:", "_____no_output_____" ] ], [ [ "np.dot(X, Y)", "_____no_output_____" ] ], [ [ "### 3D arrays\n\nLet's create a 3D array by reshaping a 1D array. We can use [`np.reshape()`](https://docs.scipy.org/doc/numpy/reference/generated/np.reshape.html), where we pass the array we want to reshape and the shape we want to give it, i.e., the number of elements in each dimension. \n\n*Syntax*\n \n`np.reshape(array, newshape)`\n\nFor example:", "_____no_output_____" ] ], [ [ "a = np.arange(24)", "_____no_output_____" ], [ "a_3D = np.reshape(a, (2, 3, 4))\nprint(a_3D)", "[[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n\n [[12 13 14 15]\n [16 17 18 19]\n [20 21 22 23]]]\n" ] ], [ [ "We can check for the shape of a NumPy array using the function `np.shape()`:", "_____no_output_____" ] ], [ [ "np.shape(a_3D)", "_____no_output_____" ] ], [ [ "Visualizing the dimensions of the `a_3D` array can be tricky, so here is a diagram that will help you to understand how the dimensions are assigned: each dimension is shown as a coordinate axis. For a 3D array, on the \"x axis\", we have the sub-arrays that themselves are two-dimensional (matrices). We have two of these 2D sub-arrays, in this case; each one has 3 rows and 4 columns. 
Study this sketch carefully, while comparing with how the array `a_3D` is printed out above. \n\n<img src=\"../images/3d_array_sketch.png\" style=\"width: 400px;\"/> ", "_____no_output_____" ], [ "When we have multidimensional arrays, we can access slices of their elements by slicing on each dimension. This is one of the advantages of using arrays: we cannot do this with lists. \n\nLet's access some elements of our 2D array called `X`.", "_____no_output_____" ] ], [ [ "X", "_____no_output_____" ], [ "# Grab the element in the 1st row and 1st column \nX[0, 0]", "_____no_output_____" ], [ "# Grab the element in the 1st row and 2nd column \nX[0, 1]", "_____no_output_____" ] ], [ [ "##### Exercises:\n\nFrom the X array:\n\n1. Grab the 2nd element in the 1st column.\n2. Grab the 2nd element in the 2nd column.", "_____no_output_____" ], [ "Play with slicing on this array:", "_____no_output_____" ] ], [ [ "# Grab the 1st column\nX[:, 0]", "_____no_output_____" ] ], [ [ "When we don't specify the start and/or end point in the slicing, the symbol `':'` means \"all\". In the example above, we are telling NumPy that we want all the elements from the 0-th index in the second dimension (the first column).", "_____no_output_____" ] ], [ [ "# Grab the 1st row\nX[0, :]", "_____no_output_____" ] ], [ [ "##### Exercises:\n\nFrom the X array:\n\n1. Grab the 2nd column.\n2. Grab the 2nd row.", "_____no_output_____" ], [ "Let's practice with a 3D array. ", "_____no_output_____" ] ], [ [ "a_3D", "_____no_output_____" ] ], [ [ "If we want to grab the first column of both matrices in our `a_3D` array, we do:", "_____no_output_____" ] ], [ [ "a_3D[:, :, 0]", "_____no_output_____" ] ], [ [ "The line above is telling NumPy that we want:\n\n* first `':'` : from the first dimension, grab all the elements (2 matrices).\n* second `':'`: from the second dimension, grab all the elements (all the rows).\n* `'0'` : from the third dimension, grab the first element (first column).\n\nIf we want the first 2 elements of the first column of both matrices: ", "_____no_output_____" ] ], [ [ "a_3D[:, 0:2, 0]", "_____no_output_____" ] ], [ [ "Below, from the first matrix in our `a_3D` array, we will grab the two middle elements (5,6):", "_____no_output_____" ] ], [ [ "a_3D[0, 1, 1:3]", "_____no_output_____" ] ], [ [ "##### Exercises:\n\nFrom the array named `a_3D`: \n\n1. Grab the two middle elements (17, 18) from the second matrix.\n2. Grab the last row from both matrices.\n3. Grab the elements of the 1st matrix that exclude the first row and the first column. \n4. Grab the elements of the 2nd matrix that exclude the last row and the last column. ", "_____no_output_____" ], [ "## NumPy == Fast and Clean! \n\nWhen we are working with numbers, arrays are a better option because the NumPy library has built-in functions that are optimized, and therefore faster than vanilla Python. Especially if we have big arrays. Besides, using NumPy arrays and exploiting their properties makes our code more readable.\n\nFor example, if we wanted to add element-wise the elements of 2 lists, we need to do it with a `for` statement. 
If we want to add two NumPy arrays, we just use the addition `'+'` symbol!\n\nBelow, we will add two lists and two arrays (with random elements) and we'll compare the time it takes to compute each addition.", "_____no_output_____" ], [ "### Element-wise sum of a Python list\n\nUsing the Python library [`random`](https://docs.python.org/3/library/random.html), we will generate two lists with 100 pseudo-random elements in the range [0,100), with no numbers repeated.", "_____no_output_____" ] ], [ [ "#import random library\nimport random", "_____no_output_____" ], [ "lst_1 = random.sample(range(100), 100)\nlst_2 = random.sample(range(100), 100)", "_____no_output_____" ], [ "#print first 10 elements\nprint(lst_1[0:10])\nprint(lst_2[0:10])", "[78, 71, 32, 12, 88, 84, 7, 49, 93, 13]\n[24, 70, 98, 72, 17, 48, 82, 55, 87, 54]\n" ] ], [ [ "We need to write a `for` statement, appending the result of the element-wise sum into a new list we call `res_lst`. \n\nFor timing, we can use the IPython \"magic\" `%%time`. Writing the command `%%time` at the beginning of the code cell will give us the time it takes to execute all the code in that cell. ", "_____no_output_____" ] ], [ [ "%%time\nres_lst = []\nfor i in range(100):\n    res_lst.append(lst_1[i] + lst_2[i])", "CPU times: user 30 µs, sys: 8 µs, total: 38 µs\nWall time: 41 µs\n" ], [ "print(res_lst[0:10])", "[102, 141, 130, 84, 105, 132, 89, 104, 180, 67]\n" ] ], [ [ "### Element-wise sum of NumPy arrays\n\nIn this case, we generate arrays with random integers using the NumPy function [`np.random.randint()`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/np.random.randint.html). The arrays we generate with this function are not going to be like the lists: in this case we'll have 100 elements in the range [0, 100) but they can repeat. Our goal is to compare the time it takes to compute the addition of a _list_ or an _array_ of numbers, so all that matters is that the arrays and the lists are of the same length and type (integers).", "_____no_output_____" ] ], [ [ "arr_1 = np.random.randint(0, 100, size=100)\narr_2 = np.random.randint(0, 100, size=100)", "_____no_output_____" ], [ "#print first 10 elements\nprint(arr_1[0:10])\nprint(arr_2[0:10])", "[ 3 67 98 94 87 71 28 60 12 24]\n[29 59  1 45 85 92 17  0 15 58]\n" ] ], [ [ "Now we can use the `%%time` cell magic, again, to see how long it takes NumPy to compute the element-wise sum.", "_____no_output_____" ] ], [ [ "%%time\narr_res = arr_1 + arr_2", "CPU times: user 27 µs, sys: 7 µs, total: 34 µs\nWall time: 39.6 µs\n" ] ], [ [ "Notice that in the case of arrays, the code not only is more readable (just one line of code), but it is also faster than with lists. This time advantage will be larger with bigger arrays/lists. \n\n(Your timing results may vary from the ones we show in this notebook, because you will be computing on a different machine.)", "_____no_output_____" ], [ "##### Exercise\n\n1. Try the comparison between lists and arrays, using bigger arrays; for example, of size 10,000. \n2. Repeat the analysis, but now computing the operation that raises each element of an array/list to the power two. Use arrays of 10,000 elements. (One possible setup for this exercise is sketched at the end of this lesson.) ", "_____no_output_____" ], [ "## Time to Plot\n\nYou will love the Python library **Matplotlib**! You'll learn here about its module `pyplot`, which makes line plots. \n\nWe need some data to plot. Let's define a NumPy array, compute derived data using its square, cube and square root (element-wise), and plot these values with the original array in the x-axis. ", "_____no_output_____" ] ], [ [ "xarray = np.linspace(0, 2, 41)\nprint(xarray)", "[0.   0.05 0.1  0.15 0.2  0.25 0.3  0.35 0.4  0.45 0.5  0.55 0.6  0.65\n 0.7  0.75 0.8  0.85 0.9  0.95 1.   1.05 1.1  1.15 1.2  1.25 1.3  1.35\n 1.4  1.45 1.5  1.55 1.6  1.65 1.7  1.75 1.8  1.85 1.9  1.95 2.  ]\n" ], [ "pow2 = xarray**2\npow3 = xarray**3\npow_half = np.sqrt(xarray)", "_____no_output_____" ] ], [ [ "## Introduction to plotting\n\nTo plot the resulting arrays as a function of the original one (`xarray`) in the x-axis, we need to import the module `pyplot` from **Matplotlib**.\n\n", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Set up default plotting parameters\n\nThe default Matplotlib fonts and linewidths are a little small. Pixels are free, so the next two lines increase the fontsize and linewidth", "_____no_output_____" ] ], [ [ "plt.rcParams.update({'font.size': 22})\nplt.rcParams['lines.linewidth'] = 3", "_____no_output_____" ] ], [ [ "The line `%matplotlib inline` is an instruction to get the output of plotting commands displayed \"inline\" inside the notebook. Other options for how to deal with plot output are available, but not of interest to you right now. ", "_____no_output_____" ], [ "We'll use the **pyplot** `plt.plot()` function, specifying the line color (`'k'` for black) and line style (`'-'`, `'--'` and `':'` for continuous, dashed and dotted line), and giving each line a label. Note that the values for `color`, `linestyle` and `label` are given in quotes.", "_____no_output_____" ] ], [ [ "#Plot x^2\nplt.plot(xarray, pow2, color='k', linestyle='-', label='square')\n#Plot x^3\nplt.plot(xarray, pow3, color='k', linestyle='--', label='cube')\n#Plot sqrt(x)\nplt.plot(xarray, pow_half, color='k', linestyle=':', label='square root')\n#Plot the legends in the best location\nplt.legend(loc='best')", "_____no_output_____" ] ], [ [ "To illustrate other features, we will plot the same data, but varying the colors instead of the line style. We'll also use LaTeX syntax to write formulas in the labels. If you want to know more about LaTeX syntax, there is a [quick guide to LaTeX](https://users.dickinson.edu/~richesod/latex/latexcheatsheet.pdf) available online.\n\nAdding a semicolon (`';'`) to the last line in the plotting code block prevents that ugly output, like `<matplotlib.legend.Legend at 0x7f8c83cc7898>`. Try it.", "_____no_output_____" ] ], [ [ "#Plot x^2\nplt.plot(xarray, pow2, color='red', linestyle='-', label='$x^2$')\n#Plot x^3\nplt.plot(xarray, pow3, color='green', linestyle='-', label='$x^3$')\n#Plot sqrt(x)\nplt.plot(xarray, pow_half, color='blue', linestyle='-', label='$\\sqrt{x}$')\n#Plot the legends in the best location\nplt.legend(loc='best'); ", "_____no_output_____" ] ], [ [ "That's very nice! By now, you are probably imagining all the great stuff you can do with Jupyter notebooks, Python and its scientific libraries **NumPy** and **Matplotlib**. We just saw an introduction to plotting but we will keep learning about the power of **Matplotlib** in the next lesson. \n\nIf you are curious, you can explore all the beautiful plots you can make by browsing the [Matplotlib gallery](http://matplotlib.org/gallery.html).", "_____no_output_____" ], [ "##### Exercise:\n\nPick two different operations to apply to the `xarray` and plot the resulting data in the same plot. ", "_____no_output_____" ], [ "## What we've learned\n\n* Good coding habits and file naming\n* How to define a function and return outputs\n* How to import libraries\n* Multidimensional arrays using NumPy\n* Accessing values and slicing in NumPy arrays\n* `%%time` magic to time cell execution\n* Performance comparison: lists vs NumPy arrays\n* Basic plotting with `pyplot`", "_____no_output_____" ], [ "## References\n\n1. [Best practices for file naming](https://library.stanford.edu/research/data-management-services/data-best-practices/best-practices-file-naming). Stanford Libraries\n\n2. _Effective Computation in Physics: Field Guide to Research with Python_ (2015). Anthony Scopatz & Kathryn D. Huff. O'Reilly Media, Inc.\n\n3. _Numerical Python: A Practical Techniques Approach for Industry_ (2015). Robert Johansson. Apress. \n\n4. [\"The world of Jupyter\"—a tutorial](https://github.com/barbagroup/jupyter-tutorial). Lorena A. Barba - 2016", "_____no_output_____" ], [ "## Problems\n\n1. Create a function called `sincos(x)` that returns two arrays, `sinx` and `cosx`, containing the sine and cosine of the input array, `x`. \n\n    a. Document your function with a help file in `'''help'''`\n    \n    b. Use your function to plot sin(x) and cos(x) for x=$0..2\\pi$\n\n\n", "_____no_output_____" ], [ "4. The following linear interpolation function has an error. It is supposed to return y(x) given the two points $p_1=[x_1,~y_1]$ and $p_2=[x_2,~y_2]$. Currently, it just returns an error.", "_____no_output_____" ] ], [ [ "def linInterp(x,p1,p2):\n    '''linear interplation function\n    return y(x) given the two endpoints \n    p1 = np.array([x1,y1])\n    and\n    p2 = np.array([x2,y2])'''\n    slope = (p2[2]-p1[2])/(p2[1]-p1[1])\n    \n    return p1[2]+slope*(x - p1[1])\n    ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
e7271ae55fce4ddc101a57f9f1901e510d9d0d60
458,655
ipynb
Jupyter Notebook
dataAnalysis.ipynb
abidshafee/dataAnalysis
69b8a33d220cffeeaf38eb00236393980d94f8f6
[ "Apache-2.0" ]
null
null
null
dataAnalysis.ipynb
abidshafee/dataAnalysis
69b8a33d220cffeeaf38eb00236393980d94f8f6
[ "Apache-2.0" ]
null
null
null
dataAnalysis.ipynb
abidshafee/dataAnalysis
69b8a33d220cffeeaf38eb00236393980d94f8f6
[ "Apache-2.0" ]
null
null
null
95.973007
170,878
0.727117
[ [ [ "<a href=\"https://colab.research.google.com/github/abidshafee/dataAnalysis/blob/master/dataAnalysis.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Data Analysis Mockup Project\n## What is Data Analysis\n#### It is a process of **Inspecting**, **Cleaning**, **transforming**, and **Modeling** Data with the goal of discovering useful information, informing conclusion, and supporting decision-makiing.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n# import datetime as dt\nimport seaborn as sns\n# import pandas.util.testing as tm", "/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n" ], [ "from google.colab import files\nuploaded = files.upload()", "_____no_output_____" ], [ "df = pd.read_csv('superstore_dataset2011-2015.csv', encoding = 'unicode_escape', index_col='Row ID', parse_dates=['Order Date', 'Ship Date'])\n# parsing Date Column as python Datetime formate", "_____no_output_____" ], [ "df.head(2)", "_____no_output_____" ], [ "# Converting Order Date column to pandas datetime object\n# pd.to_datetime(df['Order Date'])", "_____no_output_____" ], [ "# Now lets create Year column accessing year from Order date\ndf['Year'] = df['Order Date'].dt.year", "_____no_output_____" ], [ "df.head(1)", "_____no_output_____" ], [ "df.describe(include='all')", "_____no_output_____" ], [ "df['Category'].value_counts()", "_____no_output_____" ], [ "# df.loc[:, df['Category'] == 'Furniture']\ndf[['Category']]", "_____no_output_____" ], [ "# I don't know what it is doing!\n# dict(df)", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 51290 entries, 42433 to 36388\nData columns (total 24 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Order ID 51290 non-null object \n 1 Order Date 51290 non-null datetime64[ns]\n 2 Ship Date 51290 non-null datetime64[ns]\n 3 Ship Mode 51290 non-null object \n 4 Customer ID 51290 non-null object \n 5 Customer Name 51290 non-null object \n 6 Segment 51290 non-null object \n 7 City 51290 non-null object \n 8 State 51290 non-null object \n 9 Country 51290 non-null object \n 10 Postal Code 9994 non-null float64 \n 11 Market 51290 non-null object \n 12 Region 51290 non-null object \n 13 Product ID 51290 non-null object \n 14 Category 51290 non-null object \n 15 Sub-Category 51290 non-null object \n 16 Product Name 51290 non-null object \n 17 Sales 51290 non-null float64 \n 18 Quantity 51290 non-null int64 \n 19 Discount 51290 non-null float64 \n 20 Profit 51290 non-null float64 \n 21 Shipping Cost 51290 non-null float64 \n 22 Order Priority 51290 non-null object \n 23 Year 51290 non-null int64 \ndtypes: datetime64[ns](2), float64(5), int64(2), object(15)\nmemory usage: 9.8+ MB\n" ], [ "df.isnull().sum()", "_____no_output_____" ] ], [ [ "isnull method by default accept axis argument = 0, means axis=0, it indicates any operation through the row direction", "_____no_output_____" ] ], [ [ "df[df['Postal Code'].isnull()]\n# this will return number of all rows where Postal Code is null", "_____no_output_____" ], [ "df['Ship Mode'].value_counts()", "_____no_output_____" ], [ "df['Country'].unique() #.tolist()", "_____no_output_____" ], [ "df['Country'].value_counts()", 
"_____no_output_____" ] ], [ [ "# Numerical Analysis and Visualization", "_____no_output_____" ] ], [ [ "df['Sales'].describe()", "_____no_output_____" ] ], [ [ "### Boxplot", "_____no_output_____" ] ], [ [ "cln_df = df.dropna(how='any')\n# df['Sales'].plot(kind='box', vert=False, figsize=(14, 5))\ncln_df['Sales'].plot(kind='box', vert=False, figsize=(14, 5))", "_____no_output_____" ], [ "df['Profit'].describe()", "_____no_output_____" ], [ "cln_df.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 9994 entries, 34662 to 36388\nData columns (total 24 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Order ID 9994 non-null object \n 1 Order Date 9994 non-null datetime64[ns]\n 2 Ship Date 9994 non-null datetime64[ns]\n 3 Ship Mode 9994 non-null object \n 4 Customer ID 9994 non-null object \n 5 Customer Name 9994 non-null object \n 6 Segment 9994 non-null object \n 7 City 9994 non-null object \n 8 State 9994 non-null object \n 9 Country 9994 non-null object \n 10 Postal Code 9994 non-null float64 \n 11 Market 9994 non-null object \n 12 Region 9994 non-null object \n 13 Product ID 9994 non-null object \n 14 Category 9994 non-null object \n 15 Sub-Category 9994 non-null object \n 16 Product Name 9994 non-null object \n 17 Sales 9994 non-null float64 \n 18 Quantity 9994 non-null int64 \n 19 Discount 9994 non-null float64 \n 20 Profit 9994 non-null float64 \n 21 Shipping Cost 9994 non-null float64 \n 22 Order Priority 9994 non-null object \n 23 Year 9994 non-null int64 \ndtypes: datetime64[ns](2), float64(5), int64(2), object(15)\nmemory usage: 1.9+ MB\n" ], [ "df['Profit'].mean()", "_____no_output_____" ] ], [ [ "### Distribution Plot", "_____no_output_____" ] ], [ [ "sns.distplot(df['Profit'])", "_____no_output_____" ] ], [ [ "### Density Plot", "_____no_output_____" ] ], [ [ "var = df['Profit'].plot(kind='density', figsize=(13, 7))\nvar.set_xlabel('Raw ID')\nvar.set_ylabel('Profit')\nvar.axvline(df['Profit'].mean(), color='green')\nvar.axvline(df['Profit'].median(), color='red')", "_____no_output_____" ], [ "var2 = df['Sales'].plot(kind='density', figsize=(13, 8))\nvar2.set_xlabel('Raw ID')\nvar2.set_ylabel('Sales')\nvar2.axvline(df['Sales'].mean(), color='green')\nvar2.axvline(df['Sales'].median(), color='red')", "_____no_output_____" ] ], [ [ "Inabove way we plotting directly using pandas ", "_____no_output_____" ] ], [ [ "plt.scatter(df['Sales'], df['Profit'])", "_____no_output_____" ], [ "df['Category'].value_counts().plot(kind='pie', figsize=(8,8))", "_____no_output_____" ], [ "df['Category'].value_counts().plot(kind='bar')", "_____no_output_____" ], [ "corr = df.corr()\ncorr", "_____no_output_____" ], [ "# sns.heatmap(df)", "_____no_output_____" ], [ "corr = df.corr()\nfig = plt.figure(figsize=(11,11))\nax = plt.subplot()\nax.matshow(corr)\n\n# fig = plt.figure(figsize=(11,11))", "_____no_output_____" ], [ "fig = plt.figure(figsize=(11,11))\nplt.matshow(corr, cmap='RdBu', fignum=fig.number)\nplt.xticks(range(len(corr.columns)), corr.columns, rotation='vertical')\nplt.yticks(range(len(corr.columns)), corr.columns)\nplt.show()\n\n# Dark blue indicates values are highly positively corelated\n# Dark Red indicates values are highly negatively corelated", "_____no_output_____" ], [ "# Slicing Data Frame\n# Accessing all numeric column\nsldf = df.iloc[:, 17:].head()\n# ax = sns.heatmap(sldf)\nsldf", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 51290 entries, 42433 to 36388\nData columns (total 24 columns):\n 
# Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Order ID 51290 non-null object \n 1 Order Date 51290 non-null datetime64[ns]\n 2 Ship Date 51290 non-null datetime64[ns]\n 3 Ship Mode 51290 non-null object \n 4 Customer ID 51290 non-null object \n 5 Customer Name 51290 non-null object \n 6 Segment 51290 non-null object \n 7 City 51290 non-null object \n 8 State 51290 non-null object \n 9 Country 51290 non-null object \n 10 Postal Code 9994 non-null float64 \n 11 Market 51290 non-null object \n 12 Region 51290 non-null object \n 13 Product ID 51290 non-null object \n 14 Category 51290 non-null object \n 15 Sub-Category 51290 non-null object \n 16 Product Name 51290 non-null object \n 17 Sales 51290 non-null float64 \n 18 Quantity 51290 non-null int64 \n 19 Discount 51290 non-null float64 \n 20 Profit 51290 non-null float64 \n 21 Shipping Cost 51290 non-null float64 \n 22 Order Priority 51290 non-null object \n 23 Year 51290 non-null int64 \ndtypes: datetime64[ns](2), float64(5), int64(2), object(15)\nmemory usage: 9.8+ MB\n" ], [ "cln_df.head()", "_____no_output_____" ], [ "# cln_df['Region'].value_counts().plot(kind='bar')\n\nsouth_region = cln_df.loc[df['Region']=='South']\nsouth_region.head(1)", "_____no_output_____" ] ], [ [ "### Number of unique values in ['Region'] column", "_____no_output_____" ] ], [ [ "cln_df['Region'].value_counts().plot(kind='bar', color='red')", "_____no_output_____" ], [ "len(south_region)", "_____no_output_____" ], [ "south_region = len(cln_df.loc[cln_df['Region'] == 'South'])\ncentral_region = len(cln_df.loc[cln_df['Region'] == 'Central'])\neast_region = len(cln_df.loc[cln_df['Region'] == 'East'])\nwest_region = len(cln_df.loc[cln_df['Region'] == 'West'])", "_____no_output_____" ], [ "print(f\"Parcent of Sales in West Region: {round((west_region/(west_region+east_region+central_region+south_region)*100), 2)} % - Number of Sales: {west_region}\")\nprint(f\"Parcent of Sales in East Region: {round((east_region/(west_region+east_region+central_region+south_region)*100),2)} % - Number of Sales: {east_region}\")\nprint(f\"Parcent of Sales in Central Region: {round((central_region/(west_region+east_region+central_region+south_region)*100), 2)} % - Number of Sales: {central_region}\")\nprint(f\"Parcent of Sales in South Region: {round((south_region/(west_region+east_region+central_region+south_region)*100), 2)} % - Number of Sales: {south_region}\")", "Parcent of Sales in West Region: 32.05 % - Number of Sales: 3203\nParcent of Sales in East Region: 28.5 % - Number of Sales: 2848\nParcent of Sales in Central Region: 23.24 % - Number of Sales: 2323\nParcent of Sales in South Region: 16.21 % - Number of Sales: 1620\n" ], [ "cln_df['Discount']*=2", "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "cln_df[['Discount']].head()", "_____no_output_____" ] ], [ [ "## Inserting second dataset", "_____no_output_____" ] ], [ [ "from google.colab import files\nuploaded2= files.upload()", "_____no_output_____" ], [ "df2 = pd.read_csv('sales_data_sample.csv', encoding = 'unicode_escape')", "_____no_output_____" ], [ "df2.describe(include='all')", "_____no_output_____" ], [ "len(df2)", 
"_____no_output_____" ], [ "df2.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2823 entries, 0 to 2822\nData columns (total 25 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 ORDERNUMBER 2823 non-null int64 \n 1 QUANTITYORDERED 2823 non-null int64 \n 2 PRICEEACH 2823 non-null float64\n 3 ORDERLINENUMBER 2823 non-null int64 \n 4 SALES 2823 non-null float64\n 5 ORDERDATE 2823 non-null object \n 6 STATUS 2823 non-null object \n 7 QTR_ID 2823 non-null int64 \n 8 MONTH_ID 2823 non-null int64 \n 9 YEAR_ID 2823 non-null int64 \n 10 PRODUCTLINE 2823 non-null object \n 11 MSRP 2823 non-null int64 \n 12 PRODUCTCODE 2823 non-null object \n 13 CUSTOMERNAME 2823 non-null object \n 14 PHONE 2823 non-null object \n 15 ADDRESSLINE1 2823 non-null object \n 16 ADDRESSLINE2 302 non-null object \n 17 CITY 2823 non-null object \n 18 STATE 1337 non-null object \n 19 POSTALCODE 2747 non-null object \n 20 COUNTRY 2823 non-null object \n 21 TERRITORY 1749 non-null object \n 22 CONTACTLASTNAME 2823 non-null object \n 23 CONTACTFIRSTNAME 2823 non-null object \n 24 DEALSIZE 2823 non-null object \ndtypes: float64(2), int64(7), object(16)\nmemory usage: 551.5+ KB\n" ], [ "df2['PRODUCTLINE'].unique().tolist()", "_____no_output_____" ] ], [ [ "Generating Random Number Using Numpy and then plot those number using matplotlib ", "_____no_output_____" ] ], [ [ "x = np.linspace(0, 10, 500)\ny = np.cumsum(np.random.randn(500, 6), 0)", "_____no_output_____" ], [ "# len(y)\ntype(x)", "_____no_output_____" ], [ "len(x)", "_____no_output_____" ] ], [ [ "Ploting those Random Number", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(16,8))\nplt.plot(x, y)\nplt.grid()\nplt.legend('ABCDEF', ncol=2, loc='upper left')", "_____no_output_____" ], [ "cov = df2.corr()\ncov", "_____no_output_____" ], [ "df2.tail(2)", "_____no_output_____" ] ], [ [ "## Inserting Data using pandas loc[index] attributes\n#### length of df2 is 2823, 0 to 2822\nevery colum in dataframe represents aseries object which is one dimentional array", "_____no_output_____" ] ], [ [ "df2.loc[2823] = [1110909,\t49,\t101.00,\t14,\t52005.27,\t'',\t\n 'Shipped',\t4,\t10,\t2003,\t'Motorcycles',\t95,\t'S10_1678',\t\n 'Corporate Gift Ideas CA',\t'6505551386', '7734 Strong St.',\t'',\t\n 'San Francisco',\t'CA',\t'',\t'USA',\t'',\t'Abid', 'Shafekul',\t'Medium']", "_____no_output_____" ], [ "df2.loc[2823]", "_____no_output_____" ], [ "df2.tail()", "_____no_output_____" ], [ "from bokeh.plotting import figure, output_file, show", "_____no_output_____" ], [ "from bokeh.io import output_notebook", "_____no_output_____" ], [ "output_notebook()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7271f9c7d947f50a05af74cdc986700bc39dfb1
15,715
ipynb
Jupyter Notebook
.ipynb_checkpoints/Vol-Lamarie-Cheyenne-Copy1-checkpoint.ipynb
Apoorb/Hive-SDC
5db0228d6dff1e5162e6a21cb59eab9b61f4e357
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Vol-Lamarie-Cheyenne-Copy1-checkpoint.ipynb
Apoorb/Hive-SDC
5db0228d6dff1e5162e6a21cb59eab9b61f4e357
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Vol-Lamarie-Cheyenne-Copy1-checkpoint.ipynb
Apoorb/Hive-SDC
5db0228d6dff1e5162e6a21cb59eab9b61f4e357
[ "MIT" ]
null
null
null
28.315315
176
0.479478
[ [ [ "# Get 5 min volume for Feb 2018 from Unprocessed Speed", "_____no_output_____" ], [ " Created by: Apoorba Bibeka\n Date: March 20, 2019", "_____no_output_____" ], [ "## Import required modules", "_____no_output_____" ] ], [ [ "import getpass\nimport boto3\nimport pandas as pd\nfrom impala.dbapi import connect\nfrom impala.util import as_pandas\nfrom io import BytesIO as StringIO", "_____no_output_____" ] ], [ [ "## Connect to the database", "_____no_output_____" ] ], [ [ "username = \"[email protected]\"\npassword1 = getpass.getpass()", "········\n" ], [ "conn = connect(host=\"172.18.1.20\",auth_mechanism='PLAIN',port=10000,user=username, password=password1)", "_____no_output_____" ] ], [ [ "## Look at the device ids from \"wydot_speed_sensors_index\"", "_____no_output_____" ] ], [ [ "cursor = conn.cursor()\ncursor.execute('select * from wydot_speed_sensors_index')\nSSindex = as_pandas(cursor)\nRename1 ={x:x.split('wydot_speed_sensors_index.')[1] for x in SSindex}\nSSindex=SSindex.rename(index=str,columns=Rename1)\nprint(SSindex['deviceid'].values,SSindex.columns)", "(array([ 382, 383, 393, 394, 395, 396, 398, 400, 405, 407, 408,\n 411, 384, 482, 1075, 1084, 1100, 1134, 1145, 1153, 1167, 1219,\n 1231, 385, 1241, 1251, 1258, 1269, 1280, 1327, 1342, 1837, 1838,\n 1839, 386, 2020, 2032, 2049, 2070, 2079, 2090, 2146, 2147, 2178,\n 2191, 387, 2202, 2213, 2246, 2263, 2274, 2289, 2298, 2310, 2319,\n 2334, 388, 2346, 2359, 2372, 2383, 2395, 2409, 2421, 2433, 2445,\n 2578, 389, 2607, 2609, 2916, 3236, 3243, 3249, 3296, 3402, 3482,\n 3654, 390, 3897, 3899, 3901, 3903, 3905, 3907, 3909, 3911, 391],\n dtype=int64), Index([u'deviceid', u'lat_decimal', u'long_decimal', u'road_code', u'sitename',\n u'devicename', u'sensortype', u'public_route', u'gis_route',\n u'direction', u'milepost', u'sensor_loc', u'nearest_rwis', u'rwis',\n u'backup_rwis', u'2015_adt', u'vsl_id', u'eb_vsl', u'wb_vsl',\n u'horiz_d', u'horiz_i', u'vert_i', u'vert_d', u'notes'],\n dtype='object'))\n" ] ], [ [ "### Subset data for Lamarie & Cheyenne", "_____no_output_____" ] ], [ [ "SS2index=SSindex.loc[(SSindex['milepost']>=314)&(SSindex['milepost']<=360)]\nSS2index = SS2index[[\"deviceid\",\"milepost\",\"direction\",\"2015_adt\"]].copy()\nprint(SS2index['deviceid'].values)", "_____no_output_____" ] ], [ [ "## Get the index Data (314 <= Milepost <=360) from S3 bucket (Source: WyDoT Report)", "_____no_output_____" ] ], [ [ "#Data I got from a Report\nclient=boto3.client('s3')\nobj=client.get_object(Bucket='prod-sdc-tti-911061262852-us-east-1-bucket',Key=\"abibeka/uploaded_files/SpeedSen_MP_314_360.csv\")\nAB_SS=pd.read_csv(obj['Body'])\nAB_SS.head()", "_____no_output_____" ] ], [ [ "## Compare index from WyDOT Report and Database", "_____no_output_____" ] ], [ [ "# Return set with elements in either the set or ohter but not both\nprint(set(AB_SS['DEVICEID'].values) ^ set(SS2index['deviceid'].values))\n# Return intersection \nprint(set(AB_SS['DEVICEID'].values) & set(SS2index['deviceid'].values))", "set([])\nset([384, 385, 386, 387, 388, 389, 2310, 391, 393, 394, 395, 396, 2178, 2191, 2202, 3654, 1839, 2246, 3911, 2146, 2263, 2319, 390, 2213, 2274, 2147, 2289, 2298, 383])\n" ] ], [ [ "## Get relevant columns and Rows from the Index Data", "_____no_output_____" ] ], [ [ "SS2index=SS2index.sort_values(by=\"milepost\")\nSS2index=SS2index.reset_index(drop=True)\nSS2index.head()", "_____no_output_____" ] ], [ [ "## Initial Approach to get Volume Data (Not good - Can Aggregate Directly on Hive)", "_____no_output_____" ] ], [ [ 
"#create_query='''SELECT t1.controller, t1.mountain, t2.direction, t2.milepost, t2.2015_adt FROM \n#(SELECT * FROM wydot_speed_unprocessed WHERE mountain between '2018-02-01' AND '2018-02-28') t1\n#JOIN wydot_speed_sensors_index t2 \n#ON (t1.controller = t2.deviceid)'''\n\n#create_query='''SELECT t1.controller, t1.mountain, t2.direction, t2.milepost, t2.2015_adt FROM \n#(SELECT * FROM wydot_speed_unprocessed WHERE mountain between '2018-02-01' AND '2018-02-10') t1\n#JOIN wydot_speed_sensors_index t2 \n#ON (t1.controller = t2.deviceid)'''\n#cursor.execute(create_query)\n#Vol_dat=as_pandas(cursor)\n#Vol_dat.to_csv(\"C:/Users/abibeka/Documents/Hive-SDC/Vol_dat.csv\")\n#print(cursor.fetchall())", "_____no_output_____" ] ], [ [ "## $2^{nd}$ Approach to get Count Data", "_____no_output_____" ] ], [ [ "create_query='''SELECT t1.controller, t1.Time5M,t1.NRec, t2.direction, t2.milepost, t2.2015_adt FROM \n (SELECT controller, FROM_UNIXTIME(CEILING(UNIX_TIMESTAMP(mountain)/300)*300) AS Time5M, count(*) AS NRec\n FROM wydot_speed_unprocessed \n WHERE mountain between '2018-02-01' AND '2018-02-28' \n GROUP BY controller,FROM_UNIXTIME(CEILING(UNIX_TIMESTAMP(mountain)/300)*300)) t1\nJOIN (SELECT * FROM wydot_speed_sensors_index WHERE milepost BETWEEN 314 AND 360) t2 \nON (t1.controller = t2.deviceid)\nORDER BY t1.controller, t1.Time5M'''\ncursor.execute(create_query)\nVol_dat2=as_pandas(cursor)", "_____no_output_____" ], [ "Vol_dat2.to_csv(\"Z:/Apoorb/Vol_dat_V1.csv\")", "_____no_output_____" ], [ "print(Vol_dat2['t1.controller'].unique(),SS2index['deviceid'].unique()) ", "(array([ 384, 385, 386, 387, 388, 389, 390, 391, 396, 1839, 2146,\n 2147, 2178, 2191, 2202, 2213, 2246, 2263, 2274, 2289, 2298, 2310,\n 2319, 3654, 3911], dtype=int64), array([ 396, 3654, 3911, 395, 2146, 2147, 394, 383, 384, 385, 2178,\n 393, 2191, 2202, 2213, 386, 387, 388, 389, 2246, 390, 2263,\n 391, 2274, 2289, 2298, 2310, 2319, 1839], dtype=int64))\n" ] ], [ [ "## Check for Missing Index", "_____no_output_____" ] ], [ [ "# Return set with sensor ids missing from the database:\nprint(set(SS2index['deviceid'].values)-set(Vol_dat2['t1.controller'].values))", "set([393, 394, 395, 383])\n" ] ], [ [ "## Write file directly to my Bucket", "_____no_output_____" ], [ " Note: For Python 2.7 we imported BytesIO as StringIO\n Don't need to do this for Python 3.x\n Just use import StringIO as StringIO", "_____no_output_____" ] ], [ [ "csv_buffer=StringIO()\nVol_dat2.to_csv(csv_buffer,sep=\",\",index=False)\ns3=boto3.resource('s3')\ns3.Object('prod-sdc-tti-911061262852-us-east-1-bucket','abibeka/Vol_dat.csv').put(Body=csv_buffer.getvalue())", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e7272ac4d76446947fee85504c533d44d6aff3a0
15,482
ipynb
Jupyter Notebook
next-2020/TPU/PyTorch_on_Cloud_TPUs.ipynb
ultrons/ml-experiments
2999c9f64555e278dfc63c2750be636d8a0bb523
[ "MIT" ]
null
null
null
next-2020/TPU/PyTorch_on_Cloud_TPUs.ipynb
ultrons/ml-experiments
2999c9f64555e278dfc63c2750be636d8a0bb523
[ "MIT" ]
1
2020-09-28T20:07:39.000Z
2020-09-28T20:07:39.000Z
next-2020/TPU/PyTorch_on_Cloud_TPUs.ipynb
ultrons/ml-experiments
2999c9f64555e278dfc63c2750be636d8a0bb523
[ "MIT" ]
null
null
null
45.940653
287
0.543534
[ [ [ "## Train Your PyTorch Model on Cloud TPU\n\nThis notebook will show you how to:\n\n* Install PyTorch/XLA on Colab, which lets you use PyTorch with TPUs.\n* Outlines the syntactical elements use specific to TPUs.\n\n", "_____no_output_____" ], [ "<h3> &nbsp;&nbsp;Use Colab Cloud TPU&nbsp;&nbsp; <a href=\"https://cloud.google.com/tpu/\"><img valign=\"middle\" src=\"https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png\" width=\"50\"></a></h3>\n\n* On the main menu, click Runtime and select **Change runtime type**. Set \"TPU\" as the hardware accelerator.\n* The cell below makes sure you have access to a TPU on Colab.\n", "_____no_output_____" ] ], [ [ "import os\nassert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'", "_____no_output_____" ] ], [ [ "## Installing PyTorch/XLA\n\nRun the following cell (or copy it into your own notebook!) to install PyTorch, Torchvision, and PyTorch/XLA. It will take a couple minutes to run.\n\nThe PyTorch/XLA package lets PyTorch connect to Cloud TPUs. (It's named PyTorch/XLA, not PyTorch/TPU, because XLA is the name of the TPU compiler.) In particular, PyTorch/XLA makes TPU cores available as PyTorch devices. This lets PyTorch create and manipulate tensors on TPUs.", "_____no_output_____" ] ], [ [ "\n!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.7-cp36-cp36m-linux_x86_64.whl", "Requirement already satisfied: cloud-tpu-client==0.10 in /usr/local/lib/python3.6/dist-packages (0.10)\nRequirement already satisfied: torch-xla==1.7 from https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.7-cp36-cp36m-linux_x86_64.whl in /usr/local/lib/python3.6/dist-packages (1.7)\nRequirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from cloud-tpu-client==0.10) (4.1.3)\nRequirement already satisfied: google-api-python-client==1.8.0 in /usr/local/lib/python3.6/dist-packages (from cloud-tpu-client==0.10) (1.8.0)\nRequirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client==0.10) (0.2.8)\nRequirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client==0.10) (0.17.4)\nRequirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client==0.10) (4.6)\nRequirement already satisfied: six>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client==0.10) (1.15.0)\nRequirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client==0.10) (0.4.8)\nRequirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client==0.10) (0.0.4)\nRequirement already satisfied: google-api-core<2dev,>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client==0.10) (1.16.0)\nRequirement already satisfied: google-auth>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client==0.10) (1.17.2)\nRequirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client==0.10) (3.0.1)\nRequirement already satisfied: pytz in 
/usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (2018.9)\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (2.23.0)\nRequirement already satisfied: setuptools>=34.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (50.3.2)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (1.52.0)\nRequirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (3.12.4)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (4.1.1)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (2020.11.8)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client==0.10) (3.0.4)\n" ], [ "# Imports\nimport torch\nimport torch.nn as nn\n\nimport torchvision\nimport torchvision.transforms as transforms\n\n# PyTorch/XLA Library Elements\nimport torch_xla\nimport torch_xla.core.xla_model as xm\nimport torch_xla.distributed.parallel_loader as pl\nimport torch_xla.distributed.xla_multiprocessing as xmp\nimport torch_xla.debug.metrics as met", "WARNING:root:TPU has started up successfully with version pytorch-1.7\n" ], [ "# Model\nclass ToyModel(nn.Module):\n \"\"\" Toy Classifier \"\"\"\n def __init__(self):\n super(ToyModel, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, 5)\n self.mp1 = nn.MaxPool2d(2)\n self.fc1 = nn.Linear(1440, 10)\n\n def forward(self, x):\n x = nn.ReLU()(self.conv1(x))\n x = self.mp1(x)\n x = torch.flatten(x, 1)\n x = self.fc1(x)\n x = nn.Softmax(dim=-1)(x)\n return x", "_____no_output_____" ], [ "# Config Parameters\nFLAGS = {\n 'batch_size': 32,\n 'world_size': 1,\n 'epochs': 1,\n 'log_steps': 10,\n 'metrics_debug': False,\n 'updates_per_epoch' : 400\n}\nSERIAL_EXEC = xmp.MpSerialExecutor()\nWRAPPED_MODEL = xmp.MpModelWrapper(ToyModel())", "_____no_output_____" ], [ "# Training Loop\ndef train(rank, FLAGS):\n print(\"Starting train method on rank: {}\".format(rank))\n device = xm.xla_device()\n model = WRAPPED_MODEL.to(device)\n criterion = nn.CrossEntropyLoss()\n optimizer = torch.optim.SGD(model.parameters(), 1e-4)\n\n def get_dataset():\n transform = transforms.Compose(\n [\n torchvision.transforms.ToTensor(),\n 
torchvision.transforms.Normalize((0.1307,), (0.3081,))\n ]\n )\n \n return torchvision.datasets.MNIST( \n '/tmp/', train=True, download=True, transform=transform)\n\n train_dataset = SERIAL_EXEC.run(get_dataset) \n\n train_sampler = torch.utils.data.distributed.DistributedSampler(\n train_dataset, num_replicas=FLAGS['world_size'], rank=rank)\n \n train_loader = torch.utils.data.DataLoader(\n train_dataset, batch_size=FLAGS['batch_size'], shuffle=False,\n num_workers=0, sampler=train_sampler)\n # PyTorch/XLA: Parallel Loader Wrapper\n train_loader = pl.MpDeviceLoader(train_loader, device)\n\n for epoch in range(FLAGS['epochs']):\n for i, (images, labels) in enumerate(train_loader):\n if i > FLAGS['updates_per_epoch']:\n break\n # Forward pass\n outputs = model(images)\n loss = criterion(outputs, labels)\n\n # Backward and optimize\n optimizer.zero_grad()\n loss.backward()\n # PyTorch/XLA: All Reduce followed by parameter update \n xm.optimizer_step(optimizer)\n\n if not i % FLAGS['log_steps']:\n xm.master_print(\n 'Epoch: {}/{}, Loss:{}'.format(\n epoch + 1, FLAGS['epochs'], loss.item()\n )\n )\n if FLAGS['metrics_debug']:\n xm.master_print(met.metrics_report())", "_____no_output_____" ], [ "#PyTorch/XLA: Distributed training on 4 TPU Chips (8 cores)\nxmp.spawn(train, nprocs=FLAGS['world_size'], args=(FLAGS,), start_method='fork')", "Starting train method on rank: 0\nEpoch: 1/1, Loss:2.296616315841675\nEpoch: 1/1, Loss:2.299046754837036\nEpoch: 1/1, Loss:2.29319429397583\nEpoch: 1/1, Loss:2.2989964485168457\nEpoch: 1/1, Loss:2.2943575382232666\nEpoch: 1/1, Loss:2.2907660007476807\nEpoch: 1/1, Loss:2.284073829650879\nEpoch: 1/1, Loss:2.284733772277832\nEpoch: 1/1, Loss:2.2944512367248535\nEpoch: 1/1, Loss:2.2889037132263184\nEpoch: 1/1, Loss:2.302015542984009\nEpoch: 1/1, Loss:2.2987492084503174\nEpoch: 1/1, Loss:2.2814888954162598\nEpoch: 1/1, Loss:2.2919561862945557\nEpoch: 1/1, Loss:2.300222873687744\nEpoch: 1/1, Loss:2.2973620891571045\nEpoch: 1/1, Loss:2.287539005279541\nEpoch: 1/1, Loss:2.295802593231201\nEpoch: 1/1, Loss:2.2876291275024414\nEpoch: 1/1, Loss:2.2938716411590576\nEpoch: 1/1, Loss:2.3000991344451904\nEpoch: 1/1, Loss:2.2910425662994385\nEpoch: 1/1, Loss:2.2864394187927246\nEpoch: 1/1, Loss:2.284902572631836\nEpoch: 1/1, Loss:2.270642042160034\nEpoch: 1/1, Loss:2.2965376377105713\nEpoch: 1/1, Loss:2.304417371749878\nEpoch: 1/1, Loss:2.302474021911621\nEpoch: 1/1, Loss:2.310530662536621\nEpoch: 1/1, Loss:2.2839159965515137\nEpoch: 1/1, Loss:2.2952780723571777\nEpoch: 1/1, Loss:2.2826762199401855\nEpoch: 1/1, Loss:2.2912559509277344\nEpoch: 1/1, Loss:2.2923243045806885\nEpoch: 1/1, Loss:2.2986843585968018\nEpoch: 1/1, Loss:2.2928972244262695\nEpoch: 1/1, Loss:2.2917392253875732\nEpoch: 1/1, Loss:2.3000595569610596\nEpoch: 1/1, Loss:2.2889580726623535\nEpoch: 1/1, Loss:2.2901248931884766\nEpoch: 1/1, Loss:2.3025403022766113\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7272c5361e381e47ef366ff036bf77040b976e2
818,846
ipynb
Jupyter Notebook
notebook/data-visualization/foraging_inspector.ipynb
jverswijver/map-ephys
73f380ac67344234228eb39a9e8ac7e5115dc3be
[ "MIT" ]
7
2017-11-21T02:32:44.000Z
2020-09-18T15:05:12.000Z
notebook/data-visualization/foraging_inspector.ipynb
jverswijver/map-ephys
73f380ac67344234228eb39a9e8ac7e5115dc3be
[ "MIT" ]
41
2017-11-28T05:04:14.000Z
2022-03-03T17:51:58.000Z
notebook/data-visualization/foraging_inspector.ipynb
jverswijver/map-ephys
73f380ac67344234228eb39a9e8ac7e5115dc3be
[ "MIT" ]
15
2017-11-16T21:32:29.000Z
2020-11-10T01:46:14.000Z
1,776.238612
366,764
0.955283
[ [ [ "%cd ../..\nimport datajoint as dj\nimport ipywidgets as widgets\nfrom ipywidgets import interact, interact_manual\nfrom IPython.display import display,clear_output\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport json\n\nfrom pipeline import lab, experiment, foraging_analysis\nfrom pipeline.plot.foraging_plot import *", "_____no_output_____" ], [ "#data.df_behaviortrial=[]\n\ndef update_params(df_behaviortrial=None): # this script make sure that the widgets are synchronized\n plottype = task_selector.value\n if '2' in plottype:\n wr_names = (foraging_analysis.SessionTaskProtocol()*experiment.Session()*lab.WaterRestriction()&'session_task_protocol = 100').fetch('water_restriction_number')\n elif '3' in plottype:\n wr_names = (foraging_analysis.SessionTaskProtocol()*experiment.Session()*lab.WaterRestriction()&'session_task_protocol = 101').fetch('water_restriction_number')\n wr_names = np.unique(wr_names)\n wr_name_selector.options = wr_names\n wr_name = wr_name_selector.value\n subject_id = (lab.WaterRestriction()&'water_restriction_number = \"{}\"'.format(wr_name)).fetch1('subject_id')\n\n df_sessions = pd.DataFrame(foraging_analysis.SessionTaskProtocol()*experiment.Session()&'subject_id = {}'.format(subject_id))\n if '2' in plottype:\n needed = df_sessions['session_task_protocol'] == 100\n elif '3' in plottype:\n needed = df_sessions['session_task_protocol'] == 101\n df_sessions=df_sessions[needed]\n if len(df_sessions)>0:\n try:\n session_selector.min = df_sessions['session'].min()\n session_selector.max = df_sessions['session'].max()\n except:\n session_selector.max = df_sessions['session'].max()\n session_selector.min = df_sessions['session'].min()\n else:\n session_selector.min = 0\n session_selector.max = 0\n session_real_foraging = (foraging_analysis.SessionTaskProtocol()&'subject_id = {}'.format(subject_id)).fetch('session_real_foraging')\n metadata_display.value = 'Pretraining session num: {}'.format(sum(session_real_foraging ==0))\n \n \n if type(df_behaviortrial) == pd.DataFrame:\n overall_sum_prob_efficiency = (foraging_analysis.BlockEfficiency()&'subject_id = {}'.format(subject_id)).fetch('block_effi_sum_p_reward')\n overall_sum_prob_efficiency = overall_sum_prob_efficiency[overall_sum_prob_efficiency != np.array(None)]\n overall_sum_prob_efficiency = np.array([float(x) for x in overall_sum_prob_efficiency])\n metadata_display.value = metadata_display.value + '<br/> trial number: {}'.format(len(df_behaviortrial)) + '<br/> overall_sum_prob_efficiency : {:.2f}'.format(np.nanmean(overall_sum_prob_efficiency))\n\n \n \ndef enable_local_efficiency(dummy=None): # enable controls to calculating local efficiency\n if calculate_efficiency_locally.value:\n setvalue = False\n else:\n setvalue = True\n \n local_bias_sliding_window.disabled= setvalue\n local_bias_matching_window.disabled= setvalue\n local_bias_matching_step.disabled= setvalue\n \n \ndef plot_stuff(dummy=None): # this script does the plotting\n # generate figure structure\n fig=plt.figure()\n ax1=fig.add_axes([0,0,2,.8])\n ax2=fig.add_axes([0,-.6,2,.4])\n ax3=fig.add_axes([0,-1.6,2,.8])\n ax4=fig.add_axes([0,-2.6,2,.8])\n ax5 = fig.add_axes([0,-3.6,2,.8])\n plt.close(fig)\n \n with output:\n clear_output(wait=True)\n print('waiting for data..')\n # invoke plot functions \n filters = {'ignore_rate_max':filter_max_ignore_rate.value,\n 'averaging_window':plot_choice_averaging_window.value}\n \n local_matching = {'calculate_local_matching': calculate_efficiency_locally.value,\n 
'sliding_window':local_bias_sliding_window.value,\n 'matching_window':local_bias_matching_window.value,\n 'matching_step':local_bias_matching_step.value,\n 'efficiency_type':plot_efficiency.value}\n \n df_behaviortrial = extract_trials(plottype = task_selector.value,\n wr_name = wr_name_selector.value,\n sessions = session_selector.value,\n show_bias_check_trials = bias_check_selector.value,\n kernel = np.ones(plot_choice_averaging_window.value)/plot_choice_averaging_window.value,\n filters = filters,\n local_matching = local_matching)\n plot_trials(df_behaviortrial,\n ax1,\n ax2,\n plottype = task_selector.value,\n wr_name = wr_name_selector.value,\n sessions = session_selector.value,\n plot_every_choice= plot_each_choice.value,\n show_bias_check_trials = bias_check_selector.value,\n choice_filter = np.ones(plot_choice_averaging_window.value)/plot_choice_averaging_window.value)\n if local_matching['calculate_local_matching']:\n plot_local_efficiency_matching_bias(df_behaviortrial,\n ax3)\n \n else:\n plot_efficiency_matching_bias(ax3,\n plottype = task_selector.value,\n wr_name = wr_name_selector.value,\n sessions= session_selector.value,\n show_bias_check_trials = bias_check_selector.value,\n plot_efficiency_type = plot_efficiency.value)\n plot_rt_iti(df_behaviortrial,\n ax4,\n ax5,\n plottype = task_selector.value,\n wr_name = wr_name_selector.value,\n sessions = session_selector.value,\n show_bias_check_trials = bias_check_selector.value,\n kernel = np.ones(plot_choice_averaging_window.value)/plot_choice_averaging_window.value)\n \n # display figure on widget\n with output:\n clear_output(wait=True)\n display(fig)\n update_params(df_behaviortrial)\n \n if export_data.value:\n if not os.path.exists('export'):\n os.mkdir('export')\n choice = np.zeros(len(df_behaviortrial),dtype = int)\n choice[df_behaviortrial['trial_choice']=='left']=1\n choice[df_behaviortrial['trial_choice']=='right']=2\n reward = np.zeros(len(df_behaviortrial),dtype=int)\n reward[(df_behaviortrial['trial_choice']=='left')&(df_behaviortrial['outcome']=='hit')]=1\n reward[(df_behaviortrial['trial_choice']=='right')&(df_behaviortrial['outcome']=='hit')]=2\n output_dict = {'choice':choice,\n 'reward':reward,\n 'p1':np.asarray(df_behaviortrial['p_reward_left'].values,float),\n 'p2':np.asarray(df_behaviortrial['p_reward_right'].values,float).tolist(),\n 'session':np.asarray(df_behaviortrial['session'].values,int)}\n filename = os.path.join('.','export','{}.npz'.format(wr_name_selector.value))\n np.savez_compressed(filename,\n choice=output_dict['choice'],\n reward = output_dict['reward'],\n p1 = output_dict['p1'],\n p2 = output_dict['p2'],\n session = output_dict['session'])\n filename = os.path.join('.','export','{}.json'.format(wr_name_selector.value))\n with open(filename, 'w') as outfile:\n json.dump(filters, outfile, indent=2, sort_keys=True)\n \n\n \n## widgets are defined here\nborder = widgets.Output(layout={'border': '1px solid black'})\noutput = widgets.Output()\noutput2 = widgets.Output()\ntask_selector = widgets.Dropdown(layout={'width': 'auto'},\n options=['2lickport','3lickport'],\n disabled=False)\ntask_selector.observe(update_params,'value')\n\nwr_name_selector = widgets.Dropdown(layout={'width': 'auto'},\n options=[],\n disabled=False)\nwr_name_selector.observe(update_params,'value') # runs update params upon change in value\n\nsession_selector = widgets.IntRangeSlider(value=[5, 7],\n min=0,\n max=10,\n step=1,\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n 
readout_format='d')\n\nbias_check_selector = widgets.Checkbox(value=True,\n description='Include bias check',\n disabled=False,\n indent=False,\n layout={'width': 'auto'})\nmetadata_display = widgets.HTML(value=\"\",\n placeholder=' ',\n description=' ',\n layout={'width': 'auto'})\n\n\n\nplot_button = widgets.Button(description=\"plot\")\nplot_button.on_click(plot_stuff)\n\nexport_data = widgets.Checkbox(value=False,\n description='export on plot',\n disabled=False,\n indent=False,\n layout={'width': 'auto'})\n\n\n# plotting related widgets\nplot_each_choice = widgets.Checkbox(value=True,\n description='Plot each choice',\n disabled=False,\n indent=False,\n layout={'width': 'auto'})\nplot_choice_averaging_window = widgets.IntText(value=10,\n description='Choice avg window:',\n style={'description_width': 'initial'},\n disabled=False,\n layout={'width': '70%'}\n )\nplot_efficiency = widgets.Dropdown(layout={'width': 'initial'},\n options=['ideal','ideal_regret','max_prob','sum_prob','max_available','sum_available'],\n value = 'ideal',\n description = 'Efficiency:',\n disabled=False)\ncalculate_efficiency_locally = widgets.Checkbox(value=False,\n description='local matching, bias, efficiency',\n disabled=False,\n indent=False)\ncalculate_efficiency_locally.observe(enable_local_efficiency,'value')\n\nlocal_bias_sliding_window = widgets.IntText(value=100,\n description='sliding window:',\n style={'description_width': 'initial'},\n disabled=True,\n layout={'width': '70%'})\nlocal_bias_matching_window = widgets.IntText(value=300,\n description='matching window:',\n style={'description_width': 'initial'},\n disabled=True,\n layout={'width': '70%'})\nlocal_bias_matching_step= widgets.IntText(value=30,\n description='matching step:',\n style={'description_width': 'initial'},\n disabled=True,\n layout={'width': '70%'})\n\n# data filtering related widgets\nfilter_max_ignore_rate = widgets.IntText(value=100,\n description='Max ignore rate %:',\n style={'description_width': 'initial'},\n disabled=False,\n layout={'width': '70%'})\n## layout of widgets\ncontrols = widgets.HBox([widgets.VBox([widgets.Label('Task'), task_selector]),\n widgets.VBox([widgets.Label('Subject'), wr_name_selector]),\n widgets.VBox([widgets.Label('Sessions'),session_selector]),\n widgets.VBox([bias_check_selector]),\n widgets.VBox([plot_button]),\n export_data])\n\nplot_controls = widgets.HBox([widgets.VBox([plot_each_choice,plot_choice_averaging_window]),\n widgets.VBox([plot_efficiency]),\n widgets.VBox([calculate_efficiency_locally,local_bias_sliding_window,local_bias_matching_window,local_bias_matching_step])])\nfilter_controls = widgets.HBox([widgets.VBox([filter_max_ignore_rate])])\nfilter_plot_main = widgets.VBox(children=(controls,plot_controls,filter_controls,metadata_display), title=('Select data', 'Plot details','Metadata'))\n\n## Model fitting widgets\n\nmodel_fitting_widget_temp = widgets.HBox([])\n\naccordion = widgets.Accordion(children=[filter_plot_main,model_fitting_widget_temp], titles=('Filtering', 'Fitting'))\naccordion.set_title(0, 'Filter data and plot')\naccordion.set_title(1, 'Fit model')\n\n\n# displaying widgets\ndisplay(accordion)\ndisplay(border)\ndisplay(widgets.VBox([output,output2]))\ndisplay(border)\nupdate_params()", "_____no_output_____" ] ], [ [ "## Prototypes for WebGUI", "_____no_output_____" ], [ "### 1. Subject panel", "_____no_output_____" ] ], [ [ "plot_foragingWebGUI_subject(wr_name_selected='HH07', session_selected=17)", "_____no_output_____" ] ], [ [ "### 2. 
Session panel", "_____no_output_____" ] ], [ [ "plot_foragingWebGUI_session(wr_name_selected='HH07', session_selected=17)", "_____no_output_____" ] ], [ [ "### 3. Trial panel", "_____no_output_____" ] ], [ [ "plot_foragingWebGUI_trial(wr_name_selected='HH07', session_selected=17)", "_____no_output_____" ] ], [ [ "### 4. Model panel (to be done)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e727419b4630b99fd8b54e6e9d08d3825af5729e
4,439
ipynb
Jupyter Notebook
.ipynb_checkpoints/data_prepare-checkpoint.ipynb
littlebaba/secondpaper
1a07bb6b311ff0664d41bea571557ebb68ff18a2
[ "Apache-2.0" ]
null
null
null
.ipynb_checkpoints/data_prepare-checkpoint.ipynb
littlebaba/secondpaper
1a07bb6b311ff0664d41bea571557ebb68ff18a2
[ "Apache-2.0" ]
null
null
null
.ipynb_checkpoints/data_prepare-checkpoint.ipynb
littlebaba/secondpaper
1a07bb6b311ff0664d41bea571557ebb68ff18a2
[ "Apache-2.0" ]
null
null
null
28.455128
97
0.512052
[ [ [ "from keras.preprocessing.image import ImageDataGenerator\nfrom keras.preprocessing import image\nimport numpy as np\nimport os\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Using TensorFlow backend.\n" ], [ "X_PATH=r\"F:\\matlabPro\\imagefusion_noisy_lrr\\focus_images\\focus_images\"\nY_PATH=r\"F:\\matlabPro\\imagefusion_noisy_lrr\\original_images\"\nAUG_PATH=r\"F:\\matlabPro\\imagefusion_noisy_lrr\\aug\"\nIMAGE_SHAPE=(64,64)", "_____no_output_____" ], [ "def load_dataset():\n \"\"\"返回x_left、x_right、y\"\"\"\n \n x_left=[]\n x_right=[]\n y=[]\n \n for x in sorted(os.listdir(X_PATH),key=lambda x:x.split(\"_\")[0][6:]):\n img=image.load_img(os.path.join(X_PATH,x))\n img=img.resize((64,64))\n img=image.img_to_array(img)\n if 'left' in x:\n x_left.append(img)\n elif 'right' in x:\n x_right.append(img)\n \n for y_ in sorted(os.listdir(Y_PATH),key=lambda x:x.split(\"_\")[0][6:]):\n img=image.load_img(os.path.join(Y_PATH,y_))\n img=img.resize((64,64))\n img=image.img_to_array(img)\n y.append(img)\n return np.array(x_left),np.array(x_right),np.array(y)\n\ndef train_generate(batch_size,x_lf,x_rg,y_,aug_dict,lf_img_save_prefix=\"L\",\n rg_img_save_prefix=\"R\",y_img_prefix=\"O\",save_to_dir=None,seed=1):\n \n x_left_gen=ImageDataGenerator(**aug_dict)\n x_right_gen=ImageDataGenerator(**aug_dict)\n y_gen=ImageDataGenerator(**aug_dict)\n \n x_left_generator=x_left_gen.flow(\n x_lf,\n batch_size=batch_size,\n seed=seed,\n save_to_dir=save_to_dir,\n save_prefix=lf_img_save_prefix)\n \n x_right_generator=x_right_gen.flow(\n x_rg,\n batch_size=batch_size,\n seed=seed,\n save_to_dir=save_to_dir,\n save_prefix=rg_img_save_prefix)\n \n y_generator=y_gen.flow(\n y_,\n batch_size=batch_size,\n seed=seed,\n save_to_dir=save_to_dir,\n save_prefix=y_img_prefix)\n \n train_generator=zip(x_left_generator,x_right_generator,y_generator)\n for (x_left,x_right,y) in train_generator:\n x_left=x_left/255\n x_right=x_right/255\n y=y/255\n yield([x_left,x_right],y)", "_____no_output_____" ], [ "x_lf,x_rg,y_=load_dataset()\ndata_gen_args = dict(rotation_range=0.2,\n width_shift_range=0.05,\n height_shift_range=0.05,\n shear_range=0.05,\n zoom_range=0.05,\n horizontal_flip=True,\n fill_mode='nearest')\ngen=train_generate(2,x_lf,x_rg,y_,data_gen_args)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e72745818e223c958aa21271aa210c079dee594f
17,317
ipynb
Jupyter Notebook
notebooks/premier_league.ipynb
WayneLambert/data-science-portfolio
8f8794629457d657fff2de82e5e04891aee0d84a
[ "MIT" ]
null
null
null
notebooks/premier_league.ipynb
WayneLambert/data-science-portfolio
8f8794629457d657fff2de82e5e04891aee0d84a
[ "MIT" ]
null
null
null
notebooks/premier_league.ipynb
WayneLambert/data-science-portfolio
8f8794629457d657fff2de82e5e04891aee0d84a
[ "MIT" ]
null
null
null
37.810044
197
0.411214
[ [ [ "![Premier League Logo](https://wl-portfolio.s3.eu-west-2.amazonaws.com/images/premier_league_logo.png)", "_____no_output_____" ], [ "In the English Premier League, there is the idea of the \"big 6\". Let's use Pandas to see the top 6 finishing teams within the Premier League between the seasons of 2006-2007 until 2017-2018", "_____no_output_____" ] ], [ [ "# Check Python version for compatibility/reference\nimport sys\nprint(sys.executable)\nprint(sys.version)\nprint(sys.version_info)", "/Applications/JupyterLab.app/Contents/Resources/jlab_server/bin/python\n3.8.12 | packaged by conda-forge | (default, Sep 16 2021, 01:59:00) \n[Clang 11.1.0 ]\nsys.version_info(major=3, minor=8, micro=12, releaselevel='final', serial=0)\n" ], [ "# Import requisite data science libraries\nimport pandas as pd", "_____no_output_____" ], [ "# Check Package version numbering for compatibility/reference\nprint(f\"{'Pandas version:'} \\t{pd.__version__}\")", "Pandas version: \t1.3.3\n" ], [ "# Set format of number fields to be formatted as whole numbers\npd.options.display.float_format = '{:,.0f}'.format", "_____no_output_____" ], [ "# Read the file into a Pandas dataframe\ndf = pd.read_csv('../data_files/prem_stats.csv', index_col=None)", "_____no_output_____" ], [ "df.head(3)", "_____no_output_____" ], [ "# Draws are not included in the dataset, so this constant exists to enable the calculation of draws. There have always been 38 games per season since the Premier League's inception.\nGAMES_PER_SEASON = 38", "_____no_output_____" ], [ "# Select just the columns required\ndf = df.filter([ 'team', 'wins', 'losses', 'goals', 'season'])", "_____no_output_____" ], [ "# Insert a calculated field for the number of draws\ndf.insert(2, 'draws', GAMES_PER_SEASON - df['wins'] - df['losses'])", "_____no_output_____" ], [ "# Change all of the column headers to be title case\ndf.columns = map(lambda x: str(x).title(), df.columns)", "_____no_output_____" ], [ "# Append a column and use it as a position number for each group\ndf['dummy'] = 0\ndf['Pos'] = df.groupby(['Season'])['dummy'].cumcount() + 1\ndf = df.drop('dummy', axis=1)", "_____no_output_____" ], [ "# Organise the dataframe in the order required for presentation\ndf = df[['Pos', 'Team', 'Wins', 'Draws', 'Losses', 'Goals', 'Season']]", "_____no_output_____" ], [ "# Set the seasons groups to iterate over\nseasons = df.groupby(['Season'])", "_____no_output_____" ], [ "# Reflect the top 6 finishing teams for each of the seasons\nseasons_tables = [seasons.get_group(season).head(6) for season in seasons.groups]", "_____no_output_____" ], [ "seasons_tables", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7275222b9af1fbfb695eb8cc3ee7b78de1350d5
394,213
ipynb
Jupyter Notebook
Regularized polynomial regression with linear and random sampling.ipynb
rajan-praveen/Machine-Learning-Concepts
6c028a724fd371cf33371f94652f9864e3806eec
[ "MIT" ]
142
2019-05-01T01:14:05.000Z
2022-03-13T14:59:32.000Z
Regularized polynomial regression with linear and random sampling.ipynb
ahmetemresertdemir/Machine-Learning-Concepts
9b987533bf0ee33743f2cba500b6f4cd3dce6028
[ "MIT" ]
null
null
null
Regularized polynomial regression with linear and random sampling.ipynb
ahmetemresertdemir/Machine-Learning-Concepts
9b987533bf0ee33743f2cba500b6f4cd3dce6028
[ "MIT" ]
50
2019-05-01T01:20:48.000Z
2022-01-15T17:38:48.000Z
383.475681
39,914
0.921073
[ [ [ "# Ridge/LASSO polynomial regression with linear and random sampling\n* Input variable space is constructed using random sampling/cluster pick/uniform sampling\n* Linear fit is often inadequate but higher-order polynomial fits often leads to overfitting i.e. learns spurious, flawed relationships between input and output\n* Ridge and LASSO regression are used with varying model complexity (degree of polynomial)\n* Model score is obtained on a test set and average score over a # of runs is compared for linear and random sampling", "_____no_output_____" ], [ "### Import libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Global variables for the program", "_____no_output_____" ] ], [ [ "N_points = 41 # Number of points for constructing function\nx_min = 1 # Min of the range of x (feature)\nx_max = 10 # Max of the range of x (feature)\nnoise_mean = 0 # Mean of the Gaussian noise adder\nnoise_sd = 2 # Std.Dev of the Gaussian noise adder\nridge_alpha = tuple([10**(x) for x in range(-3,0,1) ]) # Alpha (regularization strength) of ridge regression\nlasso_eps = 0.001\nlasso_nalpha=20\nlasso_iter=1000\ndegree_min = 2\ndegree_max = 8", "_____no_output_____" ] ], [ [ "### Generate feature and output vector following a non-linear function\n$$ The\\ ground\\ truth\\ or\\ originating\\ function\\ is\\ as\\ follows:\\ $$\n\n$$ y=f(x)= x^2.sin(x).e^{-0.1x}+\\psi(x) $$\n\n$$: \\psi(x) = {\\displaystyle f(x\\;|\\;\\mu ,\\sigma ^{2})={\\frac {1}{\\sqrt {2\\pi \\sigma ^{2}}}}\\;e^{-{\\frac {(x-\\mu )^{2}}{2\\sigma ^{2}}}}} $$", "_____no_output_____" ] ], [ [ "x_smooth = np.array(np.linspace(x_min,x_max,1001))", "_____no_output_____" ], [ "# Linearly spaced sample points\nX=np.array(np.linspace(x_min,x_max,N_points))\n\n# Samples drawn from uniform random distribution\nX_sample = x_min+np.random.rand(N_points)*(x_max-x_min)", "_____no_output_____" ], [ "def func(x):\n result = x**2*np.sin(x)*np.exp(-(1/x_max)*x)\n return (result)", "_____no_output_____" ], [ "noise_x = np.random.normal(loc=noise_mean,scale=noise_sd,size=N_points)", "_____no_output_____" ], [ "y = func(X)+noise_x\ny_sampled = func(X_sample)+noise_x", "_____no_output_____" ], [ "df = pd.DataFrame(data=X,columns=['X'])\ndf['Ideal y']=df['X'].apply(func)\ndf['y']=y\ndf['X_sampled']=X_sample\ndf['y_sampled']=y_sampled\ndf.head()", "_____no_output_____" ] ], [ [ "### Plot the function(s), both the ideal characteristic and the observed output (with process and observation noise)", "_____no_output_____" ] ], [ [ "df.plot.scatter('X','Ideal y',title='Ideal y',grid=True,edgecolors=(0,0,0),c='blue',s=40,figsize=(10,5))\nplt.plot(x_smooth,func(x_smooth),'k')", "_____no_output_____" ], [ "df.plot.scatter('X_sampled',y='y_sampled',title='Randomly sampled y',\n grid=True,edgecolors=(0,0,0),c='orange',s=40,figsize=(10,5))\nplt.plot(x_smooth,func(x_smooth),'k')", "_____no_output_____" ], [ "df.plot.scatter('X',y='y',title='Linearly sampled y',grid=True,edgecolors=(0,0,0),c='orange',s=40,figsize=(10,5))\nplt.plot(x_smooth,func(x_smooth),'k')", "_____no_output_____" ] ], [ [ "### Import scikit-learn librares and prepare train/test splits", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import LassoCV\nfrom sklearn.linear_model import RidgeCV\nfrom sklearn.ensemble import AdaBoostRegressor\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.cross_validation 
import train_test_split\nfrom sklearn.pipeline import make_pipeline", "C:\\Users\\Tirtha\\Python\\Anaconda3\\lib\\site-packages\\sklearn\\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n" ], [ "X_train, X_test, y_train, y_test = train_test_split(df['X'], df['y'], test_size=0.33)\nX_train=X_train.values.reshape(-1,1)\nX_test=X_test.values.reshape(-1,1)", "_____no_output_____" ] ], [ [ "### Polynomial model with Ridge regularization (pipelined) with lineary spaced samples\n** This is an advanced machine learning method which prevents over-fitting by penalizing high-valued coefficients i.e. keep them bounded **", "_____no_output_____" ] ], [ [ "linear_sample_score = []\npoly_degree = []\nfor degree in range(degree_min,degree_max+1):\n #model = make_pipeline(PolynomialFeatures(degree), RidgeCV(alphas=ridge_alpha,normalize=True,cv=5))\n model = make_pipeline(PolynomialFeatures(degree), LassoCV(eps=lasso_eps,n_alphas=lasso_nalpha, \n max_iter=lasso_iter,normalize=True,cv=5))\n #model = make_pipeline(PolynomialFeatures(degree), LinearRegression(normalize=True))\n model.fit(X_train, y_train)\n y_pred = np.array(model.predict(X_train))\n test_pred = np.array(model.predict(X_test))\n RMSE=np.sqrt(np.sum(np.square(y_pred-y_train)))\n test_score = model.score(X_test,y_test)\n linear_sample_score.append(test_score)\n poly_degree.append(degree)\n print(\"Test score of model with degree {}: {}\\n\".format(degree,test_score))\n \n #plt.figure()\n #plt.title(\"RMSE: {}\".format(RMSE),fontsize=10)\n #plt.suptitle(\"Polynomial of degree {}\".format(degree),fontsize=15)\n #plt.xlabel(\"X training values\")\n #plt.ylabel(\"Fitted and training values\")\n #plt.scatter(X_train,y_pred)\n #plt.scatter(X_train,y_train)\n \n plt.figure()\n plt.title(\"Predicted vs. 
actual for polynomial of degree {}\".format(degree),fontsize=15)\n plt.xlabel(\"Actual values\")\n plt.ylabel(\"Predicted values\")\n plt.scatter(y_test,test_pred)\n plt.plot(y_test,y_test,'r',lw=2)", "Test score of model with degree 2: -0.04323708983722585\n\nTest score of model with degree 3: -0.04323708983722607\n\n" ], [ "linear_sample_score", "_____no_output_____" ] ], [ [ "### Modeling with randomly sampled data set", "_____no_output_____" ] ], [ [ "X_train, X_test, y_train, y_test = train_test_split(df['X_sampled'], df['y_sampled'], test_size=0.33)\nX_train=X_train.values.reshape(-1,1)\nX_test=X_test.values.reshape(-1,1)", "_____no_output_____" ], [ "random_sample_score = []\npoly_degree = []\nfor degree in range(degree_min,degree_max+1):\n #model = make_pipeline(PolynomialFeatures(degree), RidgeCV(alphas=ridge_alpha,normalize=True,cv=5))\n model = make_pipeline(PolynomialFeatures(degree), LassoCV(eps=lasso_eps,n_alphas=lasso_nalpha, \n max_iter=lasso_iter,normalize=True,cv=5))\n #model = make_pipeline(PolynomialFeatures(degree), LinearRegression(normalize=True))\n model.fit(X_train, y_train)\n y_pred = np.array(model.predict(X_train))\n test_pred = np.array(model.predict(X_test))\n RMSE=np.sqrt(np.sum(np.square(y_pred-y_train)))\n test_score = model.score(X_test,y_test)\n random_sample_score.append(test_score)\n poly_degree.append(degree)\n \n print(\"Test score of model with degree {}: {}\\n\".format(degree,test_score))\n \n #plt.figure()\n #plt.title(\"RMSE: {}\".format(RMSE),fontsize=10)\n #plt.suptitle(\"Polynomial of degree {}\".format(degree),fontsize=15)\n #plt.xlabel(\"X training values\")\n #plt.ylabel(\"Fitted and training values\")\n #plt.scatter(X_train,y_pred)\n #plt.scatter(X_train,y_train)\n \n plt.figure()\n plt.title(\"Predicted vs. actual for polynomial of degree {}\".format(degree),fontsize=15)\n plt.xlabel(\"Actual values\")\n plt.ylabel(\"Predicted values\")\n plt.scatter(y_test,test_pred)\n plt.plot(y_test,y_test,'r',lw=2)", "Test score of model with degree 2: -0.12434801463459723\n\nTest score of model with degree 3: -0.0769230959117706\n\n" ], [ "random_sample_score", "_____no_output_____" ], [ "df_score = pd.DataFrame(data={'degree':[d for d in range(degree_min,degree_max+1)],\n 'Linear sample score':linear_sample_score,\n 'Random sample score':random_sample_score})\ndf_score", "_____no_output_____" ], [ "plt.figure(figsize=(8,5))\nplt.grid(True)\nplt.plot(df_score['degree'],df_score['Linear sample score'],lw=2)\nplt.plot(df_score['degree'],df_score['Random sample score'],lw=2)\nplt.xlabel (\"Model Complexity: Degree of polynomial\",fontsize=20)\nplt.ylabel (\"Model Score: R^2 score on test set\",fontsize=15)\nplt.legend(fontsize=15)", "_____no_output_____" ] ], [ [ "#### Checking the regularization strength from the cross-validated model pipeline", "_____no_output_____" ] ], [ [ "m=model.steps[1][1]\nm.alpha_", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e72756472d7cd723e5bf2de12003cc9e4361f3ef
50,997
ipynb
Jupyter Notebook
Lecture 2/Notebook-5-python-os-and-filesystem.ipynb
FabianAC07/Data-Analysis-with-Python-Zero-to-Pandas
7452f50cdced89d8dbd85e2f9c9cd46152313994
[ "MIT" ]
1
2022-03-02T11:54:20.000Z
2022-03-02T11:54:20.000Z
Lecture 2/Notebook-5-python-os-and-filesystem.ipynb
FabianAC07/Data-Analysis-with-Python-Zero-to-Pandas
7452f50cdced89d8dbd85e2f9c9cd46152313994
[ "MIT" ]
null
null
null
Lecture 2/Notebook-5-python-os-and-filesystem.ipynb
FabianAC07/Data-Analysis-with-Python-Zero-to-Pandas
7452f50cdced89d8dbd85e2f9c9cd46152313994
[ "MIT" ]
5
2021-01-28T13:45:13.000Z
2022-03-12T10:24:34.000Z
29.241399
1,153
0.54213
[ [ [ "# Reading from and Writing to Files using Python\n\n### Part 5 of \"A Gentle Introduction to Programming with Python\"\n\nThis tutorial is the fifth in a series on introduction to programming using the Python language. These tutorials take a practical coding-based approach, and the best way to learn the material is to execute the code and experiment with the examples. Check out the full series here: \n\n1. [First Steps with Python and Jupyter](https://jovian.ml/aakashns/first-steps-with-python)\n2. [A Quick Tour of Variables and Data Types](https://jovian.ml/aakashns/python-variables-and-data-types)\n3. [Branching using Conditional Statements and Loops](https://jovian.ml/aakashns/python-branching-and-loops)\n4. [Writing Reusable Code Using Functions](https://jovian.ml/aakashns/python-functions-and-scope)\n5. [Reading from and Writing to Files](https://jovian.ml/aakashns/python-os-and-filesystem)\n6. [Object Oriented Programming with Classes](https://jovian.ml/aakashns/python-object-oriented-programming)\n", "_____no_output_____" ], [ "## How to run the code\n\nThis tutorial hosted on [Jovian.ml](https://www.jovian.ml), a platform for sharing data science projects online. You can \"run\" this tutorial and experiment with the code examples in a couple of ways: *using free online resources* (recommended) or *on your own computer*.\n\n> This tutorial is a [Jupyter notebook](https://jupyter.org) - a document made of \"cells\", which can contain explanations in text or code written in Python. Code cells can be executed and their outputs e.g. numbers, messages, graphs, tables, files etc. can be viewed within the notebook, which makes it a really powerful platform for experimentation and analysis. Don't afraid to experiment with the code & break things - you'll learn a lot by encoutering and fixing errors. You can use the \"Kernel > Restart & Clear Output\" menu option to clear all outputs and start again from the top of the notebook.\n\n### Option 1: Running using free online resources (1-click, recommended)\n\nThe easiest way to start executing this notebook is to click the \"Run\" button at the top of this page, and select \"Run on Binder\". This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks. You can also select \"Run on Colab\" or \"Run on Kaggle\", but you'll need to create an account on [Google Colab](https://colab.research.google.com) or [Kaggle](https://kaggle.com) to use these platforms.\n\n\n### Option 2: Running on your computer locally\n\nYou'll need to install Python and download this notebook on your computer to run in locally. We recommend using the [Conda](https://docs.conda.io/en/latest/) distribution of Python. Here's what you need to do to get started:\n\n1. Install Conda by [following these instructions](https://conda.io/projects/conda/en/latest/user-guide/install/index.html). Make sure to add Conda binaries to your system `PATH` to be able to run the `conda` command line tool from your Mac/Linux terminal or Windows command prompt. \n\n\n2. Create and activate a [Conda virtual environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) called `intro-to-python` which you can use for this tutorial series:\n```\nconda create -n intro-to-python -y python=3.8 \nconda activate intro-to-python\n```\nYou'll need to create the environment only once, but you'll have to activate it every time want to run the notebook. 
When the environment is activated, you should be able to see a prefix `(intro-to-python)` within your terminal or command prompt.\n\n\n3. Install the required Python libraries within the environment by running the following command on your terminal or command prompt:\n```\npip install jovian jupyter numpy pandas matplotlib seaborn --upgrade\n```\n\n4. Download the notebook for this tutorial using the `jovian clone` command:\n```\njovian clone aakashns/python-os-and-filesystem\n```\nThe notebook is downloaded to the directory `python-os-and-filesystem`.\n\n\n5. Enter the project directory and start the Jupyter notebook:\n```\ncd python-os-and-filesystem\njupyter notebook\n```\n\n6. You can now access Jupyter's web interface by clicking the link that shows up on the terminal or by visiting http://localhost:8888 on your browser. Click on the notebook `python-os-and-filesystem.ipynb` to open it and run the code. If you want to type out the code yourself, you can also create a new notebook using the \"New\" button.\n\n\n", "_____no_output_____" ], [ "## Interacting with the OS and filesystem\n\nThe `os` module in Python provides many functions for interacting with the OS and the filesystem. Let's import it and try out some examples.", "_____no_output_____" ] ], [ [ "import os", "_____no_output_____" ] ], [ [ "We can check the present working directory using the `os.getcwd` function.", "_____no_output_____" ] ], [ [ "os.getcwd()", "_____no_output_____" ] ], [ [ "To get the list of files in a directory, use `os.listdir`. You pass an absolute or relative path of a directory as the argument to the function.", "_____no_output_____" ] ], [ [ "help(os.listdir)", "Help on built-in function listdir in module posix:\n\nlistdir(path=None)\n Return a list containing the names of the files in the directory.\n \n path can be specified as either str, bytes, or a path-like object. If path is bytes,\n the filenames returned will also be bytes; in all other circumstances\n the filenames returned will be str.\n If path is None, uses the path='.'.\n On some platforms, path may also be specified as an open file descriptor;\\\n the file descriptor must refer to a directory.\n If this functionality is unavailable, using it raises NotImplementedError.\n \n The list is in arbitrary order. It does not include the special\n entries '.' and '..' even if they are present in the directory.\n\n" ], [ "os.listdir('.') # relative path", "_____no_output_____" ], [ "os.listdir('/usr') # absolute path", "_____no_output_____" ] ], [ [ "A new directory can be created using `os.makedirs`. Let's create a new directory called `data`, where we'll later download some files.", "_____no_output_____" ] ], [ [ "os.makedirs('./data', exist_ok=True)", "_____no_output_____" ] ], [ [ "Can you figure out what the argument `exist_ok` does? 
Try using the `help` function or [read the documentation](https://docs.python.org/3/library/os.html#os.makedirs).\n\nLet's verify that the directory was, in fact, created and is currently empty.", "_____no_output_____" ] ], [ [ "'data' in os.listdir('.')", "_____no_output_____" ], [ "os.listdir('./data')", "_____no_output_____" ] ], [ [ "Let us download some files into the `data` directory using the `urllib` module.", "_____no_output_____" ] ], [ [ "url1 = 'https://hub.jovian.ml/wp-content/uploads/2020/08/loans1.txt'\nurl2 = 'https://hub.jovian.ml/wp-content/uploads/2020/08/loans2.txt'\nurl3 = 'https://hub.jovian.ml/wp-content/uploads/2020/08/loans3.txt'", "_____no_output_____" ], [ "import urllib.request", "_____no_output_____" ], [ "urllib.request.urlretrieve(url1, './data/loans1.txt')", "_____no_output_____" ], [ "urllib.request.urlretrieve(url2, './data/loans2.txt')", "_____no_output_____" ], [ "urllib.request.urlretrieve(url3, './data/loans3.txt')", "_____no_output_____" ] ], [ [ "Let's verify that the files were downloaded.", "_____no_output_____" ] ], [ [ "os.listdir('./data')", "_____no_output_____" ] ], [ [ "### Reading from a file \n\nTo read the contents of a file, we first need to open the file using the built-in `open` function. The `open` function returns a file object, which provides several methods for interacting with the contents of the file. It also accepts a `mode` argument.", "_____no_output_____" ] ], [ [ "file1 = open('./data/loans1.txt', mode='r')", "_____no_output_____" ] ], [ [ "The `mode` argument specifies how we can interact with the file. The following options are supported:\n\n```\n ========= ===============================================================\n Character Meaning\n --------- ---------------------------------------------------------------\n 'r' open for reading (default)\n 'w' open for writing, truncating the file first\n 'x' create a new file and open it for writing\n 'a' open for writing, appending to the end of the file if it exists\n 'b' binary mode\n 't' text mode (default)\n '+' open a disk file for updating (reading and writing)\n 'U' universal newline mode (deprecated)\n ========= ===============================================================\n```\n\nTo view the contents of the file we can use the `read` method of the file object.", "_____no_output_____" ] ], [ [ "file1_contents = file1.read()", "_____no_output_____" ], [ "print(file1_contents)", "amount,duration,rate,down_payment\n10000,36,0.08,20000\n200000,12,0.1,\n628400,120,0.12,100000\n4637400,240,0.06,\n42900,90,0.07,8900\n916000,16,0.13,\n45230,48,0.08,4300\n991360,99,0.08,\n423000,27,0.09,47200\n" ] ], [ [ "The file contains information about loans. It is a set of comma-separated values (CSV). \n\n> **CSVs**: A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. A CSV file typically stores tabular data (numbers and text) in plain text, in which case each line will have the same number of fields. (Wikipedia)\n\nThe first line of the file is the header, which indicates what each of the numbers on the remaining lines represent. Each of the remaining lines provides information about a loan. 
Thus, the second line `10000,36,0.08,20000` represents a loan with:\n\n* an *amount* of `$10000`, \n* *duration* of `36` months, \n* *rate of interest* of `8%` per annum, and \n* a down payment of `$20000`\n\nCSV is a common file format used for sharing data for analysis and visualization. Over the course of this tutorial, we will read the data from these CSV files, process it, and write the results back to files. Before we continue, let's close the file using the `close` method (otherwise Python will continue to hold the entire file in RAM)", "_____no_output_____" ] ], [ [ "file1.close()", "_____no_output_____" ] ], [ [ "Once a file is closed, it can no longer be read.", "_____no_output_____" ] ], [ [ "file1.read()", "_____no_output_____" ] ], [ [ "### Closing files automatically using `with`\n\nTo make it easy to automatically close a file once you are done processing it, you can open it using the `with` statement.", "_____no_output_____" ] ], [ [ "with open('./data/loans2.txt') as file2:\n file2_contents = file2.read()\n print(file2_contents)", "amount,duration,rate,down_payment\n828400,120,0.11,100000\n4633400,240,0.06,\n42900,90,0.08,8900\n983000,16,0.14,\n15230,48,0.07,4300\n\n" ] ], [ [ "Once the statements within the `with` block are executed, the `.close` method on `file2` is automatically invoked. Let's verify this by trying to read from the file object again.", "_____no_output_____" ] ], [ [ "file2.read()", "_____no_output_____" ] ], [ [ "### Reading a file line by line\n\n\nFile objects provide a `readlines` method to read a file line-by-line. ", "_____no_output_____" ] ], [ [ "with open('./data/loans3.txt', 'r') as file3:\n file3_lines = file3.readlines()", "_____no_output_____" ], [ "file3_lines", "_____no_output_____" ] ], [ [ "### Processing data from files\n\nBefore performing any operations on the data stored in a file, we need to convert the contents of the file from one large string into Python data types. For the file `loans1.txt` containing information about loans in a CSV format, we can do the following:\n\n* Read the file line by line\n* Parse the first line to get a list of the column names or headers\n* Split each remaining line and convert each value into a float\n* Create a dictionary for each loan using the headers as keys\n* Create a list of dictionaries to keep track of all the loans\n\nSince we will perform the same operations for multiple files, it would be useful to define a function `read_csv` to do this. We'll also define some helper functions to build up the functionality step by step. 
\n\nLet's start by defining a function `parse_headers` which takes a line as input and returns a list of column headers.", "_____no_output_____" ] ], [ [ "def parse_headers(header_line):\n    return header_line.strip().split(',')", "_____no_output_____" ] ], [ [ "The `strip` method removes any extra spaces and the newline character `\\n`, and the `split` method breaks a string into a list using the given separator (`,` in this case).", "_____no_output_____" ] ], [ [ "file3_lines[0]", "_____no_output_____" ], [ "headers = parse_headers(file3_lines[0])", "_____no_output_____" ], [ "headers", "_____no_output_____" ] ], [ [ "Next, let's define a function `parse_values` which takes a line containing some data, and returns a list of floating point numbers.", "_____no_output_____" ] ], [ [ "def parse_values(data_line):\n    values = []\n    for item in data_line.strip().split(','):\n        values.append(float(item))\n    return values", "_____no_output_____" ], [ "file3_lines[1]", "_____no_output_____" ], [ "parse_values(file3_lines[1])", "_____no_output_____" ] ], [ [ "The values were parsed and converted to floating point numbers, as expected. Let's try it for another line from the file, which does not contain a value for the down payment.", "_____no_output_____" ] ], [ [ "file3_lines[2]", "_____no_output_____" ], [ "parse_values(file3_lines[2])", "_____no_output_____" ] ], [ [ "This leads to a `ValueError` because the empty string `''` cannot be converted to a float. We can enhance the `parse_values` function to handle this *edge case*.", "_____no_output_____" ] ], [ [ "def parse_values(data_line):\n    values = []\n    for item in data_line.strip().split(','):\n        if item == '':\n            values.append(0.0)\n        else:\n            values.append(float(item))\n    return values", "_____no_output_____" ], [ "file3_lines[2]", "_____no_output_____" ], [ "parse_values(file3_lines[2])", "_____no_output_____" ] ], [ [ "Next, let's define a function `create_item_dict` which takes a list of values and a list of headers as inputs, and returns a dictionary with the values associated with their respective headers as keys.", "_____no_output_____" ] ], [ [ "def create_item_dict(values, headers):\n    result = {}\n    for value, header in zip(values, headers):\n        result[header] = value\n    return result", "_____no_output_____" ] ], [ [ "Can you figure out what the Python built-in function `zip` does? 
Try out an example, or [read the documentation](https://docs.python.org/3.3/library/functions.html#zip).", "_____no_output_____" ] ], [ [ "for item in zip([1,2,3], ['a', 'b', 'c']):\n    print(item)", "(1, 'a')\n(2, 'b')\n(3, 'c')\n" ] ], [ [ "Let's try out `create_item_dict` with a couple of examples.", "_____no_output_____" ] ], [ [ "file3_lines[1]", "_____no_output_____" ], [ "values1 = parse_values(file3_lines[1])\ncreate_item_dict(values1, headers)", "_____no_output_____" ], [ "file3_lines[2]", "_____no_output_____" ], [ "values2 = parse_values(file3_lines[2])\ncreate_item_dict(values2, headers)", "_____no_output_____" ] ], [ [ "As expected, the values & headers are combined to create a dictionary with the appropriate key-value pairs.\n\nWe are now ready to put it all together and define the `read_csv` function.", "_____no_output_____" ] ], [ [ "def read_csv(path):\n    result = []\n    # Open the file in read mode\n    with open(path, 'r') as f:\n        # Get a list of lines\n        lines = f.readlines()\n        # Parse the header\n        headers = parse_headers(lines[0])\n        # Loop over the remaining lines\n        for data_line in lines[1:]:\n            # Parse the values\n            values = parse_values(data_line)\n            # Create a dictionary using values & headers\n            item_dict = create_item_dict(values, headers)\n            # Add the dictionary to the result\n            result.append(item_dict)\n    return result", "_____no_output_____" ] ], [ [ "Let's try it out!", "_____no_output_____" ] ], [ [ "with open('./data/loans2.txt') as file2:\n    print(file2.read())", "amount,duration,rate,down_payment\n828400,120,0.11,100000\n4633400,240,0.06,\n42900,90,0.08,8900\n983000,16,0.14,\n15230,48,0.07,4300\n\n" ], [ "read_csv('./data/loans2.txt')", "_____no_output_____" ] ], [ [ "The file is read and converted to a list of dictionaries, as expected. The `read_csv` function is generic enough that it can parse any file in the CSV format, with any number of rows or columns. Here's the full code for `read_csv` along with the helper functions:", "_____no_output_____" ] ], [ [ "def parse_headers(header_line):\n    return header_line.strip().split(',')\n\ndef parse_values(data_line):\n    values = []\n    for item in data_line.strip().split(','):\n        if item == '':\n            values.append(0.0)\n        else:\n            values.append(float(item))\n    return values\n\ndef create_item_dict(values, headers):\n    result = {}\n    for value, header in zip(values, headers):\n        result[header] = value\n    return result\n\ndef read_csv(path):\n    result = []\n    # Open the file in read mode\n    with open(path, 'r') as f:\n        # Get a list of lines\n        lines = f.readlines()\n        # Parse the header\n        headers = parse_headers(lines[0])\n        # Loop over the remaining lines\n        for data_line in lines[1:]:\n            # Parse the values\n            values = parse_values(data_line)\n            # Create a dictionary using values & headers\n            item_dict = create_item_dict(values, headers)\n            # Add the dictionary to the result\n            result.append(item_dict)\n    return result", "_____no_output_____" ] ], [ [ "Try to create small, generic and reusable functions whenever possible, as they will likely be useful beyond just the problem at hand, and save you a lot of effort in the future.\n\nIn the [previous tutorial](https://jovian.ml/aakashns/python-functions-and-scope), we defined a function to calculate the equal monthly installments for a loan. 
Here's what it looked like:", "_____no_output_____" ] ], [ [ "import math\n\ndef loan_emi(amount, duration, rate, down_payment=0):\n    \"\"\"Calculates the equal monthly installment (EMI) for a loan.\n    \n    Arguments:\n        amount - Total amount to be spent (loan + down payment)\n        duration - Duration of the loan (in months)\n        rate - Rate of interest (monthly)\n        down_payment (optional) - Optional initial payment (deducted from amount)\n    \"\"\"\n    loan_amount = amount - down_payment\n    try:\n        emi = loan_amount * rate * ((1+rate)**duration) / (((1+rate)**duration)-1)\n    except ZeroDivisionError:\n        emi = loan_amount / duration\n    emi = math.ceil(emi)\n    return emi", "_____no_output_____" ] ], [ [ "We can use this function to calculate EMIs for all the loans in a file.", "_____no_output_____" ] ], [ [ "loans2 = read_csv('./data/loans2.txt')", "_____no_output_____" ], [ "loans2", "_____no_output_____" ], [ "for loan in loans2:\n    loan['emi'] = loan_emi(loan['amount'], \n                           loan['duration'], \n                           loan['rate']/12, # the CSV contains yearly rates\n                           loan['down_payment'])", "_____no_output_____" ], [ "loans2", "_____no_output_____" ] ], [ [ "You can see that each loan now has a new key `emi`, which provides the EMI for the loan. We can extract this logic into a function, so that it can be used for other files too.", "_____no_output_____" ] ], [ [ "def compute_emis(loans):\n    for loan in loans:\n        loan['emi'] = loan_emi(\n            loan['amount'], \n            loan['duration'], \n            loan['rate']/12, # the CSV contains yearly rates\n            loan['down_payment'])", "_____no_output_____" ] ], [ [ "### Writing to files\n\nNow that we have performed some processing on the data, it would be a good idea to write the results back to a file in the CSV format. We can do this by creating/opening a file in write mode with `open` and using the `.write` method of the file object. The string `format` method will be useful for constructing each line of output.", "_____no_output_____" ] ], [ [ "loans2 = read_csv('./data/loans2.txt')", "_____no_output_____" ], [ "compute_emis(loans2)", "_____no_output_____" ], [ "loans2", "_____no_output_____" ], [ "with open('./data/emis2.txt', 'w') as f:\n    for loan in loans2:\n        f.write('{},{},{},{},{}\\n'.format(\n            loan['amount'], \n            loan['duration'], \n            loan['rate'], \n            loan['down_payment'], \n            loan['emi']))", "_____no_output_____" ] ], [ [ "Let's verify that the file was created and written to as expected.", "_____no_output_____" ] ], [ [ "os.listdir('data')", "_____no_output_____" ], [ "with open('./data/emis2.txt', 'r') as f:\n    print(f.read())", "828400.0,120.0,0.11,100000.0,10034\n4633400.0,240.0,0.06,0.0,33196\n42900.0,90.0,0.08,8900.0,504\n983000.0,16.0,0.14,0.0,67707\n15230.0,48.0,0.07,4300.0,262\n\n" ] ], [ [ "Great, looks like the loan details (along with the computed EMIs) were written into the file.\n\nLet's define a generic function `write_csv` which takes a list of dictionaries and writes it to a file in CSV format. We will also include the column headers in the first line.", "_____no_output_____" ] ], [ [ "def write_csv(items, path):\n    # Open the file in write mode\n    with open(path, 'w') as f:\n        # Return if there's nothing to write\n        if len(items) == 0:\n            return\n        \n        # Write the headers in the first line\n        headers = list(items[0].keys())\n        f.write(','.join(headers) + '\\n')\n        \n        # Write one item per line\n        for item in items:\n            values = []\n            for header in headers:\n                values.append(str(item.get(header, \"\")))\n            f.write(','.join(values) + \"\\n\")", "_____no_output_____" ] ], [ [ "Do you understand how the function works? 
If not, try executing each statement line by line in a separate cell to figure out how it works. \n\nLet's try it out!", "_____no_output_____" ] ], [ [ "loans3 = read_csv('./data/loans3.txt')", "_____no_output_____" ], [ "compute_emis(loans3)", "_____no_output_____" ], [ "write_csv(loans3, './data/emis3.txt')", "_____no_output_____" ], [ "with open('./data/emis3.txt', 'r') as f:\n    print(f.read())", "amount,duration,rate,down_payment,emi\n883000.0,16.0,0.14,0.0,60819\n45230.0,48.0,0.07,4300.0,981\n100000.0,12.0,0.1,0.0,8792\n728400.0,120.0,0.12,100000.0,9016\n3637400.0,240.0,0.06,0.0,26060\n82900.0,90.0,0.07,8900.0,1060\n316000.0,16.0,0.13,0.0,21618\n15230.0,48.0,0.08,4300.0,267\n991360.0,99.0,0.08,0.0,13712\n323000.0,27.0,0.09,4720010000.0,-193751447\n528400.0,120.0,0.11,100000.0,5902\n8633400.0,240.0,0.06,0.0,61853\n12900.0,90.0,0.08,8900.0,60\n\n" ] ], [ [ "With just 4 lines of code, we can now read each downloaded file, calculate the EMIs, and write the results back to new files:", "_____no_output_____" ] ], [ [ "for i in range(1,4):\n    loans = read_csv('./data/loans{}.txt'.format(i))\n    compute_emis(loans)\n    write_csv(loans, './data/emis{}.txt'.format(i))", "_____no_output_____" ], [ "os.listdir('./data')", "_____no_output_____" ] ], [ [ "Isn't that wonderful? Once all the functions are defined, we can calculate EMIs for thousands or even millions of loans across many files with just a few lines of code, in a few seconds. Now we're starting to see the real power of using a programming language like Python for processing data!", "_____no_output_____" ], [ "## Save and upload your notebook\n\nWhether you're running this Jupyter notebook on an online service like Binder or on your local machine, it's important to save your work from time to time, so that you can access it later, or share it online. You can upload this notebook to your [Jovian.ml](https://jovian.ml) account using the `jovian` Python library.", "_____no_output_____" ] ], [ [ "# Install the library\n!pip install jovian --upgrade --quiet", "_____no_output_____" ], [ "# Import the jovian module\nimport jovian", "_____no_output_____" ], [ "jovian.commit(project='python-os-and-filesystem')", "_____no_output_____" ] ], [ [ "## Summary and Further Reading\n\nWith this we complete our discussion of reading from and writing to files in Python. We've covered the following topics in this tutorial:\n\n* Interacting with the filesystem using the `os` module\n* Downloading files from URLs using the `urllib` module\n* Opening files using the `open` built-in function\n* Reading the contents of a file using `.read`\n* Closing files automatically using `with`\n* Reading a file line by line using `readlines`\n* Processing data from a CSV file using our own functions\n* Using helper functions to build more complex functions\n* Writing data to a file using `.write`\n\n\n\nThis is by no means an exhaustive or comprehensive tutorial on working with files in Python. Following are some more resources you should check out:\n\n* Python Tutorial at W3Schools: https://www.w3schools.com/python/\n* Practical Python Programming: https://dabeaz-course.github.io/practical-python/Notes/Contents.html\n* Python official documentation: https://docs.python.org/3/tutorial/index.html\n\nYou are ready to move on to the next tutorial: [\"Object-oriented programming using classes in Python\"](https://jovian.ml/aakashns/python-object-oriented-programming).", "_____no_output_____" ] ] ]
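A closing note (an addition, not part of the original tutorial): hand-rolling a CSV parser is a great exercise, but Python's standard library ships a `csv` module that handles quoting, embedded commas, and headers for you. A minimal sketch of the same `read_csv` logic using `csv.DictReader` (the float conversion mirrors `parse_values` above; the function name is illustrative):

```python
# Hedged sketch: same idea as read_csv above, using the standard library.
import csv

def read_csv_stdlib(path):
    with open(path, 'r') as f:
        reader = csv.DictReader(f)  # uses the first line as headers
        # Convert every field to a float, treating empty strings as 0.0
        return [{key: float(value) if value != '' else 0.0
                 for key, value in row.items()}
                for row in reader]

print(read_csv_stdlib('./data/loans2.txt'))
```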
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
e727586dcb9784ecc97a69c7862978260c710d28
6,848
ipynb
Jupyter Notebook
docs/!ml/notebooks/One Hot encoding.ipynb
a-mt/dev-roadmap
0484e018b2a51019577b0f2caafa6182bce689d1
[ "MIT" ]
1
2019-10-28T05:40:06.000Z
2019-10-28T05:40:06.000Z
docs/!ml/notebooks/One Hot encoding.ipynb
a-mt/dev-roadmap
0484e018b2a51019577b0f2caafa6182bce689d1
[ "MIT" ]
null
null
null
docs/!ml/notebooks/One Hot encoding.ipynb
a-mt/dev-roadmap
0484e018b2a51019577b0f2caafa6182bce689d1
[ "MIT" ]
null
null
null
21.948718
102
0.428884
[ [ [ "corpus = [\n \"another five fish find another faraway fish\",\n \"i love fantastic flying fish\"\n]", "_____no_output_____" ] ], [ [ "## Bag of words", "_____no_output_____" ] ], [ [ "from sklearn.feature_extraction.text import CountVectorizer", "_____no_output_____" ], [ "vectorizer = CountVectorizer(\n tokenizer=None,\n token_pattern=r\"(?u)\\b\\w+\\b\"\n)\nvectorizer.fit_transform(corpus).toarray()", "_____no_output_____" ], [ "vectorizer.vocabulary_", "_____no_output_____" ] ], [ [ "## TF-IDF\n\nhttps://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/text.py#L609", "_____no_output_____" ] ], [ [ "from sklearn.feature_extraction.text import TfidfVectorizer", "_____no_output_____" ], [ "vectorizer = TfidfVectorizer(\n norm=None,\n token_pattern=r\"(?u)\\b\\w+\\b\"\n)\nvectorizer.fit_transform(corpus).toarray()", "_____no_output_____" ] ], [ [ "## Integer encoding", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import LabelEncoder\nimport numpy as np", "_____no_output_____" ], [ "tokens = [txt.split(' ') for txt in corpus]\ntokens", "_____no_output_____" ], [ "vocab = np.unique(np.concatenate(tokens))\nvocab", "_____no_output_____" ], [ "vectorizer = LabelEncoder()\nvectorizer.fit(vocab)\n[vectorizer.transform(x) for x in tokens]", "_____no_output_____" ] ], [ [ "## One-hot encoding", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import OneHotEncoder", "_____no_output_____" ], [ "vocab.reshape(-1,1)", "_____no_output_____" ], [ "vectorizer = OneHotEncoder(handle_unknown='ignore', sparse=False)\nvectorizer.fit(vocab.reshape(-1,1))\n[vectorizer.transform(np.array(x).reshape(-1,1)) for x in tokens]", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7275c30fa6d2e3cc5070cdf06020ae9346e4375
1,594
ipynb
Jupyter Notebook
2021_Math/m11.ipynb
AramMoon/Math
d248b0375a62573a11f4ba1216172888ce6fd65a
[ "MIT" ]
null
null
null
2021_Math/m11.ipynb
AramMoon/Math
d248b0375a62573a11f4ba1216172888ce6fd65a
[ "MIT" ]
null
null
null
2021_Math/m11.ipynb
AramMoon/Math
d248b0375a62573a11f4ba1216172888ce6fd65a
[ "MIT" ]
null
null
null
19.439024
109
0.493726
[ [ [ "import math\n\nn = 1000000\n\ns = 0\nfor k in range(1, n+1):\n v = math.sqrt( 3 * n / (3*n + k))\n s += v\nr = s / n\nprint(r)", "0.9282031632882576\n" ], [ "print(4 * math.sqrt(3) - 6)\nprint(math.sqrt(3) - 1)\nprint(5 * math.sqrt(3) - 8)\nprint(2 * math.sqrt(3) - 3)\nprint(3 * math.sqrt(3) - 5)", "0.9282032302755088\n0.7320508075688772\n0.6602540378443855\n0.4641016151377544\n0.19615242270663202\n" ] ], [ [ "답 1 $ 4 \\sqrt3 - 6 $", "_____no_output_____" ] ] ]
[ "code", "markdown" ]
[ [ "code", "code" ], [ "markdown" ] ]
e7275da67f168e1bd6d7021be1628b66b2b4829d
20,435
ipynb
Jupyter Notebook
src/templates/v0.1.9/modules/counting/collocation.ipynb
whatevery1says/we1s-templates
ce16ae4a39e3286ed7d9bf4a95bff001ac2d123e
[ "MIT" ]
null
null
null
src/templates/v0.1.9/modules/counting/collocation.ipynb
whatevery1says/we1s-templates
ce16ae4a39e3286ed7d9bf4a95bff001ac2d123e
[ "MIT" ]
null
null
null
src/templates/v0.1.9/modules/counting/collocation.ipynb
whatevery1says/we1s-templates
ce16ae4a39e3286ed7d9bf4a95bff001ac2d123e
[ "MIT" ]
null
null
null
45.818386
739
0.625055
[ [ [ "# Collocation Metrics for Bi- and Trigrams\n\nCollocation is another way of discussing co-occurrence; in natural language processing, the term \"collocation\" usually refers to phrases of two or more tokens that commonly occur together in a given context. You can use this notebook to understand how common certain bi- and trigrams are in your project. Generally speaking, the more tokens you have in your project, and the larger your project data is, the more meaningful these metrics will be. \n\nFor a brief introduction to the concept of collocation in natural language processing and to some of the metrics used in this notebook, see <a href=\"https://medium.com/@nicharuch/collocations-identifying-phrases-that-act-like-individual-words-in-nlp-f58a93a2f84a\" target=\"_blank\">Collocations</a>. More in-depth explanations can be found in a <a href=\"https://nlp.stanford.edu/fsnlp/promo/colloc.pdf\" target=\"_blank\">NLP textbook chapter on collocations</a> and in Gerlof Bouma's <a href=\"https://www.semanticscholar.org/paper/Normalized-%28pointwise%29-mutual-information-in-Bouma/15218d9c029cbb903ae7c729b2c644c24994c201?p2df\" target=\"_blank\">Normalized (Pointwise) Mutual Information in Collocation Extraction</a>.\n\nThis notebook allows you to calculate five different collocation metrics: 1) Likelihood ratio; 2) Mutual information (MI) scores; 2) Pointwise mutual information (PMI) scores; 4) Student's t-test; and 5) Chi-squared test.\n\n<strong>Important:</strong> Collocation metrics are only useful when you can tokenize on bi- and trigrams. Therefore, this notebook assumes your documents include full-text data, and that this data is stored as a string in the `content` field of each document (see the **Settings** cell).\n\n### Technical Note\n\nThis notebook uses the NLTK package to build a custom tokenizer to tokenize project uni-, bi-, and trigrams. This tokenizer differs from the one used in the WE1S preprocessing pipeline. See the module's <a href=\"README.md\" target=\"_blank\">README.md</a> file for more information.\n\n### INFO\n\n__author__ = 'Lindsay Thomas' \n__copyright__ = 'copyright 2020, The WE1S Project' \n__license__ = 'MIT' \n__version__ = '2.0' \n__email__ = '[email protected]'", "_____no_output_____" ], [ "## Settings", "_____no_output_____" ] ], [ [ "# Python imports\nimport os\nimport csv\nfrom pathlib import Path\nfrom IPython.display import display, HTML\n\n# Import scripts\n%run scripts/count_tokens.py\n\n# Define paths\ncurrent_dir = %pwd\ncurrent_pathobj = Path(current_dir)\nproject_dir = str(current_pathobj.parent.parent)\nproject_name = os.path.basename(project_dir)\ncurrent_reldir = current_dir.split(\"/write/\")[1]\ndata_dir = project_dir + '/project_data'\njson_dir = project_dir + '/project_data/json'\ncontent_field = 'content'\nstopword_file = '/home/jovyan/write/pub/templates/project_template/modules/topic_modeling/scripts/we1s_standard_stoplist.txt'\n\ndisplay(HTML('<p style=\"color: green;\"><strong>Setup complete.</strong></p>'))", "_____no_output_____" ] ], [ [ "## 1. Configure Code\n\nYou must run all of the cells in this \"Configure Code\" section, even if you do not change the values.", "_____no_output_____" ], [ "### Set Tokenization Length\nConfigure the `set_length` variable below according to the length of ngram you are analyzing. Since collocations always involve 2 or more words, this section of the notebook only works with bigrams and trigrams. 
The default is bigrams; to count trigrams, comment out the bigram line, and uncomment the trigram line. \n\n**Note:** Because this code does not strip hyphens, hyphenated words like \"first-generation\" are considered unigrams.", "_____no_output_____" ] ], [ [ "# Choose to analyze bigrams, or trigrams\nset_length = 'bigram'\n# set_length = 'trigram'\n\n\nif set_length not in ['bigram', 'trigram']:\n display(HTML(\"<p style=\\\"color: red;\\\">The <code>set_length</code> variable must be <code>'bigram'</code> or <code>'trigram'</code>.</p>\"))\nelse:\n msg = 'You have set the <code>set_length</code> variable to <code>' + set_length + '</code>.'\n display(HTML('<p style=\"color: green;\">' + msg + '</p>'))", "_____no_output_____" ] ], [ [ "### Configure Punctuation Setting\n\nThis cell strips common punctuation from project documents. It will **NOT** strip hyphens, single or double, in order to account for hyphenated words and phrases such as \"first-generation\". Because this punctuation list is bespoke and not standardized (standardized options strip hyphens), some punctuation marks or other non-Unicode characters may make it through. You do not need to change anything about the below cell (unless you are interested in the frequency of punctuation marks, or @ signs, etc.), but you do need to run it. If you do not want to remove punctuation from your documents, you should set the `punctuations` variable to an empty string by uncommenting the line that says `punctuations = ''` in the cell below.", "_____no_output_____" ] ], [ [ "# Define punctuation to strip\npunctuations = \"_______________________\\'m\\'n\\'ve!()[]{};:\\'\\\"\\,<>./?@#$%^&*_~''``''\"\n\n# To strip no punctuation, uncomment the line below\n# punctuations = ''\n\nif punctuations == '':\n display(HTML('<p style=\"color: red;\">You have elected not to strip any punctuation.</p>'))\nelse:\n msg = 'You have set the <code>punctuations</code> variable to <code>' + punctuations + '</code>.'\n display(HTML('<p style=\"color: green;\">' + msg + '</p>')) ", "_____no_output_____" ] ], [ [ "### Configure Stop Word Setting\n\nThe default setting is to delete stop words from your data using the WE1S standard stoplist. You can view this list in you project's `modules/topic_modeling/scripts` folder. You can edit this file for your project or create a custom stoplist. If you use a custom list, make sure that it is a plain text file with one word per line. Upload the file to your project and configure the `stopword_file` variable in the **Settings** cell to indicate the path to your custom stop word file.\n\nIf your data has already had the stop words you want removed or if you do not want to remove stop words, change the value of `set_stopwords` to `False`.\n\nIt is generally recommended to delete stop words from a document before obtaining bi- and/or trigram frequencies. This will result in \"inexact\" bi- and trigrams, however, as any stop words will be deleted *before* tokenization into bi- or trigrams. If you are interested in specific bi- or trigrams that contain stop words, such as \"first\" in \"first generation\" (without a hyphen), you may want to create a custom stop word list.", "_____no_output_____" ] ], [ [ "# Delete stopwords from content fields before obtaining word frequencies.\n# If set to True, stop words will be deleted. 
If set to False, stop words will not be deleted.\nset_stopwords = True\n\nif set_stopwords == True:\n    display(HTML('<p style=\"color: green;\">You have elected to strip stopwords.</p>'))\nelse:\n    display(HTML('<p style=\"color: red;\">You have elected not to strip stopwords.</p>'))", "_____no_output_____" ] ], [ [ "## 2. Calculate Token Frequencies\n\nThe cell below obtains the frequency values you need to calculate all collocation metrics below. To run any cells in **Section 3** of this notebook, you must run this cell.", "_____no_output_____" ] ], [ [ "# Obtain token frequencies\nall_finders_freq, all_finders_list, freq, bad_jsons = frequency_dir(json_dir, content_field, set_stopwords, \n                                                                    punctuations, set_length, stopword_file)\n\nif len(bad_jsons) > 0:\n    msg = 'Token frequency calculations complete. Warning! ' + str(len(bad_jsons)) + ' documents failed to load and will not be included in the calculation. '\n    msg += 'If this number is large, this may significantly affect your results.'\n    display(HTML('<p style=\"color: red;\">' + msg + '</p>'))\n\nif all_finders_list != []:\n    msg = 'Token frequency calculations complete. Calculate collocation metrics in the next section.'\n    display(HTML('<p style=\"color:green;\">' + msg + '</p>'))\nif all_finders_list == []:\n    display(HTML('<p style=\"color: red;\">No results found.</p>'))", "_____no_output_____" ] ], [ [ "## 3. Calculate Collocation Metrics\nThis section of the notebook allows you to calculate five different collocation metrics: 1) Likelihood ratio; 2) Mutual information (MI) scores; 3) Pointwise mutual information (PMI) scores; 4) Student's t-test; and 5) Chi-squared test. All cells in this section of the notebook rely on the calculations you performed in section 2. You must run the previous cell before you can run any cells below.\n\nFor more information about each of these collocation metrics, see this module's <a href=\"README.md\" target=\"_blank\">README.md</a> file.", "_____no_output_____" ], [ "### Select Collocation Metric\nSelect the collocation metric you would like to calculate in the cell below. You may select `'likelihood'`, `'mi'`, `'pmi'`, `'t-test'`, `'chi-square'`. If you select `pmi` or `mi`, you should select a value for `freq_filter` below.", "_____no_output_____" ] ], [ [ "# Set to 'likelihood', 'mi', 'pmi', 't-test', or 'chi-square'\nmetric = ''\n\nif metric not in ['likelihood', 'mi', 'pmi', 't-test', 'chi-square']:\n    display(HTML('<p style=\"color:red;\">The <code>metric</code> variable must be set to <code>likelihood</code>, <code>mi</code>, <code>pmi</code>, <code>t-test</code>, <code>chi-square</code>.</p>'))\nelif metric in ['mi', 'pmi']:\n    display(HTML('<p style=\"color:green;\">The <code>metric</code> variable has been set to <code>' + metric + '</code>.</p>'))\n    display(HTML('<p style=\"color:red;\">You should set a <code>freq_filter</code> value in the next cell.</p>'))\nelse:\n    display(HTML('<p style=\"color:green;\">The <code>metric</code> variable has been set to <code>' + metric + '</code>. You do not need to run the next cell.</p>'))\n    freq_filter=None", "_____no_output_____" ] ], [ [ "### Set Frequency Filter (MI and PMI Metrics Only)\n\nMI and PMI scores are sensitive to unique words, which can make results less meaningful because unique words often occur much less frequently throughout a corpus. To account for this, you can set a frequency filter so that you only measure MI or PMI scores for bi- or trigrams that occur a certain number of times. 
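For reference (the standard definition, not something specific to this module), the pointwise mutual information of a bigram $(w_1, w_2)$ is

$$\mathrm{PMI}(w_1, w_2) = \log\frac{P(w_1, w_2)}{P(w_1)\,P(w_2)}$$

A bigram of two rare words has a tiny denominator, so a single chance co-occurrence can produce a very high score; the frequency cutoff described below guards against this.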
\n\nThe `freq_filter` variable is set to `None` by default below. If you would like to apply a frequency filter, please provide a value for `freq_filter`, such as `freq_filter=5`. This value determines the frequency cutoff. \n\nIf you are NOT calculating MI or PMI scores, you do not need to run the cell below. ", "_____no_output_____" ] ], [ [ "# Set frequency cutoff \nfreq_filter = None\ndisplay(HTML('<p style=\"color:green;\">You have set the <code>freq_filter</code> variable to <code>' + str(freq_filter) + '</code>.</p>'))", "_____no_output_____" ] ], [ [ "## Perform Calculations", "_____no_output_____" ] ], [ [ "# Perform Calculations\ntry:\n    ResultsTable, all_scores = collocation_metric(set_length, all_finders_list, metric, freq_filter=freq_filter)\n    display(HTML('<p style=\"color: green;\">Calculations complete. View results in the below cells.</p>'))\nexcept NameError:\n    display(HTML('<p style=\"color:red;\">You have not provided values for all required variables. Check sections 1-2 of this notebook.</p>'))", "_____no_output_____" ] ], [ [ "### View Dataframe of Scores\n\nThe below cell uses a <a href=\"https://github.com/quantopian/qgrid\" target=\"_blank\">QGrid</a> widget to display results in a dataframe, sorted from highest to lowest. Click a column label to sort by that column. Click it again to reverse sort. Click the filter icon to the right of the column label to apply filters (for instance, reducing the table to only documents from specific sources). You can re-order the columns by dragging the column label.", "_____no_output_____" ] ], [ [ "# Display dataframe\nqgrid_widget = qgrid.show_grid(ResultsTable, grid_options=grid_options, show_toolbar=False)\n\nqgrid_widget", "_____no_output_____" ] ], [ [ "### Save Dataframe to CSV\n\nThe cell below will save the version of the dataframe you see displayed in the cell above. To save the full version of the dataframe (disregarding any filtering, etc. you have done in the qgrid dataframe), skip the next cell, uncomment the code in the cell below it, and run that cell. \n\nEither cell will create a csv file in this module directory called whatever value you assign to the `csv_file` variable.", "_____no_output_____" ] ], [ [ "# Configure csv file name\ncsv_file = ''\n\n# Save version of dataframe you see above to csv\nif csv_file != '':\n    changed_df = qgrid_widget.get_changed_df()\n    changed_df.to_csv(csv_file, index_label = 'Index')\n    display(HTML('<p style=\"color:green;\">Csv file called <code>' + csv_file + '</code> created.</p>'))\nelif csv_file == '':\n    display(HTML('<p style=\"color:red;\">You have not provided a filename for the csv file.</p>'))", "_____no_output_____" ], [ "## Configure csv file name\n# csv_file = ''\n\n## save the above dataframe to a csv file\n# if csv_file != '':\n#     ResultsTable.to_csv(csv_file, index_label = 'Index')\n#     display(HTML('<p style=\"color:green;\">Csv file called <code>' + csv_file + '</code> created.</p>'))\n# elif csv_file == '':\n#     display(HTML('<p style=\"color:red;\">You have not provided a filename for the csv file.</p>'))", "_____no_output_____" ] ], [ [ "### View Scores for a Specific Token and Save to CSV\n\nYou can check to see what other tokens are highly associated with your chosen token across your project, according to your selected metric. Enter only a single word below; it does not work if you enter a bigram or a trigram. 
Enter that word below following the format `token = 'example'`.", "_____no_output_____" ] ], [ [ "# Configure token\ntoken = ''\n\nif token == '' or token == None:\n    display(HTML('<p style=\"color:red;\">You have not selected a token.</p>'))\nelse:\n    check = token.split(' ')\n    if len(check) > 1:\n        display(HTML('<p style=\"color:red;\">Your <code>token</code> can only be a unigram.</p>'))\n    else:\n        display(HTML('<p style=\"color:green;\">You have set the <code>token</code> variable to <code>' + token + '</code>.</p>'))", "_____no_output_____" ] ], [ [ "You may also choose to save this information to a csv file by changing the value of the `save_csv` variable to `True`.\nThis will create a csv file in this module's directory called whatever value you assign to the `csv_file` variable. If you do not wish to save a csv file set the value of the `csv_file` variable to `None`.", "_____no_output_____" ] ], [ [ "# Select True or False\nsave_csv = False\n# Give the csv_file a name or select None\ncsv_file = None\n\nif save_csv == False and csv_file == None:\n    display(HTML('<p style=\"color:green;\">You have elected not to save a csv file.</p>'))\nelif save_csv == True and csv_file != None:\n    display(HTML('<p style=\"color:green;\">You have elected to save a csv file and have set <code>csv_file</code> to <code>' + str(csv_file) + '</code>.</p>'))\nelif save_csv == False and csv_file != None:\n    display(HTML('<p style=\"color:red;\">You have given the csv file a name but set <code>save_csv</code> to <code>' + str(save_csv) + '</code></p>'))\nelif save_csv == True and csv_file == None:\n    display(HTML('<p style=\"color:red;\">You have set <code>save_csv</code> to <code>' + str(save_csv) + '</code> but not provided a value for <code>csv_file</code>.</p>')) ", "_____no_output_____" ] ], [ [ "Run the cell below to see the other token or tokens (depending on whether you have calculated bi- or trigram frequencies) that your chosen token occurs with throughout your project, along with the scores for each grouping. If you have not elected to save results to csv they will print to cell output.", "_____no_output_____" ] ], [ [ "# Get token_scores\ntoken_scores = order_collocation_scores(all_scores, token, save_csv, csv_file)\n\nif save_csv == True:\n    display(HTML('<p style=\"color:green;\">CSV file of results called <code>' + csv_file + '</code> created.</p>'))\nelif token == '':\n    display(HTML('<p style=\"color:red;\">You have not selected a token.</p>'))\nelse:\n    print(token_scores)", "_____no_output_____" ] ] ]
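For readers who want to see the kind of computation this module wraps, here is a minimal, self-contained sketch using NLTK's collocation finders directly (the token list is illustrative and is not a variable from this module):

```python
# Hedged sketch of bigram collocation scoring with NLTK.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = ['the', 'first', 'generation', 'student', 'went', 'to',
          'the', 'university'] * 50  # toy data

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(5)  # analogous to freq_filter above

# Top bigrams under two of the metrics discussed in this notebook
print(finder.nbest(measures.pmi, 5))
print(finder.nbest(measures.likelihood_ratio, 5))
```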
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7275f1225105157b83036e809b35ccf9a4be341
377,235
ipynb
Jupyter Notebook
03-Gaussians.ipynb
simonkamronn/Kalman-and-Bayesian-Filters-in-Python
5240944dd45415909228c233ddfb7f3c19e51189
[ "CC-BY-4.0" ]
1
2019-07-02T01:28:52.000Z
2019-07-02T01:28:52.000Z
03-Gaussians.ipynb
simonkamronn/Kalman-and-Bayesian-Filters-in-Python
5240944dd45415909228c233ddfb7f3c19e51189
[ "CC-BY-4.0" ]
null
null
null
03-Gaussians.ipynb
simonkamronn/Kalman-and-Bayesian-Filters-in-Python
5240944dd45415909228c233ddfb7f3c19e51189
[ "CC-BY-4.0" ]
null
null
null
218.181029
50,640
0.886739
[ [ [ "[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)", "_____no_output_____" ], [ "# Gaussian Probabilities", "_____no_output_____" ] ], [ [ "#format the book\n%matplotlib inline\nfrom __future__ import division, print_function\nfrom book_format import load_style\nload_style()", "_____no_output_____" ] ], [ [ "## Introduction", "_____no_output_____" ], [ "The last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us \"it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79).\" That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.\n\nWe desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. As you might guess from the chapter name, Gaussian distributions provide all of these features.", "_____no_output_____" ], [ "## Mean, Variance, and Standard Deviations\n\n### Random Variables\n\n\nEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get 1 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. \n\nThis combination of values and associated probabilities is called a *random variable*. *Random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.\n\nWhile we are defining things, the range of values is called the *sample space*. For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.\n\nAnother example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.\n\nRandom variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.\n\nDo not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. 
\n\nIn statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. In later chapters we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. ", "_____no_output_____" ], [ "## Probability Distribution\n\n\nThe *probability distribution* gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:\n\n|Value|Probability|\n|-----|-----------|\n|1|1/6|\n|2|1/6|\n|3|1/6|\n|4|1/6|\n|5|1/6|\n|6|1/6|\n\nSome sources call this the *probability function*. Using ordinary function notation, we would write:\n\n$$P(X{=}4) = f(4) = \\frac{1}{6}$$\n\nThis states that the probability of the die landing on 4 is $\\frac{1}{6}$. $P(X{=}x_k)$ is notation for \"the probability of $X$ being $x_k$\". Some texts use $Pr$ or $Prob$ instead of $P$.\n\n\nAnother example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as\n\n$$P(X{=}H) = 0.5\\\\P(X{=}T)=0.5$$\n\nSample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.\n\nThe probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution* and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.\n\nTo be a probability distribution the probability of each value $x_i$ must be $P(X{=}x_i) \\ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as\n\n$$\\sum\\limits_u P(X{=}u)= 1$$\n\nfor discrete distributions, and as \n\n$$\\int P(X{=}u) \\,du= 1$$\n\nfor continuous distributions.", "_____no_output_____" ], [ "### The Mean, Median, and Mode of a Random Variable\n\nGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a *measure of central tendency*. For example we will want to know the *average* height of the students. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. 
If the heights of the students in meters are \n\n$$X = \\{1.8, 2.0, 1.7, 1.9, 1.6\\}$$\n\nwe compute the mean as\n\n$$\\mu = \\frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$\n\nIt is traditional to use the symbol $\\mu$ (mu) to denote the mean.\n\nWe can formalize this computation with the equation\n\n$$ \\mu = \\frac{1}{n}\\sum^n_{i=1} x_i$$\n\nNumPy provides `numpy.mean()` for computing the mean.", "_____no_output_____" ] ], [ [ "import numpy as np\nx = [1.85, 2.0, 1.7, 1.9, 1.6]\nprint(np.mean(x))", "1.81\n" ] ], [ [ "The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.\n\nFinally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.\n\nNumpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted.", "_____no_output_____" ] ], [ [ "print(np.median(x))", "1.85\n" ] ], [ [ "## Expected Value of a Random Variable\n\nThe *expected value* of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What would we *expect* $x$ to be, on average?\n\nIt would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.\n\nNow suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute\n\n$$\\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$\n\nHere I have introduced the notation $\\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.\n\nWe can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. 
This gives us\n\n$$\\mathbb E[X] = \\sum_{i=1}^n p_ix_i$$\n\nA trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:\n\n$$\\mathbb E[X] = \\sum_{i=1}^n p_ix_i = \\sum_{i=1}^n \\frac{1}{n}x_i = \\mu_x$$\n\nIf $x$ is continuous we substitute the sum for an integral, like so\n\n$$\\mathbb E[X] = \\int_{-\\infty}^\\infty x\\, f(x) \\,dx$$\n\nwhere $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.", "_____no_output_____" ], [ "### Variance of a Random Variable\n\nThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights:", "_____no_output_____" ] ], [ [ "X = [1.8, 2.0, 1.7, 1.9, 1.6]\nY = [2.2, 1.5, 2.3, 1.7, 1.3]\nZ = [1.8, 1.8, 1.8, 1.8, 1.8]", "_____no_output_____" ] ], [ [ "Using NumPy we see that the mean height of each class is the same. ", "_____no_output_____" ] ], [ [ "print(np.mean(X))\nprint(np.mean(Y))\nprint(np.mean(Z))", "1.8\n1.8\n1.8\n" ] ], [ [ "The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.", "_____no_output_____" ], [ "The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. \n\nStatistics has formalized this concept of measuring variation into the notion of *standard deviation* and *variance*. The equation for computing the *variance* is\n\n$$\\mathit{VAR}(X) = E[(X - \\mu)^2]$$\n\nIgnoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean (squared, of course). We have the formula for the expected value $E[X] = \\sum\\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get\n\n$$\\mathit{VAR}(X) = \\frac{1}{n}\\sum_{i=1}^n (x_i - \\mu)^2$$\n\nLet's compute the variance of the three classes to see what values we get and to become familiar with this concept.\n\nThe mean of $X$ is 1.8 ($\\mu_x = 1.8$) so we compute\n\n$$ \n\\begin{aligned}\n\\mathit{VAR}(X) &=\\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\\\\n&= \\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\\\n\\mathit{VAR}(X)&= 0.02 \\, m^2\n\\end{aligned}$$\n\nNumPy provides the function `var()` to compute the variance:", "_____no_output_____" ] ], [ [ "print(np.var(X), \"meters squared\")", "0.02 meters squared\n" ] ], [ [ "This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:\n\n$$\\sigma = \\sqrt{\\mathit{VAR}(X)}=\\sqrt{\\frac{1}{n}\\sum_{i=1}^n(x_i - \\mu)^2}$$\n\nIt is typical to use $\\sigma$ for the *standard deviation* and $\\sigma^2$ for the *variance*. 
In most of this book I will be using $\\sigma^2$ instead of $\\mathit{VAR}(X)$ for the variance; they symbolize the same thing.\n\nFor the first class we compute the standard deviation with\n\n$$ \n\\begin{aligned}\n\\sigma_x &=\\sqrt{\\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\\\\n&= \\sqrt{\\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\\\n\\sigma_x&= 0.1414\n\\end{aligned}$$\n\nWe can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation.", "_____no_output_____" ] ], [ [ "print('std {:.4f}'.format(np.std(X)))\nprint('var {:.4f}'.format(np.std(X)**2))", "std 0.1414\nvar 0.0200\n" ] ], [ [ "And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.", "_____no_output_____" ], [ "What does the standard deviation signify? It tells us how much the heights vary amongst themselves. \"How much\" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. \n\nWe can view this in a plot:", "_____no_output_____" ] ], [ [ "from book_format import set_figsize, figsize\nfrom gaussian_internal import plot_height_std\nimport matplotlib.pyplot as plt\n\nwith figsize(y=2):\n plot_height_std(X)", "_____no_output_____" ] ], [ [ "For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\\pm1\\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.\n\n> We write one standard deviation as $1\\sigma$, which is pronounced \"one standard deviation\", not \"one sigma\". Two standard deviations is $2\\sigma$, and so on.", "_____no_output_____" ] ], [ [ "from numpy.random import randn\ndata = [1.8 + .1414*randn() for i in range(100)]\n\nwith figsize(y=3.):\n plot_height_std(data, lw=2)\n\nprint('mean = {:.3f}'.format(np.mean(data)))\nprint('std = {:.3f}'.format(np.std(data)))", "_____no_output_____" ] ], [ [ "We can see by eye that roughly 68% of the heights lie within $\\pm1\\sigma$ of the mean 1.8.", "_____no_output_____" ], [ "We'll discuss this in greater depth soon. For now let's compute the standard deviation for \n\n$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$\n\nThe mean of $Y$ is $\\mu=1.8$ m, so \n\n$$ \n\\begin{aligned}\n\\sigma_y &=\\sqrt{\\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\\\\n&= \\sqrt{0.152} = 0.39 \\ m\n\\end{aligned}$$\n\nWe will verify that with NumPy with", "_____no_output_____" ] ], [ [ "print('std of Y is {:.4f} m'.format(np.std(Y)))", "std of Y is 0.3899 m\n" ] ], [ [ "This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.\n\nFinally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. 
We show this to be true with\n\n$$ \n\\begin{aligned}\n\\sigma_z &=\\sqrt{\\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\\\\n&= \\sqrt{\\frac{0+0+0+0+0}{5}} \\\\\n\\sigma_z&= 0.0 \\ m\n\\end{aligned}$$", "_____no_output_____" ] ], [ [ "print(np.std(Z))", "0.0\n" ] ], [ [ "### Why the Square of the Differences\n\nWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$", "_____no_output_____" ] ], [ [ "with figsize(y=2.5):\n    X = [3, -3, 3, -3]\n    mean = np.average(X)\n    for i in range(len(X)):\n        plt.plot([i ,i], [mean, X[i]], color='k')\n    plt.axhline(mean)\n    plt.xlim(-1, len(X))\n    plt.tick_params(axis='x', labelbottom='off')", "_____no_output_____" ] ], [ [ "If we didn't take the square of the differences the signs would cancel everything out:\n\n$$\\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$\n\nThis is clearly incorrect, as there is more than 0 variance in the data. \n\nMaybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the correct formula we get a variance of 12.25 for $Y$ (versus 9 for $X$), which reflects its larger variation.\n\nThis is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$.", "_____no_output_____" ] ], [ [ "X = [1, -1, 1, -2, 3, 2, 100]\nprint('Variance of X = {:.2f}'.format(np.var(X)))", "Variance of X = 1210.69\n" ] ], [ [ "Is this \"correct\"? You tell me. Without the outlier of 100 we get $\\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3].", "_____no_output_____" ], [ "## Gaussians", "_____no_output_____" ], [ "We are now ready to learn about Gaussians. Let's remind ourselves of the motivation for this chapter.\n\n> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.", "_____no_output_____" ], [ "Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about.", "_____no_output_____" ] ], [ [ "from filterpy.stats import plot_gaussian_pdf\nplot_gaussian_pdf(mean=1.8, variance=0.1414**2, \n                  xlabel='Student Height', ylabel='pdf');", "_____no_output_____" ] ], [ [ "This curve is a *probability density function* or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.1 m.\n\n> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the \nSupporting_Notebooks folder. 
You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].\n\nThis may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.\n\nThis curve is not unique to heights — a vast number of natural phenomena exhibit this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.\n\nTo further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter!", "_____no_output_____" ] ], [ [ "import book_plots\nbelief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0]\nwith figsize(y=1.5):\n    book_plots.bar_plot(belief)", "_____no_output_____" ] ], [ [ "## Nomenclature", "_____no_output_____" ], [ "A bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\\infty..\\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this:", "_____no_output_____" ] ], [ [ "with figsize(y=3.):\n    plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)')", "_____no_output_____" ] ], [ [ "The y-axis depicts the *probability density* — the relative number of cars that are traveling at the speed shown on the x-axis.\n\nYou may object that human heights or automobile speeds cannot be less than zero, let alone $-\\infty$ or $\\infty$. This is true, but this is a common limitation of mathematical modeling. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. \n\nYou will see these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. 
Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both common shortcut names for the *Gaussian distribution*. ", "_____no_output_____" ], [ "## Gaussian Distributions", "_____no_output_____" ], [ "Let's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:\n\n$$ \nf(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]\n$$\n\n$\exp[x]$ is notation for $e^x$.", "_____no_output_____" ], [ "Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.\n\n> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.\n\n```python\n%load -s gaussian stats.py\n\ndef gaussian(x, mean, var):\n    \"\"\"returns normal distribution for x given a \n    gaussian with the specified mean and variance. \n    \"\"\"\n    return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) /\n            math.sqrt(2*math.pi*var))\n\n```", "_____no_output_____" ], [ "We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ", "_____no_output_____" ] ], [ [ "from filterpy.stats import gaussian, norm_cdf\nplot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$');", "_____no_output_____" ] ], [ [ "What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called the *Central Limit Theorem* states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can \"sort of\" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. \n\nRecall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of the reading being *exactly* 22°C is 0% because there are an infinite number of values the reading can take.\n\nWhat is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. \n\nWe can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. 
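If you want to see that for yourself, here is a small sketch. It assumes `numpy` and `matplotlib.pyplot` are available as `np` and `plt`, as elsewhere in this notebook, and reuses the `gaussian()` function imported above:\n\n```python\nsamples = np.random.normal(loc=22., scale=2., size=100000)  # std of 2 means a variance of 4\nplt.hist(samples, bins=100, density=True, alpha=0.5)  # normalized histogram of the readings\nxs = np.arange(14, 30, 0.1)\nplt.plot(xs, gaussian(xs, 22, 4))  # the pdf plotted above\n```\n\nWith this many samples the histogram hugs the pdf closely, which is exactly the frequentist reading of the curve. 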
", "_____no_output_____" ], [ "How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian \n\n$$ \\int^{x_1}_{x_0} \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{-\\frac{1}{2}{(x-\\mu)^2}/\\sigma^2 } dx$$\n\nI wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute", "_____no_output_____" ] ], [ [ "print('Probability of range 21.5 to 22.5 is {:.2f}%'.format(\n norm_cdf((21.5, 22.5), 22,4)*100))\nprint('Probability of range 23.5 to 24.5 is {:.2f}%'.format(\n norm_cdf((23.5, 24.5), 22,4)*100))", "Probability of range 21.5 to 22.5 is 19.74%\nProbability of range 23.5 to 24.5 is 12.10%\n" ] ], [ [ "The mean ($\\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. \n\nThe notation for a normal distribution for a random variable $X$ is $X \\sim\\ \\mathcal{N}(\\mu,\\sigma^2)$ where $\\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as\n\n$$\\text{temp} \\sim \\mathcal{N}(22,4)$$\n\nThis is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\\mu=22$ and $\\sigma^2=4$ I can compute the distribution of measurements for over any range.", "_____no_output_____" ], [ "## The Variance and Belief", "_____no_output_____" ], [ "Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\\infty$ to $\\infty$)", "_____no_output_____" ] ], [ [ "print(norm_cdf((-1e8, 1e8), mu=0, var=4))", "1.0\n" ] ], [ [ "This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.\n\nLet's look at that graphically:", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n \nxs = np.arange(15, 30, 0.05)\nplt.plot(xs, gaussian(xs, 23, 0.05), label='$\\sigma^2$=0.05', c='b')\nplt.plot(xs, gaussian(xs, 23, 1), label='$\\sigma^2$=1', ls=':', c='b')\nplt.plot(xs, gaussian(xs, 23, 5), label='$\\sigma^2$=5', ls='--', c='b')\nplt.legend();", "_____no_output_____" ] ], [ [ "What is this telling us? The Gaussian with $\\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. 
$\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.\n\nIf we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and the curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.\n\nAn equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement; they express the *precision* of the measurement; and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.\n\nI'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using.\n\n## The 68-95-99.7 Rule\n\nIt is worth spending a few words on the standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the 68-95-99.7 rule. If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). \n\nFinally, these percentages are not arbitrary numbers, and the units matter. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units of meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.", "_____no_output_____" ], [ "The following graph depicts the relationship between the standard deviation and the normal distribution. ", "_____no_output_____" ] ], [ [ "from gaussian_internal import display_stddev_plot\nwith figsize(y=3):\n    display_stddev_plot()", "_____no_output_____" ] ], [ [ "## Interactive Gaussians", "_____no_output_____" ], [ "For those who are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. 
Adjusting $\\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\\sigma^2$ will make the bell curve thicker and thinner.", "_____no_output_____" ] ], [ [ "import math\nfrom IPython.html.widgets import interact, interactive, fixed\n\nset_figsize(y=3)\ndef plt_g(mu,variance):\n xs = np.arange(2, 8, 0.1)\n ys = gaussian(xs, mu, variance)\n plt.plot(xs, ys)\n plt.ylim((0, 1))\n\ninteract (plt_g, mu=(0., 10), variance = (.2, 1.));", "_____no_output_____" ] ], [ [ "Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\\mu=5$ and the variance is modified.\n\n<img src='animations/04_gaussian_animate.gif'>", "_____no_output_____" ], [ "## Computational Properties of the Gaussian", "_____no_output_____" ], [ "Recall how our discrete Bayesian filter worked. We had a vector implemented as a NumPy array representing our belief at a certain moment in time. When we integrated another measurement into our belief using the `update()` function we had to multiply probabilities together, and when we performed the motion step using the `predict()` function we had to shift and add probabilities. I've promised you that the Kalman filter uses essentially the same process, and that it uses Gaussians instead of histograms, so you might reasonable expect that we will be multiplying, adding, and shifting Gaussians in the Kalman filter.\n\nA typical textbook would directly launch into a multi-page proof of the behavior of Gaussians under these operations, but I don't see the value in that right now. I think the math will be much more intuitive and clear if we just start developing a Kalman filter using Gaussians. I will provide the equations for multiplying and shifting Gaussians at the appropriate time. You will then be able to develop a physical intuition for what these operations do, rather than be forced to digest a lot of fairly abstract math.\n\nThe key point, which I will only assert for now, is that all the operations are very simple, and that they preserve the properties of the Gaussian. This is somewhat remarkable, in that the Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. ", "_____no_output_____" ], [ "## Computing Probabilities with scipy.stats", "_____no_output_____" ], [ "In this chapter I used code from FilterPy to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with \"batteries included\" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.\n\nThe `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. 
Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.", "_____no_output_____" ] ], [ [ "from scipy.stats import norm\nimport filterpy.stats\nprint(norm(2, 3).pdf(1.5))\nprint(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))", "0.131146572034\n0.131146572034\n" ] ], [ [ "The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:", "_____no_output_____" ] ], [ [ "n23 = norm(2, 3)\nprint('pdf of 1.5 is %.4f' % n23.pdf(1.5))\nprint('pdf of 2.5 is also %.4f' % n23.pdf(2.5))\nprint('pdf of 2 is %.4f' % n23.pdf(2))", "pdf of 1.5 is 0.1311\npdf of 2.5 is also 0.1311\npdf of 2 is 0.1330\n" ] ], [ [ "The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.", "_____no_output_____" ] ], [ [ "np.set_printoptions(precision=3, linewidth=50)\nprint(n23.rvs(size=15))", "[ 3.527  2.952  3.709  1.501 -0.532 -0.173  2.264\n  4.293  5.036  6.365  2.79   4.76  -0.052  0.789\n  2.733]\n" ] ], [ [ "We can get the *cumulative distribution function (CDF)*, which is the probability that a randomly drawn value from the distribution is less than or equal to $x$.", "_____no_output_____" ] ], [ [ "# probability that a random value is less than the mean 2\nprint(n23.cdf(2))", "0.5\n" ] ], [ [ "We can get various properties of the distribution:", "_____no_output_____" ] ], [ [ "print('variance is', n23.var())\nprint('standard deviation is', n23.std())\nprint('mean is', n23.mean())", "variance is 9.0\nstandard deviation is 3.0\nmean is 2.0\n" ] ], [ [ "## Fat Tails", "_____no_output_____" ], [ "Earlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of independent random variables will be normally distributed, regardless of how those random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. \n\nHowever, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.\n\nLet's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. 
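You can query such probabilities directly with the frozen-distribution interface shown above — a sketch; `sf()` is the *survival function*, $1 - \text{cdf}(x)$:\n\n```python\ngrades = norm(90, 13)\n# the 68-95-99.7 rule, computed exactly: ~0.9545 of scores fall within 2 standard deviations\nprint(grades.cdf(90 + 2*13) - grades.cdf(90 - 2*13))\nprint(grades.sf(150))   # P(score > 150): tiny, but not zero\nprint(grades.cdf(-10))  # P(score < -10): far tinier still, but not zero\n```\n\nBoth tail probabilities come back vanishingly small yet strictly positive. 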
The *tails* of a Gaussian distribution are infinitely long.\n\nBut for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution.", "_____no_output_____" ] ], [ [ "xs = np.arange(10, 100, 0.05)\nys = [gaussian(x, 90, 30) for x in xs]\nplt.plot(xs, ys, label='var=30')\nplt.xlim((0, 120))\nplt.ylim(0, 0.09);", "_____no_output_____" ] ], [ [ "The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a *fat tail distribution*. \n\nKalman filters use sensors to measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the *Student's $t$-distribution*. \n\nLet's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:", "_____no_output_____" ] ], [ [ "from numpy.random import randn\ndef sense():\n    return 10 + randn()*2", "_____no_output_____" ] ], [ [ "Let's plot that signal and see what it looks like.", "_____no_output_____" ] ], [ [ "zs = [sense() for i in range(5000)]\nplt.plot(zs, lw=1);", "_____no_output_____" ] ], [ [ "That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening. \n\nNow let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.", "_____no_output_____" ] ], [ [ "import random\nimport math\n\ndef rand_student_t(df, mu=0, std=1):\n    \"\"\"return random number distributed by Student's t \n    distribution with `df` degrees of freedom with the \n    specified mean and standard deviation.\n    \"\"\"\n    x = random.gauss(0, std)\n    y = 2.0*random.gammavariate(0.5*df, 2.0)\n    return x / (math.sqrt(y / df)) + mu", "_____no_output_____" ], [ "def sense_t():\n    return 10 + rand_student_t(7)*2\n\nzs = [sense_t() for i in range(5000)]\nplt.plot(zs, lw=1);", "_____no_output_____" ] ], [ [ "We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean of 10 (that is, outside the range 4 to 16).\n\nIt is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. 
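One way to see the difference is to count and size the excursions — a sketch, reusing the `sense()` and `sense_t()` functions defined above:\n\n```python\ngauss_zs = [sense() for i in range(5000)]\nt_zs = [sense_t() for i in range(5000)]\n# count samples more than 3 standard deviations (3*2 = 6) away from the signal value of 10\nprint('beyond 3 std, gaussian:', sum(abs(z - 10) > 6 for z in gauss_zs))\nprint('beyond 3 std, student-t:', sum(abs(z - 10) > 6 for z in t_zs))\nprint('worst gaussian deviation:', max(abs(z - 10) for z in gauss_zs))\nprint('worst student-t deviation:', max(abs(z - 10) for z in t_zs))\n```\n\nOn a typical run the Student's $t$ version produces more large excursions, and its worst excursion is usually noticeably larger — that is the fat tail at work, and clearly not a faithful model of any particular sensor. 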
However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. \n\nThis is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers of mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. \n\nThe code for `rand_student_t` is included in `filterpy.stats`. You may use it with\n\n```python\nfrom filterpy.stats import rand_student_t\n```", "_____no_output_____" ], [ "## Summary and Key Points", "_____no_output_____" ], [ "This chapter is a poor introduction to statistics in general. I've only covered the concepts needed to use Gaussians in the remainder of the book, no more. What I've covered will not get you very far if you intend to read the Kalman filter literature. If this is a new topic to you I suggest reading a statistics textbook. I've always liked the Schaum series for self study, and Alan Downey's *Think Stats* [5] is also very good. \n\nThe following points **must** be understood by you before we continue:\n\n* Normals express a continuous probability distribution\n* They are completely described by two parameters: the mean ($\mu$) and variance ($\sigma^2$)\n* $\mu$ is the average of all possible values\n* The variance $\sigma^2$ represents how much our measurements vary from the mean\n* The standard deviation ($\sigma$) is the square root of the variance ($\sigma^2$)\n* Many things in nature approximate a normal distribution", "_____no_output_____" ], [ "## References", "_____no_output_____" ], [ "[1] https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb\n\n[2] http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html\n\n[3] http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html\n\n[4] Huber, Peter J. *Robust Statistical Procedures*, Second Edition. Society for Industrial and Applied Mathematics, 1996.\n\n[5] Downey, Alan. *Think Stats*, Second Edition. O'Reilly Media. https://github.com/AllenDowney/ThinkStats2, http://greenteapress.com/thinkstats/", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7276c878ad7517e24c0ed15386ebba02f59691c
10,510
ipynb
Jupyter Notebook
Convolutional Neural Networks/rock-paper-scissor-identifier.ipynb
ornob39/Tensor-Flow-in-Practice-Specialization
b955658ced6231ab19a1fc1197ae6616745bfe1d
[ "MIT" ]
null
null
null
Convolutional Neural Networks/rock-paper-scissor-identifier.ipynb
ornob39/Tensor-Flow-in-Practice-Specialization
b955658ced6231ab19a1fc1197ae6616745bfe1d
[ "MIT" ]
null
null
null
Convolutional Neural Networks/rock-paper-scissor-identifier.ipynb
ornob39/Tensor-Flow-in-Practice-Specialization
b955658ced6231ab19a1fc1197ae6616745bfe1d
[ "MIT" ]
null
null
null
10,510
10,510
0.640057
[ [ [ "# DataSets\n!wget --no-check-certificate \\\n https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps.zip \\\n -O /tmp/rps.zip\n \n", "--2020-08-19 16:08:54-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps.zip\nResolving storage.googleapis.com (storage.googleapis.com)... 74.125.31.128, 172.217.204.128, 173.194.216.128, ...\nConnecting to storage.googleapis.com (storage.googleapis.com)|74.125.31.128|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 200682221 (191M) [application/zip]\nSaving to: ‘/tmp/rps.zip’\n\n/tmp/rps.zip 100%[===================>] 191.38M 204MB/s in 0.9s \n\n2020-08-19 16:08:55 (204 MB/s) - ‘/tmp/rps.zip’ saved [200682221/200682221]\n\n" ], [ "!wget --no-check-certificate \\\n https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps-test-set.zip \\\n -O /tmp/rps-test-set.zip", "--2020-08-19 16:08:56-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps-test-set.zip\nResolving storage.googleapis.com (storage.googleapis.com)... 172.217.204.128, 74.125.31.128, 172.217.203.128, ...\nConnecting to storage.googleapis.com (storage.googleapis.com)|172.217.204.128|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 29516758 (28M) [application/zip]\nSaving to: ‘/tmp/rps-test-set.zip’\n\n/tmp/rps-test-set.z 100%[===================>] 28.15M 39.6MB/s in 0.7s \n\n2020-08-19 16:08:57 (39.6 MB/s) - ‘/tmp/rps-test-set.zip’ saved [29516758/29516758]\n\n" ], [ "import os\nimport zipfile\n\nlocal_zip = '/tmp/rps.zip'\nzip_ref = zipfile.ZipFile(local_zip, 'r')\nzip_ref.extractall('/tmp/')\nzip_ref.close()\n\nlocal_zip = '/tmp/rps-test-set.zip'\nzip_ref = zipfile.ZipFile(local_zip, 'r')\nzip_ref.extractall('/tmp/')\nzip_ref.close()", "_____no_output_____" ], [ "rock_dir = os.path.join('/tmp/rps/rock')\npaper_dir = os.path.join('/tmp/rps/paper')\nscissors_dir = os.path.join('/tmp/rps/scissors')", "_____no_output_____" ], [ "import tensorflow as tf\nimport keras_preprocessing\nfrom keras_preprocessing import image\nfrom keras_preprocessing.image import ImageDataGenerator\n\nTRAINING_DIR = \"/tmp/rps/\"\ntraining_datagen = ImageDataGenerator(\n rescale = 1./255,\n\t rotation_range=40,\n width_shift_range=0.2,\n height_shift_range=0.2,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n fill_mode='nearest')\n\nVALIDATION_DIR = \"/tmp/rps-test-set/\"\nvalidation_datagen = ImageDataGenerator(rescale = 1./255)\n\ntrain_generator = training_datagen.flow_from_directory(\n\tTRAINING_DIR,\n\ttarget_size=(150,150),\n\tclass_mode='categorical',\n batch_size=126\n)\n\nvalidation_generator = validation_datagen.flow_from_directory(\n\tVALIDATION_DIR,\n\ttarget_size=(150,150),\n\tclass_mode='categorical',\n batch_size=126\n)\n\nmodel = tf.keras.models.Sequential([\n \n tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(150, 150, 3)),\n tf.keras.layers.MaxPooling2D(2, 2),\n tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n tf.keras.layers.MaxPooling2D(2,2),\n tf.keras.layers.Conv2D(128, (3,3), activation='relu'),\n tf.keras.layers.MaxPooling2D(2,2),\n tf.keras.layers.Conv2D(128, (3,3), activation='relu'),\n tf.keras.layers.MaxPooling2D(2,2),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dropout(0.5),\n tf.keras.layers.Dense(512, activation='relu'),\n tf.keras.layers.Dense(3, activation='softmax')\n])\n\n\nmodel.summary()\n\nmodel.compile(loss = 'categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])\n\nhistory = model.fit(train_generator, 
epochs=25, \n steps_per_epoch=20, validation_data = validation_generator, \n verbose = 1, validation_steps=3)", "Found 2520 images belonging to 3 classes.\nFound 372 images belonging to 3 classes.\nModel: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 148, 148, 64) 1792 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 74, 74, 64) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 72, 72, 64) 36928 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 36, 36, 64) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 34, 34, 128) 73856 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 17, 17, 128) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 15, 15, 128) 147584 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 7, 7, 128) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 6272) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 6272) 0 \n_________________________________________________________________\ndense (Dense) (None, 512) 3211776 \n_________________________________________________________________\ndense_1 (Dense) (None, 3) 1539 \n=================================================================\nTotal params: 3,473,475\nTrainable params: 3,473,475\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/25\n20/20 [==============================] - 25s 1s/step - loss: 1.2249 - accuracy: 0.3524 - val_loss: 1.0953 - val_accuracy: 0.3333\nEpoch 2/25\n20/20 [==============================] - 24s 1s/step - loss: 1.1039 - accuracy: 0.4139 - val_loss: 0.9527 - val_accuracy: 0.6048\nEpoch 3/25\n20/20 [==============================] - 24s 1s/step - loss: 0.9488 - accuracy: 0.5270 - val_loss: 0.4916 - val_accuracy: 0.6989\nEpoch 4/25\n20/20 [==============================] - 24s 1s/step - loss: 0.9191 - accuracy: 0.6270 - val_loss: 0.4519 - val_accuracy: 0.6828\nEpoch 5/25\n20/20 [==============================] - 23s 1s/step - loss: 0.6688 - accuracy: 0.7016 - val_loss: 0.4120 - val_accuracy: 0.9892\nEpoch 6/25\n20/20 [==============================] - 23s 1s/step - loss: 0.5600 - accuracy: 0.7528 - val_loss: 0.6469 - val_accuracy: 0.6210\nEpoch 7/25\n20/20 [==============================] - 25s 1s/step - loss: 0.4623 - accuracy: 0.7917 - val_loss: 0.1358 - val_accuracy: 0.9892\nEpoch 8/25\n20/20 [==============================] - 24s 1s/step - loss: 0.3779 - accuracy: 0.8413 - val_loss: 0.3034 - val_accuracy: 0.9570\nEpoch 9/25\n20/20 [==============================] - 24s 1s/step - loss: 0.3606 - accuracy: 0.8548 - val_loss: 0.1023 - val_accuracy: 0.9892\nEpoch 10/25\n20/20 [==============================] - 25s 1s/step - loss: 0.3112 - accuracy: 0.8885 - val_loss: 0.0824 - val_accuracy: 0.9704\nEpoch 11/25\n20/20 [==============================] - 23s 1s/step - loss: 0.2367 - accuracy: 0.9123 - val_loss: 0.0402 - val_accuracy: 1.0000\nEpoch 12/25\n20/20 [==============================] - 24s 1s/step - loss: 0.2761 - accuracy: 0.8881 - val_loss: 0.0964 - 
val_accuracy: 0.9677\nEpoch 13/25\n20/20 [==============================] - 24s 1s/step - loss: 0.1760 - accuracy: 0.9425 - val_loss: 0.0490 - val_accuracy: 0.9839\nEpoch 14/25\n20/20 [==============================] - 24s 1s/step - loss: 0.1682 - accuracy: 0.9413 - val_loss: 0.0308 - val_accuracy: 1.0000\nEpoch 15/25\n20/20 [==============================] - 24s 1s/step - loss: 0.5112 - accuracy: 0.9290 - val_loss: 0.1094 - val_accuracy: 0.9677\nEpoch 16/25\n20/20 [==============================] - 24s 1s/step - loss: 0.0964 - accuracy: 0.9671 - val_loss: 0.0342 - val_accuracy: 0.9839\nEpoch 17/25\n20/20 [==============================] - 24s 1s/step - loss: 0.1105 - accuracy: 0.9619 - val_loss: 0.0258 - val_accuracy: 0.9839\nEpoch 18/25\n20/20 [==============================] - 24s 1s/step - loss: 0.1135 - accuracy: 0.9611 - val_loss: 0.0655 - val_accuracy: 0.9704\nEpoch 19/25\n20/20 [==============================] - 24s 1s/step - loss: 0.1538 - accuracy: 0.9468 - val_loss: 0.0531 - val_accuracy: 0.9812\nEpoch 20/25\n20/20 [==============================] - 24s 1s/step - loss: 0.1299 - accuracy: 0.9552 - val_loss: 0.1140 - val_accuracy: 0.9704\nEpoch 21/25\n20/20 [==============================] - 24s 1s/step - loss: 0.1222 - accuracy: 0.9579 - val_loss: 0.0575 - val_accuracy: 0.9731\nEpoch 22/25\n20/20 [==============================] - 25s 1s/step - loss: 0.0998 - accuracy: 0.9659 - val_loss: 0.2046 - val_accuracy: 0.9113\nEpoch 23/25\n20/20 [==============================] - 24s 1s/step - loss: 0.0806 - accuracy: 0.9706 - val_loss: 0.0560 - val_accuracy: 0.9785\nEpoch 24/25\n20/20 [==============================] - 25s 1s/step - loss: 0.0994 - accuracy: 0.9635 - val_loss: 0.0117 - val_accuracy: 1.0000\nEpoch 25/25\n20/20 [==============================] - 24s 1s/step - loss: 0.0878 - accuracy: 0.9698 - val_loss: 0.0353 - val_accuracy: 0.9785\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
e7277edff2cbdcef7d47963588691bfe3e89a3bc
14,943
ipynb
Jupyter Notebook
PR_Final_Project.ipynb
Sumanthchowdary97/Email-Spam-Classification
08b06d830b3710e1fbf865e82c0a6692865a0051
[ "Apache-2.0" ]
null
null
null
PR_Final_Project.ipynb
Sumanthchowdary97/Email-Spam-Classification
08b06d830b3710e1fbf865e82c0a6692865a0051
[ "Apache-2.0" ]
null
null
null
PR_Final_Project.ipynb
Sumanthchowdary97/Email-Spam-Classification
08b06d830b3710e1fbf865e82c0a6692865a0051
[ "Apache-2.0" ]
null
null
null
36.896296
194
0.505989
[ [ [ "Installing necessary libraries", "_____no_output_____" ] ], [ [ "!pip install pandas\n!pip install numpy\n!pip install chardet\n!pip install sklearn\n!pip install gensim", "Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (1.1.5)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas) (2.8.1)\nRequirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from pandas) (1.19.5)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas) (2018.9)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (1.19.5)\nRequirement already satisfied: chardet in /usr/local/lib/python3.7/dist-packages (3.0.4)\nRequirement already satisfied: sklearn in /usr/local/lib/python3.7/dist-packages (0.0)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sklearn) (0.22.2.post1)\nRequirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.19.5)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.0.1)\nRequirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.4.1)\nRequirement already satisfied: gensim in /usr/local/lib/python3.7/dist-packages (3.6.0)\nRequirement already satisfied: smart-open>=1.2.1 in /usr/local/lib/python3.7/dist-packages (from gensim) (5.0.0)\nRequirement already satisfied: six>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from gensim) (1.15.0)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.7/dist-packages (from gensim) (1.19.5)\nRequirement already satisfied: scipy>=0.18.1 in /usr/local/lib/python3.7/dist-packages (from gensim) (1.4.1)\n" ] ], [ [ "Making necessary imports\n\n", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport chardet as cd\nfrom gensim import parsing\nfrom sklearn import naive_bayes, svm\nfrom sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score, confusion_matrix\nfrom sklearn.feature_extraction.text import CountVectorizer", "_____no_output_____" ] ], [ [ "Reading from CSV file and Preprocessing the data", "_____no_output_____" ] ], [ [ "with open('data.csv', 'rb') as file:\n data = cd.detect(file.read())\n\ndataframe = pd.read_csv('data.csv', encoding = data['encoding'])\ndataframe = dataframe.drop([\"Unnamed: 2\", \"Unnamed: 3\", \"Unnamed: 4\"], axis=1)\ndataframe['v1'] = dataframe.v1.map({'ham':0, 'spam':1})\n\ndataset = dataframe.values\n\nprint(\"The first ten rows are: \\n\")\nprint(dataframe.head(10))\n\nnp.random.shuffle(dataset)\n\nX = dataset[:,1]\nY = dataset[:,0]\nY = Y.astype('int')\n\nfor i in range(X.shape[0]):\n X[i] = parsing.stem_text(X[i].lower())\n\nvectorizer = CountVectorizer()\nX_processed = vectorizer.fit_transform(X)", "The first ten rows are: \n\n v1 v2\n0 0 Go until jurong point, crazy.. Available only ...\n1 0 Ok lar... Joking wif u oni...\n2 1 Free entry in 2 a wkly comp to win FA Cup fina...\n3 0 U dun say so early hor... U c already then say...\n4 0 Nah I don't think he goes to usf, he lives aro...\n5 1 FreeMsg Hey there darling it's been 3 week's n...\n6 0 Even my brother is not like to speak with me. 
...\n7 0 As per your request 'Melle Melle (Oru Minnamin...\n8 1 WINNER!! As a valued network customer you have...\n9 1 Had your mobile 11 months or more? U R entitle...\n" ] ], [ [ "Splitting the dataset into Training data and Test data", "_____no_output_____" ] ], [ [ "X_train = X_processed[0:4500,:]\nY_train = Y[0:4500]\nX_test = X_processed[4500:,:]\nY_test = Y[4500:]\n\nprint(\"The shape of training data is :\", X_train.shape)\nprint(\"The shape of test data is :\", X_test.shape)", "The shape of training data is : (4500, 8265)\nThe shape of test data is : (1072, 8265)\n" ] ], [ [ "Function to print various metrics of different models", "_____no_output_____" ] ], [ [ "def scores(Y_actual, Y_predicted, model):\n if model == 1:\n print(\"\\nThe metrics of Naive Bayes classifier are :\")\n if model == 2:\n print(\"\\nThe metrics of Support Vector Classifier are:\")\n print(\"\\nAccuracy score is :\" , accuracy_score(Y_actual, Y_predicted))\n print(\"\\nPrecision score is :\" , precision_score(Y_actual, Y_predicted))\n print(\"\\nRecall score is :\" , recall_score(Y_actual, Y_predicted))\n print(\"\\nROC AUC score is :\" , roc_auc_score(Y_actual,Y_predicted))\n print(\"\\nConfusion Matrix : \\n\")\n print(confusion_matrix(Y_actual, Y_predicted))\n print(\"\\n\")", "_____no_output_____" ] ], [ [ "Fitting the Multinomial Naive Bayes model and SVM", "_____no_output_____" ] ], [ [ "bayes_classifier = naive_bayes.MultinomialNB()\nsvm_classifier = svm.SVC(C = 1000)\n\nbayes_classifier.fit(X_train, Y_train)\nsvm_classifier.fit(X_train, Y_train)\n\nY_predicted_bayes = bayes_classifier.predict(X_test)\nY_predicted_svm = svm_classifier.predict(X_test)\n\nscores(Y_test, Y_predicted_bayes, 1)\nscores(Y_test, Y_predicted_svm, 2)", "\nThe metrics of Naive Bayes classifier are :\n\nAccuracy score is : 0.9841417910447762\n\nPrecision score is : 0.9411764705882353\n\nRecall score is : 0.9343065693430657\n\nROC AUC score is : 0.9628752098052227\n\nConfusion Matrix : \n\n[[927 8]\n [ 9 128]]\n\n\n\nThe metrics of Support Vector Classifier are:\n\nAccuracy score is : 0.9850746268656716\n\nPrecision score is : 1.0\n\nRecall score is : 0.8832116788321168\n\nROC AUC score is : 0.9416058394160585\n\nConfusion Matrix : \n\n[[935 0]\n [ 16 121]]\n\n\n" ] ], [ [ "Function to check new messages and predict as Ham/Spam", "_____no_output_____" ] ], [ [ "def test(message):\n message = message.lower()\n message = parsing.stem_text(message)\n message_vector = vectorizer.transform([message])\n\n if bayes_classifier.predict(message_vector) == 1:\n bayes_result = \"Spam\" \n else:\n bayes_result = \"Ham\"\n \n if svm_classifier.predict(message_vector) == 1:\n svm_result = \"Spam\"\n else:\n svm_result = \"Ham\"\n\n print(\"According to Bayes Classifier the message is :\", bayes_result)\n print(\"According to SVM Classifier the message is :\", svm_result)\n\n\nmessage1 = \"Avail exciting discounts. Click to know more.\"\nprint(\"The message is : \", message1)\ntest(message1)\n\nmessage2 = \"Claim your free rewards. Get upto $100 off.\"\nprint(\"\\nThe message is : \", message2)\ntest(message2)\n\nmessage3 = \"Have you seen the latest \\Red Barbie\\? It comes with all of K ****B****!\"\nprint(\"\\nThe message is : \", message3)\ntest(message3)\n\nmessage4 = \"Find your special one on www.areyouunique.co.uk\"\nprint(\"\\nThe message is : \", message4)\ntest(message4)\n\nmessage5 = \"Congrats! Special cinema pass for 10 is yours. call 09061209465 now! C Suprman V, Avngrs III, Wondrwmn 2, etc all 4 FREE! bx420-ip4-5we. 150pm. 
Dont miss out! \"\nprint(\"\\nThe message is : \", message5)\ntest(message5)\n\nmessage6 = \"Safely straiten teeth 100% from home!. Exciting rewards as well. Call 1234567890 to knw mre. $$$$$ click $$$$\"\nprint(\"\\nThe message is : \", message6)\ntest(message6)", "The message is : Avail exciting discounts. Click to know more.\nAccording to Bayes Classifier the message is : Ham\nAccording to SVM Classifier the message is : Ham\n\nThe message is : Claim your free rewards. Get upto $100 off.\nAccording to Bayes Classifier the message is : Spam\nAccording to SVM Classifier the message is : Spam\n\nThe message is : Have you seen the latest \\Red Barbie\\? It comes with all of K ****B****!\nAccording to Bayes Classifier the message is : Ham\nAccording to SVM Classifier the message is : Ham\n\nThe message is : Find your special one on www.areyouunique.co.uk\nAccording to Bayes Classifier the message is : Spam\nAccording to SVM Classifier the message is : Spam\n\nThe message is : Congrats! Special cinema pass for 10 is yours. call 09061209465 now! C Suprman V, Avngrs III, Wondrwmn 2, etc all 4 FREE! bx420-ip4-5we. 150pm. Dont miss out! \nAccording to Bayes Classifier the message is : Spam\nAccording to SVM Classifier the message is : Spam\n\nThe message is : Safely straiten teeth 100% from home!. Exciting rewards as well. Call 1234567890 to knw mre. $$$$$ click $$$$\nAccording to Bayes Classifier the message is : Spam\nAccording to SVM Classifier the message is : Ham\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e727a6b5737e5d0bc01180594aaaf6c647bfc4a0
1,901
ipynb
Jupyter Notebook
code/massSpring.ipynb
hrayatnia/pycon
cba9b8236db228a74bef32cae55a3a190c77f29f
[ "BSD-2-Clause" ]
1
2016-05-20T15:55:46.000Z
2016-05-20T15:55:46.000Z
code/massSpring.ipynb
hrayatnia/pycon
cba9b8236db228a74bef32cae55a3a190c77f29f
[ "BSD-2-Clause" ]
null
null
null
code/massSpring.ipynb
hrayatnia/pycon
cba9b8236db228a74bef32cae55a3a190c77f29f
[ "BSD-2-Clause" ]
null
null
null
21.602273
72
0.502367
[ [ [ "%pylab\nfrom scipy.integrate import odeint\n\ndef MassSpring(state,t):\n # unpack the state vector\n x = state[0]\n xd = state[1]\n # these are our constants\n k = 2.5 # Newtons per metre\n m = 1.5 # Kilograms\n g = 9.8 # metres per second\n # compute acceleration xdd\n xdd = ((-k*x)/m) + g\n # return the two state derivatives\n return [xd, xdd]\n\nstate0 = [0.0, 0.0]\nt = arange(0.0, 10.0, 0.1)\nstate = odeint(MassSpring, state0, t)\nplot(t, state)\nxlabel('TIME (sec)')\nylabel('STATES')\ntitle('Mass-Spring System')\nlegend(('$x$ (m)', '$\\dot{x}$ (m/sec)'))", "Using matplotlib backend: MacOSX\nPopulating the interactive namespace from numpy and matplotlib\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
e727b06e8a5fcff6f1d40b1f9112e2d3b3134f83
327,074
ipynb
Jupyter Notebook
notebooks/03_Data_understanding_v1.0.ipynb
Ankith06/Enterprise-Data-Science
686c867c9d26939083275c77533c4616dcfd43a9
[ "FTL" ]
null
null
null
notebooks/03_Data_understanding_v1.0.ipynb
Ankith06/Enterprise-Data-Science
686c867c9d26939083275c77533c4616dcfd43a9
[ "FTL" ]
null
null
null
notebooks/03_Data_understanding_v1.0.ipynb
Ankith06/Enterprise-Data-Science
686c867c9d26939083275c77533c4616dcfd43a9
[ "FTL" ]
null
null
null
45.540796
240
0.489495
[ [ [ "import subprocess\nimport os\n\nimport pandas as pd\n\nimport requests\nfrom bs4 import BeautifulSoup\n\nimport json", "_____no_output_____" ] ], [ [ "# Standard process in data science\n!['CRISP_DM'](../reports/figures/CRISP_DM.png)\n\nData science project starts with the understanding of business problem and by defining objectives of project. The above process diagram is known as **CR**oss **I**ndustry **S**tandard **P**rocess for **D**ata-**M**ining (CRISP_DM).\n\n# Data Understanding\n\nData can be uploaded in python environment by three ways by using different recources from internet.\n\n1. **Data scraping** : get data from webpage e.g, Robert-Koch-Institute (RKI) data https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Fallzahlen.html\n2. **GitHub** : get data in CSV file format from genuine resource such as GitHub e.g, John Hopkins data https://github.com/CSSEGISandData/COVID-19.git\n3. **API** : use of API service for retriving data \nhttps://npgeo-corona-npgeo-de.hub.arcgis.com\n\n## 1. GitHub CSV data : John Hopkins \n\nClone and pull data to `..\\data\\raw` from github into project folder https://github.com/CSSEGISandData/COVID-19.git\n", "_____no_output_____" ] ], [ [ "# automization of git pull for retriving new data at each time when notebook will be executed\ngit_pull = subprocess.Popen( \"git pull\" , \n cwd = os.path.dirname( '../data/raw/COVID-19/' ), \n shell = True, \n stdout = subprocess.PIPE, \n stderr = subprocess.PIPE)\n(out, error) = git_pull.communicate()\n\n\nprint(\"Error : \" + str(error)) \nprint(\"out : \" + str(out))", "Error : b''\nout : b'Already up to date.\\n'\n" ], [ "# read csv file for time-series data\nfilepath= '../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'\npd_raw= pd.read_csv(filepath)\n\npd_raw.head()", "_____no_output_____" ] ], [ [ "## 2. Webscrapping : RKI ", "_____no_output_____" ] ], [ [ "page = requests.get(\"https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Fallzahlen.html\") # get webpage\nsoup = BeautifulSoup(page.content, 'html.parser') # get page content ", "_____no_output_____" ], [ "# scrap table data from page content into a list \nhtml_table= soup.find('table') # find the table in the page content\nall_rows= html_table.find_all('tr')\n\nfinal_data_list= []\nfor pos,rows in enumerate(all_rows):\n col_list= [each_col.get_text(strip= True) for each_col in rows.find_all('td')] #td for row element\n final_data_list.append(col_list)", "_____no_output_____" ], [ "# convert list into DataFrame with proper labling\npd_daily=pd.DataFrame(final_data_list).dropna().rename(columns={0:'state', \n 1:'cases',\n 2:'changes',\n 3:'cases_per_100k',\n 4:'fatal',\n 5:'comment'})\n\npd_daily.head()", "_____no_output_____" ] ], [ [ "## 3. 
API call", "_____no_output_____" ] ], [ [ "## data request for Germany\ndata=requests.get('https://services7.arcgis.com/mOBPykOjAyBO2ZKk/arcgis/rest/services/Coronaf%C3%A4lle_in_den_Bundesl%C3%A4ndern/FeatureServer/0/query?where=1%3D1&outFields=*&outSR=4326&f=json')", "_____no_output_____" ], [ "json_object=json.loads(data.content) \ntype(json_object)", "_____no_output_____" ], [ "json_object.keys()", "_____no_output_____" ], [ "json_object['features'][0]", "_____no_output_____" ], [ "data_list=[]\nfor pos, each_dict in enumerate(json_object['features'][:]):\n data_list.append(each_dict['attributes'])\n\npd_data_list=pd.DataFrame(data_list)\npd_data_list.head()", "_____no_output_____" ], [ "# save data into csv file\ndirectory = '../data/raw/NPGEO'\nif not os.path.exists(directory):\n os.mkdir(directory)\n\npd_data_list.to_csv('../data/raw/NPGEO/GER_state_data.csv',sep=';')", "_____no_output_____" ] ], [ [ "### API access via REST service, e.g. USA data\n\nREST API from opensource www.smartable.ai", "_____no_output_____" ] ], [ [ "# US for full list\nheaders = {\n 'Cache-Control': 'no-cache',\n 'Subscription-Key': '418eef22cf6748ae9a941547f6b4cb6b',\n}\n\nresponse = requests.get('https://api.smartable.ai/coronavirus/stats/US', headers=headers)\nprint(response)", "<Response [200]>\n" ], [ "# import content in string format\nUS_dict=json.loads(response.content) \nprint(json.dumps(US_dict,indent=2))", "{\n \"location\": {\n \"long\": -95.712891,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": null,\n \"county\": null,\n \"isoCode\": \"US\",\n \"lat\": 37.09024\n },\n \"updatedDateTime\": \"2020-08-22T10:35:04.5019288Z\",\n \"stats\": {\n \"totalConfirmedCases\": 5728621,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 186976,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 2796278,\n \"newlyRecoveredCases\": 0,\n \"history\": [\n {\n \"date\": \"2020-01-22T00:00:00\",\n \"confirmed\": 1,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-23T00:00:00\",\n \"confirmed\": 1,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-24T00:00:00\",\n \"confirmed\": 2,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-25T00:00:00\",\n \"confirmed\": 2,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-26T00:00:00\",\n \"confirmed\": 5,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-27T00:00:00\",\n \"confirmed\": 5,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-28T00:00:00\",\n \"confirmed\": 5,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-29T00:00:00\",\n \"confirmed\": 5,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-30T00:00:00\",\n \"confirmed\": 5,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-01-31T00:00:00\",\n \"confirmed\": 7,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-01T00:00:00\",\n \"confirmed\": 8,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-02T00:00:00\",\n \"confirmed\": 8,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-03T00:00:00\",\n \"confirmed\": 11,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-04T00:00:00\",\n \"confirmed\": 11,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-05T00:00:00\",\n \"confirmed\": 11,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-06T00:00:00\",\n \"confirmed\": 11,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-07T00:00:00\",\n 
\"confirmed\": 11,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-08T00:00:00\",\n \"confirmed\": 11,\n \"deaths\": 0,\n \"recovered\": 0\n },\n {\n \"date\": \"2020-02-09T00:00:00\",\n \"confirmed\": 11,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-10T00:00:00\",\n \"confirmed\": 11,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-11T00:00:00\",\n \"confirmed\": 12,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-12T00:00:00\",\n \"confirmed\": 12,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-13T00:00:00\",\n \"confirmed\": 13,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-14T00:00:00\",\n \"confirmed\": 13,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-15T00:00:00\",\n \"confirmed\": 13,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-16T00:00:00\",\n \"confirmed\": 13,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-17T00:00:00\",\n \"confirmed\": 13,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-18T00:00:00\",\n \"confirmed\": 13,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-19T00:00:00\",\n \"confirmed\": 13,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-20T00:00:00\",\n \"confirmed\": 13,\n \"deaths\": 0,\n \"recovered\": 3\n },\n {\n \"date\": \"2020-02-21T00:00:00\",\n \"confirmed\": 15,\n \"deaths\": 0,\n \"recovered\": 5\n },\n {\n \"date\": \"2020-02-22T00:00:00\",\n \"confirmed\": 15,\n \"deaths\": 0,\n \"recovered\": 5\n },\n {\n \"date\": \"2020-02-23T00:00:00\",\n \"confirmed\": 15,\n \"deaths\": 0,\n \"recovered\": 5\n },\n {\n \"date\": \"2020-02-24T00:00:00\",\n \"confirmed\": 51,\n \"deaths\": 0,\n \"recovered\": 5\n },\n {\n \"date\": \"2020-02-25T00:00:00\",\n \"confirmed\": 51,\n \"deaths\": 0,\n \"recovered\": 6\n },\n {\n \"date\": \"2020-02-26T00:00:00\",\n \"confirmed\": 57,\n \"deaths\": 0,\n \"recovered\": 6\n },\n {\n \"date\": \"2020-02-27T00:00:00\",\n \"confirmed\": 58,\n \"deaths\": 0,\n \"recovered\": 6\n },\n {\n \"date\": \"2020-02-28T00:00:00\",\n \"confirmed\": 60,\n \"deaths\": 0,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-02-29T00:00:00\",\n \"confirmed\": 68,\n \"deaths\": 1,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-01T00:00:00\",\n \"confirmed\": 74,\n \"deaths\": 1,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-02T00:00:00\",\n \"confirmed\": 98,\n \"deaths\": 6,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-03T00:00:00\",\n \"confirmed\": 118,\n \"deaths\": 7,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-04T00:00:00\",\n \"confirmed\": 149,\n \"deaths\": 11,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-05T00:00:00\",\n \"confirmed\": 217,\n \"deaths\": 12,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-06T00:00:00\",\n \"confirmed\": 262,\n \"deaths\": 14,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-07T00:00:00\",\n \"confirmed\": 402,\n \"deaths\": 17,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-08T00:00:00\",\n \"confirmed\": 518,\n \"deaths\": 21,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-09T00:00:00\",\n \"confirmed\": 583,\n \"deaths\": 22,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-10T00:00:00\",\n \"confirmed\": 768,\n \"deaths\": 28,\n \"recovered\": 7\n },\n {\n \"date\": \"2020-03-11T00:00:00\",\n \"confirmed\": 1165,\n \"deaths\": 32,\n \"recovered\": 11\n },\n {\n \"date\": \"2020-03-12T00:00:00\",\n \"confirmed\": 1758,\n 
\"deaths\": 41,\n \"recovered\": 12\n },\n {\n \"date\": \"2020-03-13T00:00:00\",\n \"confirmed\": 2354,\n \"deaths\": 50,\n \"recovered\": 13\n },\n {\n \"date\": \"2020-03-14T00:00:00\",\n \"confirmed\": 3068,\n \"deaths\": 60,\n \"recovered\": 16\n },\n {\n \"date\": \"2020-03-15T00:00:00\",\n \"confirmed\": 3773,\n \"deaths\": 69,\n \"recovered\": 17\n },\n {\n \"date\": \"2020-03-16T00:00:00\",\n \"confirmed\": 4760,\n \"deaths\": 92,\n \"recovered\": 17\n },\n {\n \"date\": \"2020-03-17T00:00:00\",\n \"confirmed\": 6579,\n \"deaths\": 114,\n \"recovered\": 18\n },\n {\n \"date\": \"2020-03-18T00:00:00\",\n \"confirmed\": 9385,\n \"deaths\": 147,\n \"recovered\": 108\n },\n {\n \"date\": \"2020-03-19T00:00:00\",\n \"confirmed\": 14298,\n \"deaths\": 208,\n \"recovered\": 108\n },\n {\n \"date\": \"2020-03-20T00:00:00\",\n \"confirmed\": 19853,\n \"deaths\": 270,\n \"recovered\": 147\n },\n {\n \"date\": \"2020-03-21T00:00:00\",\n \"confirmed\": 26880,\n \"deaths\": 345,\n \"recovered\": 147\n },\n {\n \"date\": \"2020-03-22T00:00:00\",\n \"confirmed\": 35171,\n \"deaths\": 470,\n \"recovered\": 178\n },\n {\n \"date\": \"2020-03-23T00:00:00\",\n \"confirmed\": 46343,\n \"deaths\": 583,\n \"recovered\": 179\n },\n {\n \"date\": \"2020-03-24T00:00:00\",\n \"confirmed\": 55095,\n \"deaths\": 798,\n \"recovered\": 181\n },\n {\n \"date\": \"2020-03-25T00:00:00\",\n \"confirmed\": 69006,\n \"deaths\": 1050,\n \"recovered\": 303\n },\n {\n \"date\": \"2020-03-26T00:00:00\",\n \"confirmed\": 85946,\n \"deaths\": 1297,\n \"recovered\": 303\n },\n {\n \"date\": \"2020-03-27T00:00:00\",\n \"confirmed\": 104517,\n \"deaths\": 1712,\n \"recovered\": 302\n },\n {\n \"date\": \"2020-03-28T00:00:00\",\n \"confirmed\": 124284,\n \"deaths\": 2186,\n \"recovered\": 302\n },\n {\n \"date\": \"2020-03-29T00:00:00\",\n \"confirmed\": 156447,\n \"deaths\": 3026,\n \"recovered\": 809\n },\n {\n \"date\": \"2020-03-30T00:00:00\",\n \"confirmed\": 175671,\n \"deaths\": 3616,\n \"recovered\": 1182\n },\n {\n \"date\": \"2020-03-31T00:00:00\",\n \"confirmed\": 198293,\n \"deaths\": 4446,\n \"recovered\": 1182\n },\n {\n \"date\": \"2020-04-01T00:00:00\",\n \"confirmed\": 231150,\n \"deaths\": 5655,\n \"recovered\": 9163\n },\n {\n \"date\": \"2020-04-02T00:00:00\",\n \"confirmed\": 260012,\n \"deaths\": 6565,\n \"recovered\": 9639\n },\n {\n \"date\": \"2020-04-03T00:00:00\",\n \"confirmed\": 291984,\n \"deaths\": 7658,\n \"recovered\": 10231\n },\n {\n \"date\": \"2020-04-04T00:00:00\",\n \"confirmed\": 326014,\n \"deaths\": 8990,\n \"recovered\": 15461\n },\n {\n \"date\": \"2020-04-05T00:00:00\",\n \"confirmed\": 351829,\n \"deaths\": 10144,\n \"recovered\": 17991\n },\n {\n \"date\": \"2020-04-06T00:00:00\",\n \"confirmed\": 382136,\n \"deaths\": 11480,\n \"recovered\": 20269\n },\n {\n \"date\": \"2020-04-07T00:00:00\",\n \"confirmed\": 413234,\n \"deaths\": 13379,\n \"recovered\": 22699\n },\n {\n \"date\": \"2020-04-08T00:00:00\",\n \"confirmed\": 445495,\n \"deaths\": 15279,\n \"recovered\": 24247\n },\n {\n \"date\": \"2020-04-09T00:00:00\",\n \"confirmed\": 479603,\n \"deaths\": 17142,\n \"recovered\": 26270\n },\n {\n \"date\": \"2020-04-10T00:00:00\",\n \"confirmed\": 515526,\n \"deaths\": 19256,\n \"recovered\": 28820\n },\n {\n \"date\": \"2020-04-11T00:00:00\",\n \"confirmed\": 543804,\n \"deaths\": 21076,\n \"recovered\": 31305\n },\n {\n \"date\": \"2020-04-12T00:00:00\",\n \"confirmed\": 571225,\n \"deaths\": 22576,\n \"recovered\": 40755\n },\n {\n \"date\": \"2020-04-13T00:00:00\",\n 
\"confirmed\": 596278,\n \"deaths\": 24102,\n \"recovered\": 42597\n },\n {\n \"date\": \"2020-04-14T00:00:00\",\n \"confirmed\": 625145,\n \"deaths\": 26533,\n \"recovered\": 48186\n },\n {\n \"date\": \"2020-04-15T00:00:00\",\n \"confirmed\": 654363,\n \"deaths\": 28946,\n \"recovered\": 50605\n },\n {\n \"date\": \"2020-04-16T00:00:00\",\n \"confirmed\": 686592,\n \"deaths\": 31201,\n \"recovered\": 53452\n },\n {\n \"date\": \"2020-04-17T00:00:00\",\n \"confirmed\": 722071,\n \"deaths\": 37588,\n \"recovered\": 56444\n },\n {\n \"date\": \"2020-04-18T00:00:00\",\n \"confirmed\": 750231,\n \"deaths\": 39598,\n \"recovered\": 62973\n },\n {\n \"date\": \"2020-04-19T00:00:00\",\n \"confirmed\": 774930,\n \"deaths\": 41168,\n \"recovered\": 68179\n },\n {\n \"date\": \"2020-04-20T00:00:00\",\n \"confirmed\": 802816,\n \"deaths\": 42923,\n \"recovered\": 70051\n },\n {\n \"date\": \"2020-04-21T00:00:00\",\n \"confirmed\": 829174,\n \"deaths\": 45485,\n \"recovered\": 73056\n },\n {\n \"date\": \"2020-04-22T00:00:00\",\n \"confirmed\": 857325,\n \"deaths\": 48061,\n \"recovered\": 75521\n },\n {\n \"date\": \"2020-04-23T00:00:00\",\n \"confirmed\": 891798,\n \"deaths\": 50464,\n \"recovered\": 79419\n },\n {\n \"date\": \"2020-04-24T00:00:00\",\n \"confirmed\": 922573,\n \"deaths\": 52621,\n \"recovered\": 95892\n },\n {\n \"date\": \"2020-04-25T00:00:00\",\n \"confirmed\": 960846,\n \"deaths\": 54544,\n \"recovered\": 120798\n },\n {\n \"date\": \"2020-04-26T00:00:00\",\n \"confirmed\": 987326,\n \"deaths\": 55606,\n \"recovered\": 121414\n },\n {\n \"date\": \"2020-04-27T00:00:00\",\n \"confirmed\": 1010055,\n \"deaths\": 57045,\n \"recovered\": 141126\n },\n {\n \"date\": \"2020-04-28T00:00:00\",\n \"confirmed\": 1034876,\n \"deaths\": 59280,\n \"recovered\": 144114\n },\n {\n \"date\": \"2020-04-29T00:00:00\",\n \"confirmed\": 1062196,\n \"deaths\": 61757,\n \"recovered\": 149133\n },\n {\n \"date\": \"2020-04-30T00:00:00\",\n \"confirmed\": 1091856,\n \"deaths\": 63811,\n \"recovered\": 157459\n },\n {\n \"date\": \"2020-05-01T00:00:00\",\n \"confirmed\": 1122304,\n \"deaths\": 65409,\n \"recovered\": 165445\n },\n {\n \"date\": \"2020-05-02T00:00:00\",\n \"confirmed\": 1155405,\n \"deaths\": 67320,\n \"recovered\": 173518\n },\n {\n \"date\": \"2020-05-03T00:00:00\",\n \"confirmed\": 1182702,\n \"deaths\": 68655,\n \"recovered\": 178402\n },\n {\n \"date\": \"2020-05-04T00:00:00\",\n \"confirmed\": 1206964,\n \"deaths\": 69990,\n \"recovered\": 186956\n },\n {\n \"date\": \"2020-05-05T00:00:00\",\n \"confirmed\": 1230584,\n \"deaths\": 71932,\n \"recovered\": 193112\n },\n {\n \"date\": \"2020-05-06T00:00:00\",\n \"confirmed\": 1255681,\n \"deaths\": 74401,\n \"recovered\": 205023\n },\n {\n \"date\": \"2020-05-07T00:00:00\",\n \"confirmed\": 1284638,\n \"deaths\": 76520,\n \"recovered\": 208570\n },\n {\n \"date\": \"2020-05-08T00:00:00\",\n \"confirmed\": 1312925,\n \"deaths\": 78172,\n \"recovered\": 214564\n },\n {\n \"date\": \"2020-05-09T00:00:00\",\n \"confirmed\": 1337554,\n \"deaths\": 79654,\n \"recovered\": 228549\n },\n {\n \"date\": \"2020-05-10T00:00:00\",\n \"confirmed\": 1358181,\n \"deaths\": 80433,\n \"recovered\": 246732\n },\n {\n \"date\": \"2020-05-11T00:00:00\",\n \"confirmed\": 1375717,\n \"deaths\": 81525,\n \"recovered\": 252193\n },\n {\n \"date\": \"2020-05-12T00:00:00\",\n \"confirmed\": 1397741,\n \"deaths\": 83166,\n \"recovered\": 293535\n },\n {\n \"date\": \"2020-05-13T00:00:00\",\n \"confirmed\": 1419500,\n \"deaths\": 85002,\n \"recovered\": 306223\n 
},\n {\n \"date\": \"2020-05-14T00:00:00\",\n \"confirmed\": 1444540,\n \"deaths\": 87074,\n \"recovered\": 314352\n },\n {\n \"date\": \"2020-05-15T00:00:00\",\n \"confirmed\": 1472405,\n \"deaths\": 88782,\n \"recovered\": 314717\n },\n {\n \"date\": \"2020-05-16T00:00:00\",\n \"confirmed\": 1495650,\n \"deaths\": 89966,\n \"recovered\": 325976\n },\n {\n \"date\": \"2020-05-17T00:00:00\",\n \"confirmed\": 1515502,\n \"deaths\": 90797,\n \"recovered\": 333012\n },\n {\n \"date\": \"2020-05-18T00:00:00\",\n \"confirmed\": 1537106,\n \"deaths\": 91732,\n \"recovered\": 342391\n },\n {\n \"date\": \"2020-05-19T00:00:00\",\n \"confirmed\": 1557333,\n \"deaths\": 93255,\n \"recovered\": 347665\n },\n {\n \"date\": \"2020-05-20T00:00:00\",\n \"confirmed\": 1579950,\n \"deaths\": 94621,\n \"recovered\": 356864\n },\n {\n \"date\": \"2020-05-21T00:00:00\",\n \"confirmed\": 1607049,\n \"deaths\": 95963,\n \"recovered\": 367530\n },\n {\n \"date\": \"2020-05-22T00:00:00\",\n \"confirmed\": 1630659,\n \"deaths\": 97391,\n \"recovered\": 386672\n },\n {\n \"date\": \"2020-05-23T00:00:00\",\n \"confirmed\": 1651797,\n \"deaths\": 99282,\n \"recovered\": 430358\n },\n {\n \"date\": \"2020-05-24T00:00:00\",\n \"confirmed\": 1671325,\n \"deaths\": 99874,\n \"recovered\": 435133\n },\n {\n \"date\": \"2020-05-25T00:00:00\",\n \"confirmed\": 1690869,\n \"deaths\": 100427,\n \"recovered\": 447879\n },\n {\n \"date\": \"2020-05-26T00:00:00\",\n \"confirmed\": 1709413,\n \"deaths\": 101132,\n \"recovered\": 462359\n },\n {\n \"date\": \"2020-05-27T00:00:00\",\n \"confirmed\": 1727850,\n \"deaths\": 102545,\n \"recovered\": 472245\n },\n {\n \"date\": \"2020-05-28T00:00:00\",\n \"confirmed\": 1750118,\n \"deaths\": 103784,\n \"recovered\": 480785\n },\n {\n \"date\": \"2020-05-29T00:00:00\",\n \"confirmed\": 1774599,\n \"deaths\": 105053,\n \"recovered\": 501068\n },\n {\n \"date\": \"2020-05-30T00:00:00\",\n \"confirmed\": 1798137,\n \"deaths\": 105884,\n \"recovered\": 516735\n },\n {\n \"date\": \"2020-05-31T00:00:00\",\n \"confirmed\": 1818309,\n \"deaths\": 106587,\n \"recovered\": 581058\n },\n {\n \"date\": \"2020-06-01T00:00:00\",\n \"confirmed\": 1840296,\n \"deaths\": 107377,\n \"recovered\": 596397\n },\n {\n \"date\": \"2020-06-02T00:00:00\",\n \"confirmed\": 1859948,\n \"deaths\": 108364,\n \"recovered\": 626378\n },\n {\n \"date\": \"2020-06-03T00:00:00\",\n \"confirmed\": 1880073,\n \"deaths\": 109360,\n \"recovered\": 668642\n },\n {\n \"date\": \"2020-06-04T00:00:00\",\n \"confirmed\": 1901999,\n \"deaths\": 110432,\n \"recovered\": 691854\n },\n {\n \"date\": \"2020-06-05T00:00:00\",\n \"confirmed\": 1942344,\n \"deaths\": 111532,\n \"recovered\": 717816\n },\n {\n \"date\": \"2020-06-06T00:00:00\",\n \"confirmed\": 1964506,\n \"deaths\": 112237,\n \"recovered\": 730749\n },\n {\n \"date\": \"2020-06-07T00:00:00\",\n \"confirmed\": 1983815,\n \"deaths\": 112647,\n \"recovered\": 739526\n },\n {\n \"date\": \"2020-06-08T00:00:00\",\n \"confirmed\": 2001749,\n \"deaths\": 113268,\n \"recovered\": 751115\n },\n {\n \"date\": \"2020-06-09T00:00:00\",\n \"confirmed\": 2020468,\n \"deaths\": 114136,\n \"recovered\": 766370\n },\n {\n \"date\": \"2020-06-10T00:00:00\",\n \"confirmed\": 2041321,\n \"deaths\": 115150,\n \"recovered\": 785051\n },\n {\n \"date\": \"2020-06-11T00:00:00\",\n \"confirmed\": 2063943,\n \"deaths\": 116037,\n \"recovered\": 792284\n },\n {\n \"date\": \"2020-06-12T00:00:00\",\n \"confirmed\": 2087696,\n \"deaths\": 116563,\n \"recovered\": 817350\n },\n {\n \"date\": 
\"2020-06-13T00:00:00\",\n \"confirmed\": 2115147,\n \"deaths\": 117476,\n \"recovered\": 829041\n },\n {\n \"date\": \"2020-06-14T00:00:00\",\n \"confirmed\": 2134689,\n \"deaths\": 117786,\n \"recovered\": 842671\n },\n {\n \"date\": \"2020-06-15T00:00:00\",\n \"confirmed\": 2153980,\n \"deaths\": 118180,\n \"recovered\": 862334\n },\n {\n \"date\": \"2020-06-16T00:00:00\",\n \"confirmed\": 2179503,\n \"deaths\": 118982,\n \"recovered\": 875322\n },\n {\n \"date\": \"2020-06-17T00:00:00\",\n \"confirmed\": 2204892,\n \"deaths\": 119784,\n \"recovered\": 890053\n },\n {\n \"date\": \"2020-06-18T00:00:00\",\n \"confirmed\": 2233651,\n \"deaths\": 120476,\n \"recovered\": 902154\n },\n {\n \"date\": \"2020-06-19T00:00:00\",\n \"confirmed\": 2266421,\n \"deaths\": 121174,\n \"recovered\": 926602\n },\n {\n \"date\": \"2020-06-20T00:00:00\",\n \"confirmed\": 2298951,\n \"deaths\": 121767,\n \"recovered\": 943308\n },\n {\n \"date\": \"2020-06-21T00:00:00\",\n \"confirmed\": 2324676,\n \"deaths\": 122187,\n \"recovered\": 950679\n },\n {\n \"date\": \"2020-06-22T00:00:00\",\n \"confirmed\": 2355713,\n \"deaths\": 122549,\n \"recovered\": 973046\n },\n {\n \"date\": \"2020-06-23T00:00:00\",\n \"confirmed\": 2391498,\n \"deaths\": 123380,\n \"recovered\": 990169\n },\n {\n \"date\": \"2020-06-24T00:00:00\",\n \"confirmed\": 2425103,\n \"deaths\": 124061,\n \"recovered\": 1009965\n },\n {\n \"date\": \"2020-06-25T00:00:00\",\n \"confirmed\": 2468799,\n \"deaths\": 126653,\n \"recovered\": 1021495\n },\n {\n \"date\": \"2020-06-26T00:00:00\",\n \"confirmed\": 2512791,\n \"deaths\": 127324,\n \"recovered\": 1037234\n },\n {\n \"date\": \"2020-06-27T00:00:00\",\n \"confirmed\": 2555991,\n \"deaths\": 127868,\n \"recovered\": 1049520\n },\n {\n \"date\": \"2020-06-28T00:00:00\",\n \"confirmed\": 2598087,\n \"deaths\": 128233,\n \"recovered\": 1061476\n },\n {\n \"date\": \"2020-06-29T00:00:00\",\n \"confirmed\": 2640787,\n \"deaths\": 128590,\n \"recovered\": 1084812\n },\n {\n \"date\": \"2020-06-30T00:00:00\",\n \"confirmed\": 2685931,\n \"deaths\": 129899,\n \"recovered\": 1110744\n },\n {\n \"date\": \"2020-07-01T00:00:00\",\n \"confirmed\": 2735854,\n \"deaths\": 130536,\n \"recovered\": 1131444\n },\n {\n \"date\": \"2020-07-02T00:00:00\",\n \"confirmed\": 2792006,\n \"deaths\": 131166,\n \"recovered\": 1157471\n },\n {\n \"date\": \"2020-07-03T00:00:00\",\n \"confirmed\": 2844579,\n \"deaths\": 131892,\n \"recovered\": 1201542\n },\n {\n \"date\": \"2020-07-04T00:00:00\",\n \"confirmed\": 2889815,\n \"deaths\": 132130,\n \"recovered\": 1226135\n },\n {\n \"date\": \"2020-07-05T00:00:00\",\n \"confirmed\": 2943264,\n \"deaths\": 132397,\n \"recovered\": 1249866\n },\n {\n \"date\": \"2020-07-06T00:00:00\",\n \"confirmed\": 2988818,\n \"deaths\": 132742,\n \"recovered\": 1289732\n },\n {\n \"date\": \"2020-07-07T00:00:00\",\n \"confirmed\": 3040467,\n \"deaths\": 133368,\n \"recovered\": 1318933\n },\n {\n \"date\": \"2020-07-08T00:00:00\",\n \"confirmed\": 3108494,\n \"deaths\": 134732,\n \"recovered\": 1356209\n },\n {\n \"date\": \"2020-07-09T00:00:00\",\n \"confirmed\": 3169373,\n \"deaths\": 135652,\n \"recovered\": 1389481\n },\n {\n \"date\": \"2020-07-10T00:00:00\",\n \"confirmed\": 3238202,\n \"deaths\": 136466,\n \"recovered\": 1422416\n },\n {\n \"date\": \"2020-07-11T00:00:00\",\n \"confirmed\": 3300650,\n \"deaths\": 137146,\n \"recovered\": 1451893\n },\n {\n \"date\": \"2020-07-12T00:00:00\",\n \"confirmed\": 3352439,\n \"deaths\": 137786,\n \"recovered\": 1478369\n },\n {\n 
\"date\": \"2020-07-13T00:00:00\",\n \"confirmed\": 3429613,\n \"deaths\": 138322,\n \"recovered\": 1509937\n },\n {\n \"date\": \"2020-07-14T00:00:00\",\n \"confirmed\": 3492990,\n \"deaths\": 139212,\n \"recovered\": 1559894\n },\n {\n \"date\": \"2020-07-15T00:00:00\",\n \"confirmed\": 3565766,\n \"deaths\": 140236,\n \"recovered\": 1604460\n },\n {\n \"date\": \"2020-07-16T00:00:00\",\n \"confirmed\": 3642585,\n \"deaths\": 141179,\n \"recovered\": 1637461\n },\n {\n \"date\": \"2020-07-17T00:00:00\",\n \"confirmed\": 3708932,\n \"deaths\": 142036,\n \"recovered\": 1697409\n },\n {\n \"date\": \"2020-07-18T00:00:00\",\n \"confirmed\": 3776352,\n \"deaths\": 142841,\n \"recovered\": 1731126\n },\n {\n \"date\": \"2020-07-19T00:00:00\",\n \"confirmed\": 3840443,\n \"deaths\": 143260,\n \"recovered\": 1758065\n },\n {\n \"date\": \"2020-07-20T00:00:00\",\n \"confirmed\": 3901296,\n \"deaths\": 143781,\n \"recovered\": 1805778\n },\n {\n \"date\": \"2020-07-21T00:00:00\",\n \"confirmed\": 3968161,\n \"deaths\": 144879,\n \"recovered\": 1841299\n },\n {\n \"date\": \"2020-07-22T00:00:00\",\n \"confirmed\": 4036805,\n \"deaths\": 145806,\n \"recovered\": 1886392\n },\n {\n \"date\": \"2020-07-23T00:00:00\",\n \"confirmed\": 4106851,\n \"deaths\": 147130,\n \"recovered\": 1931504\n },\n {\n \"date\": \"2020-07-24T00:00:00\",\n \"confirmed\": 4182405,\n \"deaths\": 155973,\n \"recovered\": 1978140\n },\n {\n \"date\": \"2020-07-25T00:00:00\",\n \"confirmed\": 4248812,\n \"deaths\": 156848,\n \"recovered\": 2011669\n },\n {\n \"date\": \"2020-07-26T00:00:00\",\n \"confirmed\": 4301115,\n \"deaths\": 157247,\n \"recovered\": 2038358\n },\n {\n \"date\": \"2020-07-27T00:00:00\",\n \"confirmed\": 4356901,\n \"deaths\": 157554,\n \"recovered\": 2083174\n },\n {\n \"date\": \"2020-07-28T00:00:00\",\n \"confirmed\": 4427183,\n \"deaths\": 160232,\n \"recovered\": 2131080\n },\n {\n \"date\": \"2020-07-29T00:00:00\",\n \"confirmed\": 4495980,\n \"deaths\": 161608,\n \"recovered\": 2188010\n },\n {\n \"date\": \"2020-07-30T00:00:00\",\n \"confirmed\": 4562227,\n \"deaths\": 162878,\n \"recovered\": 2226151\n },\n {\n \"date\": \"2020-07-31T00:00:00\",\n \"confirmed\": 4629847,\n \"deaths\": 164200,\n \"recovered\": 2266172\n },\n {\n \"date\": \"2020-08-01T00:00:00\",\n \"confirmed\": 4688244,\n \"deaths\": 165212,\n \"recovered\": 2299775\n },\n {\n \"date\": \"2020-08-02T00:00:00\",\n \"confirmed\": 4740773,\n \"deaths\": 165585,\n \"recovered\": 2316928\n },\n {\n \"date\": \"2020-08-03T00:00:00\",\n \"confirmed\": 4786980,\n \"deaths\": 166024,\n \"recovered\": 2381819\n },\n {\n \"date\": \"2020-08-04T00:00:00\",\n \"confirmed\": 4844988,\n \"deaths\": 167551,\n \"recovered\": 2415441\n },\n {\n \"date\": \"2020-08-05T00:00:00\",\n \"confirmed\": 4898922,\n \"deaths\": 168756,\n \"recovered\": 2472261\n },\n {\n \"date\": \"2020-08-06T00:00:00\",\n \"confirmed\": 4957593,\n \"deaths\": 169884,\n \"recovered\": 2507721\n },\n {\n \"date\": \"2020-08-07T00:00:00\",\n \"confirmed\": 5018132,\n \"deaths\": 171182,\n \"recovered\": 2546043\n },\n {\n \"date\": \"2020-08-08T00:00:00\",\n \"confirmed\": 5075195,\n \"deaths\": 172153,\n \"recovered\": 2566894\n },\n {\n \"date\": \"2020-08-09T00:00:00\",\n \"confirmed\": 5119701,\n \"deaths\": 172617,\n \"recovered\": 2592970\n },\n {\n \"date\": \"2020-08-10T00:00:00\",\n \"confirmed\": 5170558,\n \"deaths\": 173149,\n \"recovered\": 2642786\n },\n {\n \"date\": \"2020-08-11T00:00:00\",\n \"confirmed\": 5226606,\n \"deaths\": 174556,\n \"recovered\": 
2681385\n },\n {\n \"date\": \"2020-08-12T00:00:00\",\n \"confirmed\": 5279551,\n \"deaths\": 175767,\n \"recovered\": 2723885\n },\n {\n \"date\": \"2020-08-13T00:00:00\",\n \"confirmed\": 5333897,\n \"deaths\": 177465,\n \"recovered\": 2766406\n },\n {\n \"date\": \"2020-08-14T00:00:00\",\n \"confirmed\": 5393253,\n \"deaths\": 178660,\n \"recovered\": 2796278\n },\n {\n \"date\": \"2020-08-15T00:00:00\",\n \"confirmed\": 5456135,\n \"deaths\": 181035,\n \"recovered\": 2796278\n },\n {\n \"date\": \"2020-08-16T00:00:00\",\n \"confirmed\": 5498246,\n \"deaths\": 181543,\n \"recovered\": 2796278\n },\n {\n \"date\": \"2020-08-17T00:00:00\",\n \"confirmed\": 5526153,\n \"deaths\": 181915,\n \"recovered\": 2796278\n },\n {\n \"date\": \"2020-08-18T00:00:00\",\n \"confirmed\": 5581485,\n \"deaths\": 183389,\n \"recovered\": 2796278\n },\n {\n \"date\": \"2020-08-19T00:00:00\",\n \"confirmed\": 5627904,\n \"deaths\": 184812,\n \"recovered\": 2796278\n },\n {\n \"date\": \"2020-08-20T00:00:00\",\n \"confirmed\": 5680098,\n \"deaths\": 185873,\n \"recovered\": 2796278\n },\n {\n \"date\": \"2020-08-21T00:00:00\",\n \"confirmed\": 5728621,\n \"deaths\": 186976,\n \"recovered\": 2796278\n },\n {\n \"date\": \"2020-08-22T00:00:00\",\n \"confirmed\": 5728621,\n \"deaths\": 186976,\n \"recovered\": 2796278\n }\n ],\n \"breakdowns\": [\n {\n \"location\": {\n \"long\": 144.793731,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Guam\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": 13.444304\n },\n \"totalConfirmedCases\": 32,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 0,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -66.590149,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Puerto Rico\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": 18.220833\n },\n \"totalConfirmedCases\": 30281,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 457,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 391,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -64.896335,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"U.S. 
Virgin Islands\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": 18.335765\n },\n \"totalConfirmedCases\": 17,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 0,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 0,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -86.902298,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Alabama\",\n \"county\": null,\n \"isoCode\": \"US-AL\",\n \"lat\": 32.318231\n },\n \"totalConfirmedCases\": 118598,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 2070,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 41523,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -154.493062,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Alaska\",\n \"county\": null,\n \"isoCode\": \"US-AK\",\n \"lat\": 63.588753\n },\n \"totalConfirmedCases\": 5453,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 33,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 1192,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -111.093731,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Arizona\",\n \"county\": null,\n \"isoCode\": \"US-AZ\",\n \"lat\": 34.048928\n },\n \"totalConfirmedCases\": 196927,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 4704,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 27029,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -91.831833,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Arkansas\",\n \"county\": null,\n \"isoCode\": \"US-AR\",\n \"lat\": 35.20105\n },\n \"totalConfirmedCases\": 56682,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 667,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 45446,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -119.417932,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"California\",\n \"county\": null,\n \"isoCode\": \"US-CA\",\n \"lat\": 36.778261\n },\n \"totalConfirmedCases\": 660812,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 12383,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 247019,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -105.782067,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Colorado\",\n \"county\": null,\n \"isoCode\": \"US-CO\",\n \"lat\": 39.550051\n },\n \"totalConfirmedCases\": 54791,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1920,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 20090,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -73.087749,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Connecticut\",\n \"county\": null,\n \"isoCode\": \"US-CT\",\n \"lat\": 41.603221\n },\n \"totalConfirmedCases\": 51540,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 4462,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 30015,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": 139.638,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Cruise Ship: Diamond Princess\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": 35.4437\n },\n \"totalConfirmedCases\": 46,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 0,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 0,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -122.6655,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Cruise Ship: Grand Princess\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": 37.6489\n },\n \"totalConfirmedCases\": 21,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 0,\n 
\"newDeaths\": 0,\n \"totalRecoveredCases\": 0,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -75.52767,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Delaware\",\n \"county\": null,\n \"isoCode\": \"US-DE\",\n \"lat\": 38.910832\n },\n \"totalConfirmedCases\": 16770,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 600,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 8613,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -77.0369,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"District of Columbia\",\n \"county\": null,\n \"isoCode\": \"US-DC\",\n \"lat\": 38.9072\n },\n \"totalConfirmedCases\": 13469,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 602,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 10416,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -81.515754,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Florida\",\n \"county\": null,\n \"isoCode\": \"US-FL\",\n \"lat\": 27.664827\n },\n \"totalConfirmedCases\": 593291,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 17598,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 53305,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -82.907123,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Georgia\",\n \"county\": null,\n \"isoCode\": \"US-GA\",\n \"lat\": 32.157435\n },\n \"totalConfirmedCases\": 250272,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 5003,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 37946,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -155.665857,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Hawaii\",\n \"county\": null,\n \"isoCode\": \"US-HI\",\n \"lat\": 19.898682\n },\n \"totalConfirmedCases\": 6075,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 46,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 1756,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -114.742041,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Idaho\",\n \"county\": null,\n \"isoCode\": \"US-ID\",\n \"lat\": 44.068202\n },\n \"totalConfirmedCases\": 29515,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 307,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 10369,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -89.398528,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Illinois\",\n \"county\": null,\n \"isoCode\": \"US-IL\",\n \"lat\": 40.633125\n },\n \"totalConfirmedCases\": 218439,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 8100,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 146699,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -85.602364,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Indiana\",\n \"county\": null,\n \"isoCode\": \"US-IN\",\n \"lat\": 40.551217\n },\n \"totalConfirmedCases\": 92776,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 3407,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 59073,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -93.097702,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Iowa\",\n \"county\": null,\n \"isoCode\": \"US-IA\",\n \"lat\": 41.878003\n },\n \"totalConfirmedCases\": 55617,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1032,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 39943,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -98.484246,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": 
\"Kansas\",\n \"county\": null,\n \"isoCode\": \"US-KS\",\n \"lat\": 39.011902\n },\n \"totalConfirmedCases\": 37677,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 438,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 21063,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -84.270018,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Kentucky\",\n \"county\": null,\n \"isoCode\": \"US-KY\",\n \"lat\": 37.839333\n },\n \"totalConfirmedCases\": 44445,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 890,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 9021,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -92.145024,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Louisiana\",\n \"county\": null,\n \"isoCode\": \"US-LA\",\n \"lat\": 31.244823\n },\n \"totalConfirmedCases\": 142085,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 4693,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 103512,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -69.445469,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Maine\",\n \"county\": null,\n \"isoCode\": \"US-ME\",\n \"lat\": 45.253783\n },\n \"totalConfirmedCases\": 4293,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 129,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 3604,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -76.641271,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Maryland\",\n \"county\": null,\n \"isoCode\": \"US-MD\",\n \"lat\": 39.045755\n },\n \"totalConfirmedCases\": 102312,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 3669,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 5986,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -71.382437,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Massachusetts\",\n \"county\": null,\n \"isoCode\": \"US-MA\",\n \"lat\": 42.407211\n },\n \"totalConfirmedCases\": 133705,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 9028,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 100486,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -85.602364,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Michigan\",\n \"county\": null,\n \"isoCode\": \"US-MI\",\n \"lat\": 44.314844\n },\n \"totalConfirmedCases\": 104651,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 6648,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 63636,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -94.6859,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Minnesota\",\n \"county\": null,\n \"isoCode\": \"US-MN\",\n \"lat\": 46.729553\n },\n \"totalConfirmedCases\": 68172,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1803,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 56659,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -89.398528,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Mississippi\",\n \"county\": null,\n \"isoCode\": \"US-MS\",\n \"lat\": 32.354668\n },\n \"totalConfirmedCases\": 76323,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 2215,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 49836,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -91.831833,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Missouri\",\n \"county\": null,\n \"isoCode\": \"US-MO\",\n \"lat\": 37.964253\n },\n \"totalConfirmedCases\": 73856,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1521,\n 
\"newDeaths\": 0,\n \"totalRecoveredCases\": 9997,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -110.362566,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Montana\",\n \"county\": null,\n \"isoCode\": \"US-MT\",\n \"lat\": 46.879682\n },\n \"totalConfirmedCases\": 6247,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 90,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 4011,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -99.901813,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Nebraska\",\n \"county\": null,\n \"isoCode\": \"US-NE\",\n \"lat\": 41.492537\n },\n \"totalConfirmedCases\": 31768,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 384,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 22004,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -116.419389,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Nevada\",\n \"county\": null,\n \"isoCode\": \"US-NV\",\n \"lat\": 38.80261\n },\n \"totalConfirmedCases\": 64604,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1193,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 26011,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -71.572395,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"New Hampshire\",\n \"county\": null,\n \"isoCode\": \"US-NH\",\n \"lat\": 43.193852\n },\n \"totalConfirmedCases\": 7073,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 428,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 6218,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -74.405661,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"New Jersey\",\n \"county\": null,\n \"isoCode\": \"US-NJ\",\n \"lat\": 40.058324\n },\n \"totalConfirmedCases\": 194738,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 16012,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 155415,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -105.032363,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"New Mexico\",\n \"county\": null,\n \"isoCode\": \"US-NM\",\n \"lat\": 34.97273\n },\n \"totalConfirmedCases\": 25872,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 742,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 10182,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -74.217933,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"New York\",\n \"county\": null,\n \"isoCode\": \"US-NY\",\n \"lat\": 43.299428\n },\n \"totalConfirmedCases\": 458462,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 33648,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 350527,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -79.0193,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"North Carolina\",\n \"county\": null,\n \"isoCode\": \"US-NC\",\n \"lat\": 35.759573\n },\n \"totalConfirmedCases\": 152478,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 2544,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 116969,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -101.002012,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"North Dakota\",\n \"county\": null,\n \"isoCode\": \"US-ND\",\n \"lat\": 47.551493\n },\n \"totalConfirmedCases\": 9477,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 132,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 7066,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": 145.75,\n \"countryOrRegion\": \"United States\",\n 
\"provinceOrState\": \"Northern Mariana Islands\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": 15.183333\n },\n \"totalConfirmedCases\": 68,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 4,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 11,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -82.907123,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Ohio\",\n \"county\": null,\n \"isoCode\": \"US-OH\",\n \"lat\": 40.417287\n },\n \"totalConfirmedCases\": 113097,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 3961,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 84904,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -97.092877,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Oklahoma\",\n \"county\": null,\n \"isoCode\": \"US-OK\",\n \"lat\": 35.007752\n },\n \"totalConfirmedCases\": 51763,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 717,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 39282,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -120.554201,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Oregon\",\n \"county\": null,\n \"isoCode\": \"US-OR\",\n \"lat\": 43.804133\n },\n \"totalConfirmedCases\": 24437,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 421,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 4355,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -77.194525,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Pennsylvania\",\n \"county\": null,\n \"isoCode\": \"US-PA\",\n \"lat\": 41.203322\n },\n \"totalConfirmedCases\": 132461,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 7650,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 95901,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -71.477429,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Rhode Island\",\n \"county\": null,\n \"isoCode\": \"US-RI\",\n \"lat\": 41.580095\n },\n \"totalConfirmedCases\": 21289,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 2045,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 1953,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -81.163725,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"South Carolina\",\n \"county\": null,\n \"isoCode\": \"US-SC\",\n \"lat\": 33.836081\n },\n \"totalConfirmedCases\": 111610,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 2462,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 42730,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -99.901813,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"South Dakota\",\n \"county\": null,\n \"isoCode\": \"US-SD\",\n \"lat\": 43.969515\n },\n \"totalConfirmedCases\": 10890,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 158,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 8773,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -86.580447,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Tennessee\",\n \"county\": null,\n \"isoCode\": \"US-TN\",\n \"lat\": 35.517491\n },\n \"totalConfirmedCases\": 144076,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1782,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 91323,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -99.901813,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Texas\",\n \"county\": null,\n \"isoCode\": \"US-TX\",\n \"lat\": 31.968599\n },\n \"totalConfirmedCases\": 597436,\n \"newlyConfirmedCases\": 0,\n 
\"totalDeaths\": 11509,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 383717,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -111.093731,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Utah\",\n \"county\": null,\n \"isoCode\": \"US-UT\",\n \"lat\": 39.32098\n },\n \"totalConfirmedCases\": 48983,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 392,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 36679,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -72.577841,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Vermont\",\n \"county\": null,\n \"isoCode\": \"US-VT\",\n \"lat\": 44.558803\n },\n \"totalConfirmedCases\": 1541,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 59,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 1321,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -78.656894,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Virginia\",\n \"county\": null,\n \"isoCode\": \"US-VA\",\n \"lat\": 37.431573\n },\n \"totalConfirmedCases\": 110805,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 2440,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 13477,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -120.740139,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Washington\",\n \"county\": null,\n \"isoCode\": \"US-WA\",\n \"lat\": 47.751074\n },\n \"totalConfirmedCases\": 73046,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1944,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 23404,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -80.454903,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"West Virginia\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": 38.597626\n },\n \"totalConfirmedCases\": 9093,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 172,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 6144,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -88.787868,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Wisconsin\",\n \"county\": null,\n \"isoCode\": \"US-WI\",\n \"lat\": 43.78444\n },\n \"totalConfirmedCases\": 69341,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 1077,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 54181,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": -107.290284,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Wyoming\",\n \"county\": null,\n \"isoCode\": \"US-WY\",\n \"lat\": 43.075968\n },\n \"totalConfirmedCases\": 3528,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 61,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 2641,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": null,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Veteran Affairs\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": null\n },\n \"totalConfirmedCases\": 6692,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 413,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 0,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": null,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"US Military\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": null\n },\n \"totalConfirmedCases\": 6213,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 26,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 1947,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": null,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Navajo 
Nation\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": null\n },\n \"totalConfirmedCases\": 1540,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 58,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 0,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": null,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Federal Prisons\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": null\n },\n \"totalConfirmedCases\": 1047,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 26,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 507,\n \"newlyRecoveredCases\": 0\n },\n {\n \"location\": {\n \"long\": null,\n \"countryOrRegion\": \"United States\",\n \"provinceOrState\": \"Wuhan Repatriated\",\n \"county\": null,\n \"isoCode\": null,\n \"lat\": null\n },\n \"totalConfirmedCases\": 3,\n \"newlyConfirmedCases\": 0,\n \"totalDeaths\": 0,\n \"newDeaths\": 0,\n \"totalRecoveredCases\": 0,\n \"newlyRecoveredCases\": 0\n }\n ]\n }\n}\n" ] ], [ [ "#### Individual States US", "_____no_output_____" ] ], [ [ "# overview of one state\nUS_dict['stats']['breakdowns'][0]", "_____no_output_____" ], [ "# retrive data for each state into list\nfull_list_US_country=[]\nfor pos,each_dict in enumerate (US_dict['stats']['breakdowns'][:]):\n flatten_dict=each_dict['location']\n flatten_dict.update(dict(list(US_dict['stats']['breakdowns'][pos].items())[1: 7]) \n )\n full_list_US_country.append(flatten_dict)\n\n# convert list into DataFrame\npd_list_US_country = pd.DataFrame(full_list_US_country)\npd_list_US_country.head()", "_____no_output_____" ], [ "# save data into csv file\ndirectory = '../data/raw/SMARTABLE'\nif not os.path.exists(directory):\n os.mkdir(directory)\n \npd_list_US_country.to_csv('../data/raw/SMARTABLE/full_list_US_country.csv',sep=';',index=False)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e727b19af2db8c1c068af86a0551f1a3878c17a8
22,833
ipynb
Jupyter Notebook
Learning_Exceptions.ipynb
RuslanBrilenkov/Python_Tutorials
010d02c06e31c4d7747656e538c738e34267b4f3
[ "MIT" ]
null
null
null
Learning_Exceptions.ipynb
RuslanBrilenkov/Python_Tutorials
010d02c06e31c4d7747656e538c738e34267b4f3
[ "MIT" ]
null
null
null
Learning_Exceptions.ipynb
RuslanBrilenkov/Python_Tutorials
010d02c06e31c4d7747656e538c738e34267b4f3
[ "MIT" ]
null
null
null
35.95748
932
0.569746
[ [ [ "# Author: Ruslan Brilenkov\n## References: _docs.python.org_ and _realpython.com_", "_____no_output_____" ], [ "# In this notebook, I will go through the basics of errors and exceptions in Python", "_____no_output_____" ], [ "### There are (at least) three distinguishable kinds of errors: _name errors_ , _syntax errors_ and _exceptions_.", "_____no_output_____" ], [ "# _Name Errors_\n\n## Probably, the first most common type of errors, especially for the beginners.\n### Here are few examples of code which will give the Name Error:", "_____no_output_____" ] ], [ [ "print('Hello world') # --> does not give an error", "Hello world\n" ], [ "grint('Hello world') # --> gives a Syntax error because there is no function called \"grint\"", "_____no_output_____" ] ], [ [ "# _Syntax Errors_\n\n## Probably, the second most common type of errors.\n### Here are few examples of code which will give the Syntax Error:", "_____no_output_____" ] ], [ [ "print('Hello world' # suppose, you forgot to close parentheses", "_____no_output_____" ], [ "print('Hello world) # or you did not close the quote", "_____no_output_____" ] ], [ [ "## The parser repeats the offending line and displays a little 'arrow' pointing at the earliest point in the line where the error was detected. \n## It can give a clue about what kind of error it is and where it happened", "_____no_output_____" ], [ "# _Exceptions_\n\n## Even if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. \n\n## _Errors detected during execution are called exceptions_ and are not unconditionally fatal: we will learn how to handle them in Python programs. Most exceptions are not handled by programs.\n\n## Let's produce several exceptions, namely, ZeroDivisionError, NameError and TypeError.", "_____no_output_____" ] ], [ [ "10/0 # gives an error, because we are not allowed to divide by zero", "_____no_output_____" ], [ "10 + undefined_variable # --> gives an error because we are using an undefined variable", "_____no_output_____" ], [ "'1' + 4 # --> gives an error because we cannot perform such an poeration on quantities with different types\n# in this case, string and integer", "_____no_output_____" ] ], [ [ "# So, how to handle exceptions?", "_____no_output_____" ], [ "## When you run a block of code where you think an exception/error might occur, the way to handle it is to use a _try-catch_ block.\n\n## In simple words, after the word _try:_ we run a code. If it fails, we either raise or catch an exception.\n\n$$\ntry:\n ...\n #run the code which might have some problem/error/excpetion\nexcept:\n ...\n #what to do in case of the exception\n$$\n\n### Let's demonstrate it by calling a simple function of division one variable by another. 
\n\n### Let's demonstrate it by calling a simple function that divides one variable by another. We will on purpose _divide a number by zero_ to create an error - and we will catch it", "_____no_output_____" ] ], [ [ "\n# defining a division function\ndef div(x, y):\n result = x / y\n return result", "_____no_output_____" ], [ "# calling the division function without any error\ndiv(5, 6)", "_____no_output_____" ], [ "# Making an error on purpose by dividing by zero\ndiv(5, 0)", "_____no_output_____" ] ], [ [ "## We can modify the code to catch the error with a _try-catch_ block.", "_____no_output_____" ] ], [ [ "# defining a division function (with modification)\ndef div(x, y):\n try:\n result = x / y\n except:\n print('exception happened')\n result = None\n \n return result", "_____no_output_____" ], [ "div(5, 0)", "exception happened\n" ] ], [ [ "# Important!\n# If you are using a try-catch block, _you should always provide some information about an exception_ in case the exception happens.\n\n## It is possible to use the \`pass\` statement to simply skip the block. The program then continues executing the next lines of code without giving any notice or message to the user.\n\n```python\ntry:\n # run the code which might have some problem/error/exception\n ...\nexcept:\n pass\n```\n\n## However, this is a _terrible practice_, since neither you, nor the user, nor the debugger of the program can see that something went wrong in case it did.\n\n## _Be careful with skipping (passing) blocks of code, especially those which can indicate an error!_\n\n# _Solution: be specific!_", "_____no_output_____" ] ], [ [ "def div(x, y):\n try:\n result = x / y\n except:\n print('Division by zero is not allowed! - my message')\n result = None\n \n return result\n\ndiv(5, 0)", "Division by zero is not allowed! - my message\n" ] ], [ [ "# Using only the bare _except:_ keyword catches all kinds of exceptions. \n## _There are plenty of different types of exceptions._ If you expect that a specific part of the code may give some error, it might be wise to catch it specifically. \n\n## For a dedicated reader, here you can check all sorts of [Built-in Exceptions](https://docs.python.org/3/library/exceptions.html)", "_____no_output_____" ] ], [ [ "# To catch all of the exceptions and print the error message, use the Exception keyword.\ndef div(x, y):\n try:\n result = x / y\n except Exception as e:\n print(e)\n print('Division by zero is not allowed! - my message')\n result = None\n \n return result\n# calling the function in a way that raises an error\ndiv(5, 0)", "division by zero\nDivision by zero is not allowed! - my message\n" ] ], [ [ "## Or you might be really specific about the exception which occurred", "_____no_output_____" ] ], [ [ "# To catch specific exceptions, list a separate except clause for each one.\ndef div(x, y):\n result = None\n try:\n result = x / y\n except ZeroDivisionError as e:\n print(e)\n except AttributeError as e:\n print(e)\n except IndexError as e:\n print(e)\n # ...\n # etc. ... as many exceptions as you would like to catch ... \n return result\n\n# calling the function in a way that raises an error\ndiv(5, 0) # --> raises ZeroDivisionError", "division by zero\n" ] ], [ [ "# What about checking the code by raising the error/exception manually?\n\n## The _raise_ statement allows the programmer _to force a specified exception to occur_. For example:", "_____no_output_____" ] ], [ [ "raise NameError('HiThere')", "_____no_output_____" ] ], [ [ "## You have to provide either an exception instance or an exception class (a class that derives from Exception).
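\n\n## For example, a user-defined exception is simply a class deriving from \`Exception\` (a minimal sketch; \`MyCustomError\` is a hypothetical name):\n\n```python\nclass MyCustomError(Exception):\n '''A hypothetical user-defined exception.'''\n\nraise MyCustomError('something specific went wrong')\n```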
\n\n## To instantiate (call) an exception class, simply call its constructor without arguments, such as:", "_____no_output_____" ] ], [ [ "raise NameError", "_____no_output_____" ] ], [ [ "## It is possible to re-raise an exception if we do not want to handle it but simply want to know whether it was raised", "_____no_output_____" ] ], [ [ "try:\n raise NameError('HiThere')\nexcept NameError:\n print('An exception walked by!')\n raise\n", "An exception walked by!\n" ] ], [ [ "# What if we want some code to run despite the possible exceptions?\n\n## Then, my dear sir, this part of the tutorial is just for you! \n## The try statement has another optional clause which is intended to define clean-up actions that must _be executed under all circumstances_: the _finally:_ clause. It will be executed right before the try statement completes. For example:", "_____no_output_____" ] ], [ [ "try:\n raise KeyboardInterrupt # or any other possible exception/error\nfinally:\n print('Goodbye, world!')", "Goodbye, world!\n" ] ], [ [ "## As we can see above, the _finally:_ clause was executed even though an exception was raised. Note that the exception itself is re-raised after the clean-up code runs, so the failure is not silently swallowed.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e727b4f3cf4d5865d3ce92a574d1f9faeec0f6a8
18,422
ipynb
Jupyter Notebook
01_presentacion/01_presentacion.ipynb
sebastiandres/mat281_modulo05
eae310892d93ccbdd20f05a579c920e71868434d
[ "MIT" ]
null
null
null
01_presentacion/01_presentacion.ipynb
sebastiandres/mat281_modulo05
eae310892d93ccbdd20f05a579c920e71868434d
[ "MIT" ]
null
null
null
01_presentacion/01_presentacion.ipynb
sebastiandres/mat281_modulo05
eae310892d93ccbdd20f05a579c920e71868434d
[ "MIT" ]
null
null
null
35.223709
387
0.439312
[ [ [ "# Configuracion para recargar módulos y librerías \n%reload_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "# MAT281\n\n## Aplicaciones de la Matemática en la Ingeniería\n\nPuedes ejecutar este jupyter notebook de manera interactiva:\n\n[![Binder](../shared/images/jupyter_binder.png)](https://mybinder.org/v2/gh/sebastiandres/mat281_m05_proyectos/master?filepath=01_presentacion/01_presentacion.ipynb)\n\n[![Colab](../shared/images/jupyter_colab.png)](https://colab.research.google.com/github/sebastiandres/mat281_m05_proyectos/blob/master//01_presentacion/01_presentacion.ipynb)", "_____no_output_____" ], [ "## Proyecto de Data Science\n### Reglas\n\nEl siguiente proyecto debe realizarse de manera individual. No es aceptable bajo ninguna circunstancia copiar y pegar código de otros estudiantes. Pueden consultar referencias en internet (stackoverflow, por ejemplo), pero deben indicar sus fuentes cuando corresponda y evitar copiar y pegar texto y código. Todo el código debe ser ejecutable y las fórmulas en markdown y latex.\n\nNombre: [COLOCAR NOMBRE ACA]\n\nRol: [COLOCAR ROL ACA]", "_____no_output_____" ], [ "### Fechas\n\nEl proyecto considera tres clases (14, 18 y 21 de diciembre, 2018) en las cuales el estudiante debe utilizar los conocimientos y herramientas aprendidos en el curso. \n\n__Al final de cada clase__, el estudiante debe enviar un archivo zip nombrado como __proyecto\\_mat281\\_{YYYYMMDD}\\_{NOMBRE_ESTUDIANTE}.zip__ que contenga el .ipynb y la carpeta con los datos. Como siempre, debe ser enviado a [email protected] y [email protected].", "_____no_output_____" ], [ "### Cada vez más cerca del mundo real\n\nEl proyecto consiste de tres fases:\n* __Preprocesamiento__\n* __Modelamiento__\n* __Visualizaciones__\n\nSe dispone de un set de datos común para todos, correspondiente a datos de estudiantes en dos escuelas portuguesas [(fuente)](https://archive.ics.uci.edu/ml/datasets/student+performance). 
", "_____no_output_____" ], [ "## Objectives of the first class\n\nThe student must analyze the data in the \`student-por.csv\` file, perform data analysis and pre-processing, and finally deliver a modeling proposal for these data.\n\nFor example, predicting variable X using algorithm Z.\n\n__Important:__ Ideally, the modeling should not be repeated among MAT281 students.", "_____no_output_____" ] ], [ [ "import os\nimport numpy as np\nimport pandas as pd\nimport altair as alt\n\npd.set_option(\"display.max_columns\", 999)", "_____no_output_____" ], [ "stud_por = pd.read_csv(os.path.join(\"data\", \"student-por.csv\"), sep=\";\")\nstud_por.head()", "_____no_output_____" ] ], [ [ "To see the description of the attributes we can use the now-classic magic cell with \`bash\` commands.", "_____no_output_____" ] ], [ [ "%%bash\ncat data/student.txt", "# Attributes for both student-mat.csv (Math course) and student-por.csv (Portuguese language course) datasets:\n1 school - student's school (binary: \"GP\" - Gabriel Pereira or \"MS\" - Mousinho da Silveira)\n2 sex - student's sex (binary: \"F\" - female or \"M\" - male)\n3 age - student's age (numeric: from 15 to 22)\n4 address - student's home address type (binary: \"U\" - urban or \"R\" - rural)\n5 famsize - family size (binary: \"LE3\" - less or equal to 3 or \"GT3\" - greater than 3)\n6 Pstatus - parent's cohabitation status (binary: \"T\" - living together or \"A\" - apart)\n7 Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education)\n8 Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education)\n9 Mjob - mother's job (nominal: \"teacher\", \"health\" care related, civil \"services\" (e.g. administrative or police), \"at_home\" or \"other\")\n10 Fjob - father's job (nominal: \"teacher\", \"health\" care related, civil \"services\" (e.g. administrative or police), \"at_home\" or \"other\")\n11 reason - reason to choose this school (nominal: close to \"home\", school \"reputation\", \"course\" preference or \"other\")\n12 guardian - student's guardian (nominal: \"mother\", \"father\" or \"other\")\n13 traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. 
to 1 hour, or 4 - >1 hour)\n14 studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)\n15 failures - number of past class failures (numeric: n if 1<=n<3, else 4)\n16 schoolsup - extra educational support (binary: yes or no)\n17 famsup - family educational support (binary: yes or no)\n18 paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)\n19 activities - extra-curricular activities (binary: yes or no)\n20 nursery - attended nursery school (binary: yes or no)\n21 higher - wants to take higher education (binary: yes or no)\n22 internet - Internet access at home (binary: yes or no)\n23 romantic - with a romantic relationship (binary: yes or no)\n24 famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)\n25 freetime - free time after school (numeric: from 1 - very low to 5 - very high)\n26 goout - going out with friends (numeric: from 1 - very low to 5 - very high)\n27 Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)\n28 Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)\n29 health - current health status (numeric: from 1 - very bad to 5 - very good)\n30 absences - number of school absences (numeric: from 0 to 93)\n\n# these grades are related with the course subject, Math or Portuguese:\n31 G1 - first period grade (numeric: from 0 to 20)\n31 G2 - second period grade (numeric: from 0 to 20)\n32 G3 - final grade (numeric: from 0 to 20, output target)\n\nAdditional note: there are several (382) students that belong to both datasets . \nThese students can be identified by searching for identical attributes\nthat characterize each student, as shown in the annexed R file.\n" ] ], [ [ "## Análisis de Datos", "_____no_output_____" ], [ "## Pre-procesamiento", "_____no_output_____" ], [ "## Propuesto de Modelamiento", "_____no_output_____" ], [ "A partir de los datos se propone ...", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
e727d9452dedbc61405c2829119ffdb9285182c6
1,996
ipynb
Jupyter Notebook
Untitled1.ipynb
norazahwahab/python-training
04baac8c583489ca5e82ca8b97d886a5396d67f2
[ "Apache-2.0" ]
null
null
null
Untitled1.ipynb
norazahwahab/python-training
04baac8c583489ca5e82ca8b97d886a5396d67f2
[ "Apache-2.0" ]
null
null
null
Untitled1.ipynb
norazahwahab/python-training
04baac8c583489ca5e82ca8b97d886a5396d67f2
[ "Apache-2.0" ]
null
null
null
32.193548
742
0.568136
[ [ [ "L=[4,6,8,9,11]\nPrint(L[-1])", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
e727e728f1e6f0fcdd91445af5701331c7830fc4
145,406
ipynb
Jupyter Notebook
.ipynb_checkpoints/PSSMCluster-checkpoint.ipynb
JudoWill/ResearchNotebooks
35796f7ef07361eb2926c8770e623f4e9d48ab96
[ "MIT" ]
1
2019-02-03T03:45:29.000Z
2019-02-03T03:45:29.000Z
PSSMCluster.ipynb
JudoWill/ResearchNotebooks
35796f7ef07361eb2926c8770e623f4e9d48ab96
[ "MIT" ]
null
null
null
PSSMCluster.ipynb
JudoWill/ResearchNotebooks
35796f7ef07361eb2926c8770e623f4e9d48ab96
[ "MIT" ]
null
null
null
167.711649
55,739
0.844477
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e727f0204afd60144725c9f4fcd94057b9184f20
20,048
ipynb
Jupyter Notebook
examples/nlp/ipynb/text_generation_fnet.ipynb
wariua/keras-io-ko
b89fa9c34af006aa3584dd765fe78f36374246a7
[ "Apache-2.0" ]
1,542
2020-05-06T20:23:07.000Z
2022-03-31T15:25:03.000Z
examples/nlp/ipynb/text_generation_fnet.ipynb
wariua/keras-io-ko
b89fa9c34af006aa3584dd765fe78f36374246a7
[ "Apache-2.0" ]
625
2020-05-07T10:21:15.000Z
2022-03-31T17:19:35.000Z
examples/nlp/ipynb/text_generation_fnet.ipynb
wariua/keras-io-ko
b89fa9c34af006aa3584dd765fe78f36374246a7
[ "Apache-2.0" ]
1,616
2020-05-07T06:28:33.000Z
2022-03-31T13:35:35.000Z
35.736185
157
0.567538
[ [ [ "# Text Generation using FNet\n\n**Author:** [Darshan Deshpande](https://twitter.com/getdarshan)<br>\n**Date created:** 2021/10/05<br>\n**Last modified:** 2021/10/05<br>\n**Description:** FNet transformer for text generation in Keras.", "_____no_output_____" ], [ "## Introduction\n\nThe original transformer implementation (Vaswani et al., 2017) was one of the major\nbreakthroughs in Natural Language Processing, giving rise to important architectures such BERT and GPT.\nHowever, the drawback of these architectures is\nthat the self-attention mechanism they use is computationally expensive. The FNet\narchitecture proposes to replace this self-attention attention with a leaner mechanism:\na Fourier transformation-based linear mixer for input tokens.\n\nThe FNet model was able to achieve 92-97% of BERT's accuracy while training 80% faster on\nGPUs and almost 70% faster on TPUs. This type of design provides an efficient and small\nmodel size, leading to faster inference times.\n\nIn this example, we will implement and train this architecture on the Cornell Movie\nDialog corpus to show the applicability of this model to text generation.", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nimport os\nimport re\n\n# Defining hyperparameters\n\nVOCAB_SIZE = 8192\nMAX_SAMPLES = 50000\nBUFFER_SIZE = 20000\nMAX_LENGTH = 40\nEMBED_DIM = 256\nLATENT_DIM = 512\nNUM_HEADS = 8\nBATCH_SIZE = 64", "_____no_output_____" ] ], [ [ "## Loading data\n\nWe will be using the Cornell Dialog Corpus. We will parse the movie conversations into\nquestions and answers sets.", "_____no_output_____" ] ], [ [ "path_to_zip = keras.utils.get_file(\n \"cornell_movie_dialogs.zip\",\n origin=\"http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip\",\n extract=True,\n)\n\npath_to_dataset = os.path.join(\n os.path.dirname(path_to_zip), \"cornell movie-dialogs corpus\"\n)\npath_to_movie_lines = os.path.join(path_to_dataset, \"movie_lines.txt\")\npath_to_movie_conversations = os.path.join(path_to_dataset, \"movie_conversations.txt\")\n\n\ndef load_conversations():\n # Helper function for loading the conversation splits\n id2line = {}\n with open(path_to_movie_lines, errors=\"ignore\") as file:\n lines = file.readlines()\n for line in lines:\n parts = line.replace(\"\\n\", \"\").split(\" +++$+++ \")\n id2line[parts[0]] = parts[4]\n\n inputs, outputs = [], []\n with open(path_to_movie_conversations, \"r\") as file:\n lines = file.readlines()\n for line in lines:\n parts = line.replace(\"\\n\", \"\").split(\" +++$+++ \")\n # get conversation in a list of line ID\n conversation = [line[1:-1] for line in parts[3][1:-1].split(\", \")]\n for i in range(len(conversation) - 1):\n inputs.append(id2line[conversation[i]])\n outputs.append(id2line[conversation[i + 1]])\n if len(inputs) >= MAX_SAMPLES:\n return inputs, outputs\n return inputs, outputs\n\n\nquestions, answers = load_conversations()\n\n# Splitting training and validation sets\n\ntrain_dataset = tf.data.Dataset.from_tensor_slices((questions[:40000], answers[:40000]))\nval_dataset = tf.data.Dataset.from_tensor_slices((questions[40000:], answers[40000:]))", "_____no_output_____" ] ], [ [ "### Preprocessing and Tokenization", "_____no_output_____" ] ], [ [ "\ndef preprocess_text(sentence):\n sentence = tf.strings.lower(sentence)\n # Adding a space between the punctuation and the last word to allow better tokenization\n sentence = 
tf.strings.regex_replace(sentence, r\"([?.!,])\", r\" \\1 \")\n # Replacing multiple continuous spaces with a single space\n sentence = tf.strings.regex_replace(sentence, r\"\\s\\s+\", \" \")\n # Replacing non english words with spaces\n sentence = tf.strings.regex_replace(sentence, r\"[^a-z?.!,]+\", \" \")\n sentence = tf.strings.strip(sentence)\n sentence = tf.strings.join([\"[start]\", sentence, \"[end]\"], separator=\" \")\n return sentence\n\n\nvectorizer = layers.TextVectorization(\n VOCAB_SIZE,\n standardize=preprocess_text,\n output_mode=\"int\",\n output_sequence_length=MAX_LENGTH,\n)\n\n# We will adapt the vectorizer to both the questions and answers\n# This dataset is batched to parallelize and speed up the process\nvectorizer.adapt(tf.data.Dataset.from_tensor_slices((questions + answers)).batch(128))", "_____no_output_____" ] ], [ [ "### Tokenizing and padding sentences using `TextVectorization`", "_____no_output_____" ] ], [ [ "\ndef vectorize_text(inputs, outputs):\n inputs, outputs = vectorizer(inputs), vectorizer(outputs)\n # One extra padding token to the right to match the output shape\n outputs = tf.pad(outputs, [[0, 1]])\n return (\n {\"encoder_inputs\": inputs, \"decoder_inputs\": outputs[:-1]},\n {\"outputs\": outputs[1:]},\n )\n\n\ntrain_dataset = train_dataset.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)\nval_dataset = val_dataset.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)\n\ntrain_dataset = (\n train_dataset.cache()\n .shuffle(BUFFER_SIZE)\n .batch(BATCH_SIZE)\n .prefetch(tf.data.AUTOTUNE)\n)\nval_dataset = val_dataset.cache().batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)", "_____no_output_____" ] ], [ [ "## Creating the FNet Encoder\n\nThe FNet paper proposes a replacement for the standard attention mechanism used by the\nTransformer architecture (Vaswani et al., 2017).\n\n![Architecture](https://i.imgur.com/rLg47qU.png)\n\nThe outputs of the FFT layer are complex numbers. 
To avoid dealing with complex layers,\nonly the real part (the magnitude) is extracted.\n\nThe dense layers that follow the Fourier transformation act as convolutions applied on\nthe frequency domain.", "_____no_output_____" ] ], [ [ "\nclass FNetEncoder(layers.Layer):\n def __init__(self, embed_dim, dense_dim, **kwargs):\n super(FNetEncoder, self).__init__(**kwargs)\n self.embed_dim = embed_dim\n self.dense_dim = dense_dim\n self.dense_proj = keras.Sequential(\n [\n layers.Dense(dense_dim, activation=\"relu\"),\n layers.Dense(embed_dim),\n ]\n )\n self.layernorm_1 = layers.LayerNormalization()\n self.layernorm_2 = layers.LayerNormalization()\n\n def call(self, inputs):\n # Casting the inputs to complex64\n inp_complex = tf.cast(inputs, tf.complex64)\n # Projecting the inputs to the frequency domain using FFT2D and\n # extracting the real part of the output\n fft = tf.math.real(tf.signal.fft2d(inp_complex))\n proj_input = self.layernorm_1(inputs + fft)\n proj_output = self.dense_proj(proj_input)\n return self.layernorm_2(proj_input + proj_output)\n", "_____no_output_____" ] ], [ [ "## Creating the Decoder\n\nThe decoder architecture remains the same as the one proposed by (Vaswani et al., 2017)\nin the original transformer architecture, consisting of an embedding, positional\nencoding, two masked multihead attention layers and finally the dense output layers.\nThe architecture that follows is taken from\n[Deep Learning with Python, second edition, chapter 11](https://www.manning.com/books/deep-learning-with-python-second-edition).", "_____no_output_____" ] ], [ [ "\nclass PositionalEmbedding(layers.Layer):\n def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs):\n super(PositionalEmbedding, self).__init__(**kwargs)\n self.token_embeddings = layers.Embedding(\n input_dim=vocab_size, output_dim=embed_dim\n )\n self.position_embeddings = layers.Embedding(\n input_dim=sequence_length, output_dim=embed_dim\n )\n self.sequence_length = sequence_length\n self.vocab_size = vocab_size\n self.embed_dim = embed_dim\n\n def call(self, inputs):\n length = tf.shape(inputs)[-1]\n positions = tf.range(start=0, limit=length, delta=1)\n embedded_tokens = self.token_embeddings(inputs)\n embedded_positions = self.position_embeddings(positions)\n return embedded_tokens + embedded_positions\n\n def compute_mask(self, inputs, mask=None):\n return tf.math.not_equal(inputs, 0)\n\n\nclass FNetDecoder(layers.Layer):\n def __init__(self, embed_dim, latent_dim, num_heads, **kwargs):\n super(FNetDecoder, self).__init__(**kwargs)\n self.embed_dim = embed_dim\n self.latent_dim = latent_dim\n self.num_heads = num_heads\n self.attention_1 = layers.MultiHeadAttention(\n num_heads=num_heads, key_dim=embed_dim\n )\n self.attention_2 = layers.MultiHeadAttention(\n num_heads=num_heads, key_dim=embed_dim\n )\n self.dense_proj = keras.Sequential(\n [\n layers.Dense(latent_dim, activation=\"relu\"),\n layers.Dense(embed_dim),\n ]\n )\n self.layernorm_1 = layers.LayerNormalization()\n self.layernorm_2 = layers.LayerNormalization()\n self.layernorm_3 = layers.LayerNormalization()\n self.supports_masking = True\n\n def call(self, inputs, encoder_outputs, mask=None):\n causal_mask = self.get_causal_attention_mask(inputs)\n if mask is not None:\n padding_mask = tf.cast(mask[:, tf.newaxis, :], dtype=\"int32\")\n padding_mask = tf.minimum(padding_mask, causal_mask)\n\n attention_output_1 = self.attention_1(\n query=inputs, value=inputs, key=inputs, attention_mask=causal_mask\n )\n out_1 = self.layernorm_1(inputs + 
attention_output_1)\n\n attention_output_2 = self.attention_2(\n query=out_1,\n value=encoder_outputs,\n key=encoder_outputs,\n attention_mask=padding_mask,\n )\n out_2 = self.layernorm_2(out_1 + attention_output_2)\n\n proj_output = self.dense_proj(out_2)\n return self.layernorm_3(out_2 + proj_output)\n\n def get_causal_attention_mask(self, inputs):\n input_shape = tf.shape(inputs)\n batch_size, sequence_length = input_shape[0], input_shape[1]\n i = tf.range(sequence_length)[:, tf.newaxis]\n j = tf.range(sequence_length)\n mask = tf.cast(i >= j, dtype=\"int32\")\n mask = tf.reshape(mask, (1, input_shape[1], input_shape[1]))\n mult = tf.concat(\n [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)],\n axis=0,\n )\n return tf.tile(mask, mult)\n\n\ndef create_model():\n encoder_inputs = keras.Input(shape=(None,), dtype=\"int32\", name=\"encoder_inputs\")\n x = PositionalEmbedding(MAX_LENGTH, VOCAB_SIZE, EMBED_DIM)(encoder_inputs)\n encoder_outputs = FNetEncoder(EMBED_DIM, LATENT_DIM)(x)\n encoder = keras.Model(encoder_inputs, encoder_outputs)\n decoder_inputs = keras.Input(shape=(None,), dtype=\"int32\", name=\"decoder_inputs\")\n encoded_seq_inputs = keras.Input(\n shape=(None, EMBED_DIM), name=\"decoder_state_inputs\"\n )\n x = PositionalEmbedding(MAX_LENGTH, VOCAB_SIZE, EMBED_DIM)(decoder_inputs)\n x = FNetDecoder(EMBED_DIM, LATENT_DIM, NUM_HEADS)(x, encoded_seq_inputs)\n x = layers.Dropout(0.5)(x)\n decoder_outputs = layers.Dense(VOCAB_SIZE, activation=\"softmax\")(x)\n decoder = keras.Model(\n [decoder_inputs, encoded_seq_inputs], decoder_outputs, name=\"outputs\"\n )\n decoder_outputs = decoder([decoder_inputs, encoder_outputs])\n fnet = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs, name=\"fnet\")\n return fnet\n", "_____no_output_____" ] ], [ [ "## Creating and Training the model", "_____no_output_____" ] ], [ [ "fnet = create_model()\nfnet.compile(\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"])", "_____no_output_____" ] ], [ [ "Here, the `epochs` parameter is set to a single epoch, but in practice the model will take around\n**20-30 epochs** of training to start outputting comprehensible sentences. 
Although accuracy\nis not a good measure for this task, we will use it just to get a hint of the improvement\nof the network.", "_____no_output_____" ] ], [ [ "fnet.fit(train_dataset, epochs=1, validation_data=val_dataset)", "_____no_output_____" ] ], [ [ "## Performing inference", "_____no_output_____" ] ], [ [ "VOCAB = vectorizer.get_vocabulary()\n\n\ndef decode_sentence(input_sentence):\n # Mapping the input sentence to tokens and adding start and end tokens\n tokenized_input_sentence = vectorizer(\n tf.constant(\"[start] \" + preprocess_text(input_sentence) + \" [end]\")\n )\n # Initializing the initial sentence consisting of only the start token.\n tokenized_target_sentence = tf.expand_dims(VOCAB.index(\"[start]\"), 0)\n decoded_sentence = \"\"\n\n for i in range(MAX_LENGTH):\n # Get the predictions\n predictions = fnet.predict(\n {\n \"encoder_inputs\": tf.expand_dims(tokenized_input_sentence, 0),\n \"decoder_inputs\": tf.expand_dims(\n tf.pad(\n tokenized_target_sentence,\n [[0, MAX_LENGTH - tf.shape(tokenized_target_sentence)[0]]],\n ),\n 0,\n ),\n }\n )\n # Calculating the token with maximum probability and getting the corresponding word\n sampled_token_index = tf.argmax(predictions[0, i, :])\n sampled_token = VOCAB[sampled_token_index.numpy()]\n # If sampled token is the end token then stop generating and return the sentence\n if tf.equal(sampled_token_index, VOCAB.index(\"[end]\")):\n break\n decoded_sentence += sampled_token + \" \"\n tokenized_target_sentence = tf.concat(\n [tokenized_target_sentence, [sampled_token_index]], 0\n )\n\n return decoded_sentence\n\n\ndecode_sentence(\"Where have you been all this time?\")", "_____no_output_____" ] ], [ [ "## Conclusion\n\nThis example shows how to train and perform inference using the FNet model.\nFor getting insight into the architecture or for further reading, you can refer to:\n\n1. [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824v3)\n(Lee-Thorp et al., 2021)\n2. [Attention Is All You Need](https://arxiv.org/abs/1706.03762v5) (Vaswani et al.,\n2017)\n\nThanks to François Chollet for his Keras example on\n[English-to-Spanish translation with a sequence-to-sequence Transformer](https://keras.io/examples/nlp/neural_machine_translation_with_transformer/)\nfrom which the decoder implementation was extracted.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e727f265f6bcfce338b7d5c104d61971f67e9035
909,884
ipynb
Jupyter Notebook
examples/Docking_example.ipynb
OptiMaL-PSE-Lab/DeepDock
9d0f4dc1862477e5acfd2da0d6e5142900f7045f
[ "MIT" ]
66
2021-09-15T13:46:19.000Z
2022-03-07T05:49:57.000Z
examples/Docking_example.ipynb
cmielke-vir/DeepDock
f504438192a6aee594279c69dd4ccaab67eae616
[ "MIT" ]
6
2021-12-07T13:49:44.000Z
2022-03-18T10:58:55.000Z
examples/Docking_example.ipynb
cmielke-vir/DeepDock
f504438192a6aee594279c69dd4ccaab67eae616
[ "MIT" ]
18
2021-12-05T03:33:07.000Z
2022-03-08T01:03:58.000Z
1,446.556439
386,895
0.531395
[ [ [ "# DeepDock example", "_____no_output_____" ], [ "This notebook is an example of how to do molecular docking using DeepDock. We show how to prepare the target to to generate a mesh file and how to use it for docking.", "_____no_output_____" ], [ "### Target mesh generation", "_____no_output_____" ], [ "Load required libraries for target mesh generation", "_____no_output_____" ] ], [ [ "from deepdock.prepare_target.computeTargetMesh import compute_inp_surface\nimport trimesh", "RDKit WARNING: [12:35:45] Enabling RDKit 2019.09.1 jupyter extensions\n" ] ], [ [ "Copy data from deepdock installation folder (deepdock/data)", "_____no_output_____" ] ], [ [ "import deepdock \ncopy_pdb = 'cp '+deepdock.__path__[0]+'/../data/1z6e_protein.pdb .'\ncopy_mol2 = 'cp '+deepdock.__path__[0]+'/../data/1z6e_ligand.mol2 .'\n!$copy_pdb\n!$copy_mol2", "_____no_output_____" ] ], [ [ "Define the file with the PDB of the target and the mol2 file of the co-crystalized ligand. ", "_____no_output_____" ] ], [ [ "basename = '1z6e_protein'\ntarget_filename = basename+'.pdb' \nligand_filename = '1z6e_ligand.mol2'", "_____no_output_____" ] ], [ [ "Compute the target mesh. This will create a .ply file that contains the mesh information and a pdb that defines a the binding site around the co-crystalized ligand", "_____no_output_____" ] ], [ [ "compute_inp_surface(target_filename, ligand_filename, dist_threshold=10)", "_____no_output_____" ] ], [ [ "You can visualize the target mesh with the following commands", "_____no_output_____" ] ], [ [ "mesh = trimesh.load_mesh(basename+'.ply')\nprint('Number of nodes: ', len(mesh.vertices))\nmesh.show()", "Number of nodes: 743\n" ] ], [ [ "### Binding conformation prediction usign DeepDock", "_____no_output_____" ], [ "Load required libraries for docking", "_____no_output_____" ] ], [ [ "from rdkit import Chem\nimport deepdock\nfrom deepdock.models import *\nfrom deepdock.DockingFunction import dock_compound, get_random_conformation\n\nimport numpy as np\nimport torch\n# set the random seeds for reproducibility\nnp.random.seed(123)\ntorch.cuda.manual_seed_all(123)", "_____no_output_____" ] ], [ [ "Define the DeepDock model to use", "_____no_output_____" ] ], [ [ "device = 'cuda' if torch.cuda.is_available() else 'cpu'\n\nligand_model = LigandNet(28, residual_layers=10, dropout_rate=0.10)\ntarget_model = TargetNet(4, residual_layers=10, dropout_rate=0.10)\nmodel = DeepDock(ligand_model, target_model, hidden_dim=64, n_gaussians=10, dropout_rate=0.10, dist_threhold=7.).to(device)", "_____no_output_____" ] ], [ [ "Load pretrained model.", "_____no_output_____" ] ], [ [ "checkpoint = torch.load(deepdock.__path__[0]+'/../Trained_models/DeepDock_pdbbindv2019_13K_minTestLoss.chk', map_location=torch.device(device))\nmodel.load_state_dict(checkpoint['model_state_dict']) ", "_____no_output_____" ] ], [ [ "Define the path to the .ply file containing the target mesh and load the ligand as RDKIT ligand. Molecule can be loaded from any file type or smiles as long it is readable by RDKIT. It is recommended to start from a 3D molecule with all isomers defined. If starting from a SMILES the code will generate 3D coordinates and minimize them usign MMFF94. \n\n**Important:** As with every docking software, DeepDock is very susceptible to ligand preparation. In general we recommend to use the same ligand preparation steps that were used to prepare the training data, in this case PDBbind. 
If that is not possible, at lease use mol2 produced by Corina for compound containg chemical groups with conjugated bonds (eg. carboxylates, phosphates, sulphates, etc) since bond deffinition can change and can have great impact on results.\n", "_____no_output_____" ] ], [ [ "target_ply = '1z6e_protein.ply'\nreal_mol = Chem.MolFromMol2File('1z6e_ligand.mol2',sanitize=False, cleanupSubstructures=False)", "_____no_output_____" ] ], [ [ "The following compound computes the ligand binding conformation usign DeepDock. It returns the optimized molecule and the results from differential evolution algorithm", "_____no_output_____" ] ], [ [ "opt_mol, init_mol, result = dock_compound(real_mol, target_ply, model, dist_threshold=3., popsize=150, seed=123, device=device)\nprint(result)\nopt_mol", " fun: array([-258.85778592])\n jac: array([-1.69022309e+02, 1.71047907e+02, -1.14398853e+02, 5.30523039e+01,\n 6.62718094e+00, -3.20072161e+01, 8.05214995e+01, 1.03612922e+02,\n 9.26440066e+01, 1.17157697e+00, -4.24169684e+01, -2.28344220e+01,\n 9.42156930e-01, 1.19297283e-01])\n message: 'Optimization terminated successfully.'\n nfev: 49853\n nit: 316\n num_MixOfGauss: 517\n num_atoms: 38\n num_rotbonds: 8\n rotbonds: [(14, 15, 7, 8), (22, 9, 21, 4), (23, 22, 9, 21), (8, 17, 20, 0), (7, 19, 21, 4), (29, 28, 25, 24), (28, 32, 34, 35), (32, 34, 35, 36)]\n success: True\n x: array([-0.99374445, -2.56988637, -3.12112576, 10.60745797, 27.68033189,\n 56.43378115, -1.93031307, 0.15773386, -0.30088624, -1.9763511 ,\n -0.61640145, 0.54661285, 2.04593077, 0.16918374])\n" ] ], [ [ "**Results** \\\nfun: negative Score of the predicted molecule \\\njac: \\\nmessage: Indicates if optimization finished correctly \\\nnfev: 42666 \\\nnit: number of iterations used to converge \\\nnum_MixOfGauss: Number of mixture of guassians used to score the final molecule \\\nnum_atoms: Number of heavy atoms in theligand \\\nnum_rotbonds: Number of rotatable bonds in the ligand \\\nrotbonds: Rotatable bond in the ligand \\\nsuccess: True or Fail depending if the optimization converged \\\nx: Final values to transform original molecule into the oenn with predicted binding conformation", "_____no_output_____" ], [ "You can visualize the bound ligand to the target using py3Dmol. This cell show code to visualze the predicted molecule (gray) and the co-crystalized molecule (red) and the starting conformation used by DeepDock (blue). ", "_____no_output_____" ] ], [ [ "import py3Dmol\np = py3Dmol.view(width=400,height=400)\nwith open('1z6e_protein.pdb') as ifile:\n target = \"\".join([x for x in ifile])\np.addModelsAsFrames(target)\np.addModel(Chem.MolToMolBlock(real_mol),'sdf')\np.addModel(Chem.MolToMolBlock(opt_mol),'sdf')\np.addModel(Chem.MolToMolBlock(init_mol),'sdf')\n\ntar = {'model':0}\nlig_real = {'model':1}\nlig_opt = {'model':2}\nlig_init = {'model':3}\n\np.setStyle(tar, {'cartoon': {'color':'gray'}})\n#p.addSurface(py3Dmol.VDW,{'opacity':0.7,'color':'white'}, res)\np.setStyle(lig_real,{'stick':{'color':'red'}})\np.setStyle(lig_init,{'stick':{'color':'blue'}})\np.setStyle(lig_opt,{'stick':{}})\n\np.zoomTo()\np.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e728145878811b41230f88a3fc230c791373863e
14,814
ipynb
Jupyter Notebook
demos/03_dataloader.ipynb
mszhanyi/PyMultiprocessDemo
95bf108ab589db50dc212684f7f6173ddd4c9a06
[ "MIT" ]
1
2021-11-26T09:20:58.000Z
2021-11-26T09:20:58.000Z
demos/03_dataloader.ipynb
mszhanyi/PyMultiprocessDemo
95bf108ab589db50dc212684f7f6173ddd4c9a06
[ "MIT" ]
4
2021-02-25T14:28:19.000Z
2021-03-01T03:43:52.000Z
demos/03_dataloader.ipynb
mszhanyi/pymultiprocessdemo
95bf108ab589db50dc212684f7f6173ddd4c9a06
[ "MIT" ]
null
null
null
76.360825
1,961
0.646956
[ [ [ "import torch\nimport math", "_____no_output_____" ], [ "class MyIterableDataset(torch.utils.data.IterableDataset):\n def __init__(self, start, end):\n super(MyIterableDataset).__init__()\n assert end > start, \"this example code onlyl works with end >= start\"\n self.start = start \n self.end = end\n\n def __iter__(self):\n worker_info = torch.utils.data.get_worker_info()\n if worker_info is None:\n iter_start = self.start\n iter_end = self.end\n else:\n per_worker = int(math.ceil((self.end - self.start) / float(worker_info.num_workers)))\n worker_id = worker_info.id\n iter_start = self.start + worker_id * per_worker\n iter_end = min(iter_start + per_worker, self.end)\n return iter(range(iter_start, iter_end))\n", "_____no_output_____" ] ], [ [ "worker_init_fn is used to do something at the begining of each worker.", "_____no_output_____" ] ], [ [ "def worker_init_fn(id):\n print(sys.modules[\"__main__\"].__dict__)", "_____no_output_____" ], [ "if __name__ == \"__main__\":\n ds = MyIterableDataset(start=0, end=7)\n print(list(torch.utils.data.DataLoader(ds, num_workers=2, worker_init_fn=worker_init_fn)))", "_____no_output_____" ] ], [ [ "But the same code can work with script", "_____no_output_____" ] ], [ [ "# see the script\nwith open('ptdl.py', 'r') as text:\n textfile = text.read()\n print(textfile)", "import torch\nimport math\nimport sys\n\nclass MyIterableDataset(torch.utils.data.IterableDataset):\n def __init__(self, start, end):\n super(MyIterableDataset).__init__()\n assert end > start, \"this example code onlyl works with end >= start\"\n self.start = start \n self.end = end\n\n def __iter__(self):\n worker_info = torch.utils.data.get_worker_info()\n if worker_info is None:\n iter_start = self.start\n iter_end = self.end\n else:\n per_worker = int(math.ceil((self.end - self.start) / float(worker_info.num_workers)))\n worker_id = worker_info.id\n iter_start = self.start + worker_id * per_worker\n iter_end = min(iter_start + per_worker, self.end)\n return iter(range(iter_start, iter_end))\n\ndef worker_init_fn(id):\n print(sys.modules[\"__main__\"].__dict__)\n \nif __name__ == \"__main__\":\n ds = MyIterableDataset(start=0, end=7)\n print(list(torch.utils.data.DataLoader(ds, num_workers=2, worker_init_fn=worker_init_fn)))\n #print(list(torch.utils.data.DataLoader(ds, num_workers=2, worker_init_fn=lambda: print(\"not work in Windows\"))))\n" ], [ "# run multiple worker with script\nfrom subprocess import Popen, PIPE\nprocess = Popen([\"python\", \"ptdl.py\"])", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7281fb4dfd7ff29a3269f9a76a6781d350444e9
129,495
ipynb
Jupyter Notebook
Examples/Kovners_approximation.ipynb
Jfeatherstone/ColorGlass
f242541df614a8eea97c43d3480c779e92660ebb
[ "MIT" ]
null
null
null
Examples/Kovners_approximation.ipynb
Jfeatherstone/ColorGlass
f242541df614a8eea97c43d3480c779e92660ebb
[ "MIT" ]
null
null
null
Examples/Kovners_approximation.ipynb
Jfeatherstone/ColorGlass
f242541df614a8eea97c43d3480c779e92660ebb
[ "MIT" ]
null
null
null
229.19469
34,328
0.897934
[ [ [ "### Checking Kovner's approximation for four Wilson line correlators in momentum space \n\n$$ \\langle U_{ab} (k) U_{cd} (p) \\rangle = \\frac{(2\\pi)^2 \\delta^{(2)}(k+p) D(k) }{N_c^2-1} \\delta_{ac} \\delta_{bd} $$\n\nThe following combinations \n\n* A\n* B\n* C", "_____no_output_____" ] ], [ [ "# If you haven't formally installed the toolbox, you will need\n# this line to get proper access to the functions\nimport sys\nsys.path.append('../')\n\n# Import our functions\nimport cgc\n\n# To make the evaluation responsive and more user friendly\n# Can be installed with: pip install progressbar2\nimport progressbar\n\n# Import numpy and matplotlib for plotting/other utility\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport numpy as np\nfrom scipy.fft import fft2\n\n# Make our plots nice and large\nplt.rcParams[\"figure.dpi\"] = 120\nplt.rcParams[\"font.size\"] = 12", "_____no_output_____" ], [ "M = 2 # Number of configurations \nN = 256 # Lattice size\nL = 32 # S_\\perp = L^2 is the area of the target\ndelta = L/N \nmu = 1 # defines the units in the calculation\n\n# Collecting and storing configurations of the adjoint Wilson line in real space\nU_list = np.zeros([M, N, N, 8, 8], dtype=np.complex128) # The wilson line is an 8x8 matrix at each lattice point (for SU(3))\n\n# This step may take a while, so we'll use a progress bar to see how far along we are\n# We'll use this several times later on as well\nprogressBar = progressbar.ProgressBar(max_value=M)\n\n# Generate M targets and calculate their Wilson lines\nfor i in range(M):\n progressBar.update(i)\n nucleus = cgc.Nucleus(3, N, delta, mu, M=0.1, Ny=100)\n U_list[i] = nucleus.adjointWilsonLine()\n\n# Now we calculate the adjoint Wilson line in momentum space\n# by taking the fourier transform of our previous result\nU_fft_list = delta**2 * fft2(U_list, axes=(0,1),\n workers=-1, # This will parallelize the computation to all available threads\n )\nprogressBar.update(M)", "100% (2 of 2) |##########################| Elapsed Time: 0:01:30 ETA: 00:00:00" ] ], [ [ "### A. Dipole combination \n\n$$ \\langle tr [ U^+(k_1) U(k_1) ] tr [ U^+(k_2) U(k_2) ] \\rangle \n\\approx \n(N_c^2-1)^2 S^2_\\perp D(k_1) D(k_2)\n+ (2\\pi)^2 \\delta(k_1-k_2) S_\\perp D^2(k_1) + (2\\pi)^2 \\delta(k_1+k_2) S_\\perp D^2(k_1)\n$$\nHere the leading contribution is proportional to $S_\\perp^2$ and to the fourth power of $N_c$. This contribution appears in the two-gluon production of the dilute-dense approximation. It yields the uncorrelated production. 
", "_____no_output_____" ] ], [ [ "# Seems like this is testing the discretization error on the lattice\n# Not sure if we need it\nfrom random import uniform\n# We randomly sample momenta\nk2x = np.random.uniform(-1/delta, 1/delta)\nk2x = uniform(-1/delta, 1/delta)\nk2y = uniform(-1/delta, 1/delta)\n\ni_k2 = int(N*np.arcsin(abs(k2x)/2.0*delta)/np.pi + 0.5)\nj_k2 = int(N*np.arcsin(abs(k2y)/2.0*delta)/np.pi + 0.5)\n\nif(k2x<0): \n i_k2 = N-i_k2\n \nif(k2y<0): \n j_k2 = N-j_k2\n\nprint (\"Input momentum: \",k2x,k2y)\nprint (\"Approximate momentum on the lattice: \", 2*np.sin(np.pi/N * i_k2)/delta*np.sign(np.sin(2*np.pi/N * i_k2)), 2*np.sin(np.pi/N * j_k2)/delta*np.sign(np.sin(2*np.pi/N * j_k2)))", "_____no_output_____" ], [ "\nFourUCorrRe = [] \nD1 = [] \nD2 = [] \nFourUCorrIm = [] \nk = []\n\nj_k1=0\nfor i_k1 in range(N):\n #for j_k1 in range(N):\n kx = 2*np.sin(np.pi/N * i_k1)/delta*np.sign(np.sin(2*np.pi/N * i_k1))\n ky = 2*np.sin(np.pi/N * j_k1)/delta*np.sign(np.sin(2*np.pi/N * j_k1)) \n C = 0.0\n d1 = 0.0\n d2 = 0.0 \n for m in range(M):\n d1 += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k1,j_k1,m]))@U_fft[:,:,i_k1,j_k1,m])/8/L**2\n d2 += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k2,j_k2,m]))@U_fft[:,:,i_k2,j_k2,m])/8/L**2 \n C += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k1,j_k1,m]))@U_fft[:,:,i_k1,j_k1,m]) * np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k2,j_k2,m]))@U_fft[:,:,i_k2,j_k2,m]) \n FourUCorrRe.append(np.real(C)/M)\n D1.append(np.real(d1)/M)\n D2.append(np.real(d2)/M)\n FourUCorrIm.append(np.imag(C)/M)\n k.append(kx)\n\nplt.plot(k,FourUCorrRe,'.',label=r'$\\langle tr [ U^+(k_1) U(k_1) ] tr [ U^+(k_2) U(k_2) ] \\rangle $')\nplt.plot(k,8**2*L**4*np.array(D1)*np.array(D2),'.',label=r'$(N_c^2-1)^2 S^2_\\perp D(k_1) D(k_2)$')\nplt.yscale('log')\nplt.legend(loc=4)\n\n", "Input momentum: -7.215046179364441 7.575023796619709\nApproximate momentum on the lattice: -7.19378127447371 7.542347789215962\n" ] ], [ [ "### B. 
Quadrupole combination \n\n$$ \\langle tr [ U^+(k_1-p) U(k_1) U^+(k_2) U(k_2-p) ] \\rangle \n\\approx \n(2\\pi)^2 \\delta^{(2)}(p) (N_c^2-1) S_\\perp D(k_1) D(k_2)\n+ (2\\pi)^2 \\delta(k_1-k_2) S_\\perp D(k_1-p)D(k_1) + (2\\pi)^2 \\delta(k_1+k_2-p) S_\\perp D(k_1)D(k_2)\n$$", "_____no_output_____" ] ], [ [ "from random import uniform \nk2x = uniform(-1/delta, 1/delta)\nk2y = uniform(-1/delta, 1/delta)\n\ni_k2 = int(N*np.arcsin(abs(k2x)/2.0*delta)/np.pi + 0.5)\nj_k2 = int(N*np.arcsin(abs(k2y)/2.0*delta)/np.pi + 0.5)\n\nif(k2x<0): \n i_k2 = N-i_k2\n \nif(k2y<0): \n j_k2 = N-j_k2\n\nprint (\"Input momentum: \",k2x,k2y)\nprint (\"Approximate momentum on the lattice: \", 2*np.sin(np.pi/N * i_k2)/delta*np.sign(np.sin(2*np.pi/N * i_k2)), 2*np.sin(np.pi/N * j_k2)/delta*np.sign(np.sin(2*np.pi/N * j_k2)))\n\n#print(k2x,k2y,i_k2,j_k2)\n\nFourUCorrRe = [] \nD1 = [] \nD2 = [] \nFourUCorrIm = [] \nk = []\n\nj_k1=0\nfor i_k1 in range(N):\n #for j_k1 in range(N):\n kx = 2*np.sin(np.pi/N * i_k1)/delta*np.sign(np.sin(2*np.pi/N * i_k1))\n ky = 2*np.sin(np.pi/N * j_k1)/delta*np.sign(np.sin(2*np.pi/N * j_k1)) \n C = 0.0\n d1 = 0.0\n d2 = 0.0 \n for m in range(M):\n d1 += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k1,j_k1,m]))@U_fft[:,:,i_k1,j_k1,m])/8/L**2\n d2 += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k2,j_k2,m]))@U_fft[:,:,i_k2,j_k2,m])/8/L**2 \n C += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k1,j_k1,m]))@U_fft[:,:,i_k1,j_k1,m]@np.conjugate(np.transpose(U_fft[:,:,i_k2,j_k2,m]))@U_fft[:,:,i_k2,j_k2,m]) \n FourUCorrRe.append(np.real(C)/M)\n D1.append(np.real(d1)/M)\n D2.append(np.real(d2)/M)\n FourUCorrIm.append(np.imag(C)/M)\n k.append(kx)\n\nplt.plot(k,FourUCorrRe,'.',label=r'$\\langle tr [ U^+(k_1) U(k_1) U^+(k_2) U(k_2) ] \\rangle $')\nplt.plot(k,8*L**4*np.array(D1)*np.array(D2),'.',label=r'$(N_c^2-1) S^2_\\perp D(k_1) D(k_2)$')\nplt.yscale('log')\nplt.legend(loc=4)\n", "Input momentum: 6.762362043189107 -0.09868436702876338\nApproximate momentum on the lattice: 6.840881494884513 -0.1963446125715201\n" ], [ "from random import uniform \n\nk1x = uniform(-1/delta, 1/delta)\nk1y = uniform(-1/delta, 1/delta)\n\nk1y = 0\n\ni_k1 = int(N*np.arcsin(abs(k1x)/2.0*delta)/np.pi + 0.5)\nj_k1 = int(N*np.arcsin(abs(k1y)/2.0*delta)/np.pi + 0.5)\n\nk2x = uniform(-1/delta, 1/delta)\nk2y = uniform(-1/delta, 1/delta)\n\nk2y = 0 \n\ni_k2 = int(N*np.arcsin(abs(k2x)/2.0*delta)/np.pi + 0.5)\nj_k2 = int(N*np.arcsin(abs(k2y)/2.0*delta)/np.pi + 0.5)\n\n\nif(k1x<0): \n i_k1 = N-i_k1\n \nif(k1y<0): \n j_k1 = N-j_k1\n\nif(k2x<0): \n i_k2 = N-i_k2\n \nif(k2y<0): \n j_k2 = N-j_k2\n\nprint (\"Input momentum: \",k1x,k1y)\nprint (\"Approximate momentum on the lattice: \", 2*np.sin(np.pi/N * i_k1)/delta*np.sign(np.sin(2*np.pi/N * i_k1)), 2*np.sin(np.pi/N * j_k1)/delta*np.sign(np.sin(2*np.pi/N * j_k1)))\n \n \nprint (\"Input momentum: \",k2x,k2y)\nprint (\"Approximate momentum on the lattice: \", 2*np.sin(np.pi/N * i_k2)/delta*np.sign(np.sin(2*np.pi/N * i_k2)), 2*np.sin(np.pi/N * j_k2)/delta*np.sign(np.sin(2*np.pi/N * j_k2)))\n\n\nprint(k1x+k2x)\n#print(k2x,k2y,i_k2,j_k2)\n\nFourUCorrRe = [] \nD1 = [] \nD2 = [] \nFourUCorrIm = [] \nk = []\n\n#j_k1=0\nfor i_p1 in range(N):\n #for j_k1 in range(N):\n px = 2*np.sin(np.pi/N * i_p1)/delta*np.sign(np.sin(2*np.pi/N * i_p1))\n \n C = 0.0\n d1 = 0.0\n d2 = 0.0 \n for m in range(M):\n d1 += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k1,j_k1,m]))@U_fft[:,:,i_k1,j_k1,m])/8/L**2\n d2 += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k2,j_k2,m]))@U_fft[:,:,i_k2,j_k2,m])/8/L**2 \n C += 
np.trace(np.conjugate(np.transpose(U_fft[:,:,(i_k1-i_p1)%N,j_k1,m]))@U_fft[:,:,i_k1,j_k1,m]@np.conjugate(np.transpose(U_fft[:,:,i_k2,j_k2,m]))@U_fft[:,:,(i_k2-i_p1)%N,j_k2,m]) \n FourUCorrRe.append(np.real(C)/M)\n D1.append(np.real(d1)/M)\n D2.append(np.real(d2)/M)\n FourUCorrIm.append(np.imag(C)/M)\n k.append(px)\n\n#plt.plot(k,8*L**4*np.array(D1)*np.array(D2),'-',label=r'$(N_c^2-1) S^2_\\perp D(k_1) D(k_2)$',alpha=0.8)\n#plt.plot(k,L**4*np.array(D1)*np.array(D2),'-',label=r'$(N_c^2-1) S^2_\\perp D(k_1) D(k_2)$',alpha=0.8)\n\nplt.plot([k1x+k2x],[(L**4*np.array(D1)*np.array(D2))[0]],'x')\nplt.plot([0.0],[(8*L**4*np.array(D1)*np.array(D2))[4]],'x',label=r'$(N_c^2-1) S^2_\\perp D(k_1) D(k_2)$',alpha=0.8)\n\n\nplt.plot(k,FourUCorrRe,'+',label=r'$\\langle tr [ U^+(k_1) U(k_1) U^+(k_2) U(k_2) ] \\rangle $')\n\nplt.xlim(-max(abs(k1x+k2x)*1.2,5),max(abs(k1x+k2x)*1.2,5))\n\n#plt.yscale('log')\n#plt.legend(loc=1)\n", "Input momentum: 1.1605248774166785 0\nApproximate momentum on the lattice: 1.1770330175946788 0.0\nInput momentum: -2.3182396623474038 0\nApproximate momentum on the lattice: -2.347687591285789 0.0\n-1.1577147849307252\n" ], [ "from random import uniform \n\nk1x = uniform(-1/delta, 1/delta)\nk1y = uniform(-1/delta, 1/delta)\n\n\ni_k1 = int(N*np.arcsin(abs(k1x)/2.0*delta)/np.pi + 0.5)\nj_k1 = int(N*np.arcsin(abs(k1y)/2.0*delta)/np.pi + 0.5)\n\nk2x = k1x\nk2y = k1y\n\ni_k2 = int(N*np.arcsin(abs(k2x)/2.0*delta)/np.pi + 0.5)\nj_k2 = int(N*np.arcsin(abs(k2y)/2.0*delta)/np.pi + 0.5)\n\n\nif(k1x<0): \n i_k1 = N-i_k1\n \nif(k1y<0): \n j_k1 = N-j_k1\n\nif(k2x<0): \n i_k2 = N-i_k2\n \nif(k2y<0): \n j_k2 = N-j_k2\n\nprint (\"Input momentum: \",k1x,k1y)\nprint (\"Approximate momentum on the lattice: \", 2*np.sin(np.pi/N * i_k1)/delta*np.sign(np.sin(2*np.pi/N * i_k1)), 2*np.sin(np.pi/N * j_k1)/delta*np.sign(np.sin(2*np.pi/N * j_k1)))\n \n \nprint (\"Input momentum: \",k2x,k2y)\nprint (\"Approximate momentum on the lattice: \", 2*np.sin(np.pi/N * i_k2)/delta*np.sign(np.sin(2*np.pi/N * i_k2)), 2*np.sin(np.pi/N * j_k2)/delta*np.sign(np.sin(2*np.pi/N * j_k2)))\n\n\nprint(k1x+k2x)\n#print(k2x,k2y,i_k2,j_k2)\n\nFourUCorrRe = [] \nD1 = [] \nD2 = [] \nFourUCorrIm = [] \nk = []\n\n#j_k1=0\nfor i_p1 in range(N):\n #for j_k1 in range(N):\n px = 2*np.sin(np.pi/N * i_p1)/delta*np.sign(np.sin(2*np.pi/N * i_p1))\n \n C = 0.0\n d1 = 0.0\n d2 = 0.0 \n for m in range(M):\n d1 += np.trace(np.conjugate(np.transpose(U_fft[:,:,(i_k1-i_p1)%N,j_k1,m]))@U_fft[:,:,(i_k1-i_p1)%N,j_k1,m])/8/L**2\n d2 += np.trace(np.conjugate(np.transpose(U_fft[:,:,i_k2,j_k2,m]))@U_fft[:,:,i_k2,j_k2,m])/8/L**2 \n C += np.trace(np.conjugate(np.transpose(U_fft[:,:,(i_k1-i_p1)%N,j_k1,m]))@U_fft[:,:,i_k1,j_k1,m]@np.conjugate(np.transpose(U_fft[:,:,i_k2,j_k2,m]))@U_fft[:,:,(i_k2-i_p1)%N,j_k2,m]) \n FourUCorrRe.append(np.real(C)/M)\n D1.append(np.real(d1)/M)\n D2.append(np.real(d2)/M)\n FourUCorrIm.append(np.imag(C)/M)\n k.append(px)\n\n#plt.plot(k,8*L**4*np.array(D1)*np.array(D2),'-',label=r'$(N_c^2-1) S^2_\\perp D(k_1) D(k_2)$',alpha=0.8)\n\nplt.plot(k,8*L**4*np.array(D1)*np.array(D2),'*',label=r'$(N_c^2-1) S^2_\\perp D(k_1) D(k_2)$',alpha=0.8)\n\n\nplt.plot(k,FourUCorrRe,'+',label=r'$\\langle tr [ U^+(k_1) U(k_1) U^+(k_2) U(k_2) ] \\rangle $')\n\nplt.plot([0.0],[(2*8*L**4*np.array(D1)*np.array(D2))[0]],'x',label=r'$(N_c^2-1) S^2_\\perp D(k_1) D(k_2)$',alpha=0.8)\n\nplt.yscale('log')\nplt.xlim((-10,10))\n", "Input momentum: -3.095909055674614 0.31918275917183614\nApproximate momentum on the lattice: -3.1214451522580573 0.3926596563665966\nInput 
momentum: -3.095909055674614 0.31918275917183614\nApproximate momentum on the lattice: -3.1214451522580573 0.3926596563665966\n-6.191818111349228\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e72825dbde065600ce3f23d255491a37af76852c
103,325
ipynb
Jupyter Notebook
Math/Signals.ipynb
cas10ph/MachineLearning
91c34d954b4e084ff0423d5ef2cd767ee2cdb6e3
[ "MIT" ]
40
2019-06-14T02:07:12.000Z
2022-01-15T07:41:42.000Z
Math/Signals.ipynb
cas10ph/MachineLearning
91c34d954b4e084ff0423d5ef2cd767ee2cdb6e3
[ "MIT" ]
null
null
null
Math/Signals.ipynb
cas10ph/MachineLearning
91c34d954b4e084ff0423d5ef2cd767ee2cdb6e3
[ "MIT" ]
68
2018-12-19T04:15:29.000Z
2022-02-17T05:18:43.000Z
239.733179
34,112
0.905938
[ [ [ "# Fourier Transform\n\nThe [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform) transforms any periodic function in time domain into an equivalent series in frequency domain.\n\n$$f(t) = \\sum_{n=1}^{\\infty} a_0 + a_n sin(n \\omega) + b_n cos(n \\omega)$$\n\nGiven a set of finite time domain samples, the Fourier transform can be computed using the [Discrete Fourier Transform (DFT)](https://en.wikipedia.org/wiki/Discrete_Fourier_transform) or the [Fast Fourier Transform (FFT)](https://en.wikipedia.org/wiki/Fast_Fourier_transform). The DFT and FFT differ only in computational complexity. The complexity of the DFT is $O(N^2)$, compared to $O(N log N)$ for the FFT, where N is the sample size.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.fftpack\nimport scipy.signal\nimport pandas as pd", "_____no_output_____" ], [ "# Number of samples\nN = 240\n# Sample rate in Hz\nfs = 1200\n# Total time in seconds\nT = N / fs\nt = np.linspace(0.0, T, N)\nf = 60\nw = 2 * np.pi * f\ny = np.sin(w * t) + 0.3*np.sin(3 * w * t)\n\nyf = scipy.fftpack.fft(y)\nmag = 2.0/N * np.abs(yf[:N//2])\nxf = np.linspace(0.0, fs/2.0, N//2)\n\nplt.figure(figsize=(12, 4))\nplt.subplot(121)\nplt.plot(t[:50], y[:50])\nplt.title(\"Time Domain\")\nplt.subplot(122)\nplt.title(\"Frequency Domain\")\nplt.plot(xf, mag)\nplt.suptitle('Fourier Transform')\nplt.show()\n\npeaks = (-mag).argsort()[:2]\nfreqs = peaks * fs/N\nd = {'Frequency': freqs, 'Magnitude': [mag[i] for i in peaks]}\ndf = pd.DataFrame(data=d)\ndf", "_____no_output_____" ], [ "# Number of samples\nN = 1000\n# Sample rate in Hz\nfs = 1000\n# Total time in seconds\nT = N / fs\nt = np.linspace(0.0, T, N)\n\nf = 50\nw = 2 * np.pi * f\ny = scipy.signal.square(w * t)\nyf = scipy.fftpack.fft(y)\nmag = 2.0/N * np.abs(yf[:N//2])\nxf = np.linspace(0.0, fs/2.0, N//2)\n\nplt.figure(figsize=(12, 4))\nplt.subplot(121)\nplt.plot(t[:100], y[:100])\nplt.title(\"Time Domain\")\nplt.subplot(122)\nplt.title(\"Frequency Domain\")\nplt.plot(xf, mag)\nplt.suptitle('Fourier Transform')\nplt.show()\n\npeaks = (-mag).argsort()[:4]\nfreqs = peaks * fs/N\nd = {'Frequency': freqs, 'Magnitude': [mag[i] for i in peaks]}\ndf = pd.DataFrame(data=d)\ndf", "_____no_output_____" ], [ "# Number of samples\nN = 1000\n# Sample rate in Hz\nfs = 1000\n# Total time in seconds\nT = N / fs\nt = np.linspace(0.0, T, N)\n\nf = 50\nw = 2 * np.pi * f\ny = scipy.signal.sawtooth(w * t, 1)\nyf = scipy.fftpack.fft(y)\nmag = 2.0/N * np.abs(yf[:N//2])\nxf = np.linspace(0.0, fs/2.0, N//2)\n\nplt.figure(figsize=(12, 4))\nplt.subplot(121)\nplt.plot(t[:100], y[:100])\nplt.title(\"Time Domain\")\nplt.subplot(122)\nplt.title(\"Frequency Domain\")\nplt.plot(xf, mag)\nplt.suptitle('Fourier Transform')\nplt.show()\n\npeaks = (-mag).argsort()[:6]\nfreqs = peaks * fs/N\nd = {'Frequency': freqs, 'Magnitude': [mag[i] for i in peaks]}\ndf = pd.DataFrame(data=d)\ndf", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7282d02b4d4d4286c2115ea5ad31eb836b931e7
189,602
ipynb
Jupyter Notebook
Twitter analysis/.ipynb_checkpoints/wrangle_act-checkpoint.ipynb
Siddharth1698/Data-Analyst-Nanodegree
ad09ee76788338e94b97c32a3e3beb9ec43d3c5f
[ "MIT" ]
1
2021-08-03T20:57:56.000Z
2021-08-03T20:57:56.000Z
Twitter analysis/.ipynb_checkpoints/wrangle_act-checkpoint.ipynb
Siddharth1698/Data-Analyst-Nanodegree
ad09ee76788338e94b97c32a3e3beb9ec43d3c5f
[ "MIT" ]
null
null
null
Twitter analysis/.ipynb_checkpoints/wrangle_act-checkpoint.ipynb
Siddharth1698/Data-Analyst-Nanodegree
ad09ee76788338e94b97c32a3e3beb9ec43d3c5f
[ "MIT" ]
null
null
null
45.435418
17,892
0.573122
[ [ [ "pip install tweepy", "Collecting tweepy\n Downloading tweepy-3.8.0-py2.py3-none-any.whl (28 kB)\nRequirement already satisfied: requests>=2.11.1 in /home/siddharth/anaconda3/lib/python3.7/site-packages (from tweepy) (2.22.0)\nRequirement already satisfied: PySocks>=1.5.7 in /home/siddharth/anaconda3/lib/python3.7/site-packages (from tweepy) (1.7.1)\nCollecting requests-oauthlib>=0.7.0\n Downloading requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)\nRequirement already satisfied: six>=1.10.0 in /home/siddharth/anaconda3/lib/python3.7/site-packages (from tweepy) (1.14.0)\nRequirement already satisfied: idna<2.9,>=2.5 in /home/siddharth/anaconda3/lib/python3.7/site-packages (from requests>=2.11.1->tweepy) (2.8)\nRequirement already satisfied: certifi>=2017.4.17 in /home/siddharth/anaconda3/lib/python3.7/site-packages (from requests>=2.11.1->tweepy) (2019.11.28)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /home/siddharth/anaconda3/lib/python3.7/site-packages (from requests>=2.11.1->tweepy) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /home/siddharth/anaconda3/lib/python3.7/site-packages (from requests>=2.11.1->tweepy) (1.25.8)\nCollecting oauthlib>=3.0.0\n Downloading oauthlib-3.1.0-py2.py3-none-any.whl (147 kB)\n\u001b[K |████████████████████████████████| 147 kB 1.7 MB/s eta 0:00:01\n\u001b[?25hInstalling collected packages: oauthlib, requests-oauthlib, tweepy\nSuccessfully installed oauthlib-3.1.0 requests-oauthlib-1.3.0 tweepy-3.8.0\nNote: you may need to restart the kernel to use updated packages.\n" ], [ "import tweepy\nimport requests\nimport pandas as pd\nimport json\nimport time\nimport math\nimport re\n\n## importing all the libraries required", "_____no_output_____" ], [ "import tweepy\nfrom tweepy import OAuthHandler\nimport json\nfrom timeit import default_timer as timer\n\n# Query Twitter API for each tweet in the Twitter archive and save JSON in a text file\n# These are hidden to comply with Twitter's API terms and conditions\nimport tweepy\n\nconsumer_key = ''\nconsumer_secret = ''\naccess_token = ''\naccess_secret = ''\n\nauth = tweepy.OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_secret)\n\napi = tweepy.API(auth)\n\n# NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES:\n# df_1 is a DataFrame with the twitter_archive_enhanced.csv file. 
You may have to\n# change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv\n# NOTE TO REVIEWER: this student had mobile verification issues so the following\n# Twitter API code was sent to this student from a Udacity instructor\n# Tweet IDs for which to gather additional data via Twitter's API\ntweet_ids = df.tweet_id.values\nlen(tweet_ids)\n\n# Query Twitter's API for JSON data for each tweet ID in the Twitter archive\ncount = 0\nfails_dict = {}\nstart = timer()\n# Save each tweet's returned JSON as a new line in a .txt file\nwith open('tweet_json.txt', 'w') as outfile:\n # This loop will likely take 20-30 minutes to run because of Twitter's rate limit\n for tweet_id in tweet_ids:\n count += 1\n print(str(count) + \": \" + str(tweet_id))\n try:\n tweet = api.get_status(tweet_id, tweet_mode='extended')\n print(\"Success\")\n json.dump(tweet._json, outfile)\n outfile.write('\\n')\n except tweepy.TweepError as e:\n print(\"Fail\")\n fails_dict[tweet_id] = e\n pass\nend = timer()\nprint(end - start)\nprint(fails_dict)\n\n## getting access to twitter developer.", "_____no_output_____" ], [ "# Download tweet image predictions TSV using the Requests library and write it to image_predictions.tsv\nurl = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'\nresponse = requests.get(url)\nwith open('image_predictions.tsv', mode='wb') as file:\n file.write(response.content)\n \n# Import the tweet image predictions TSV file into a DataFrame\nimg_df = pd.read_csv('image_predictions-3.tsv', sep='\\t')", "_____no_output_____" ], [ "df_list = []\nwith open('tweet-json.txt') as json_file:\n for line in json_file:\n status = json.loads(line)\n \n df_list.append({'tweet_id': status['id'],\n 'retweet_count': status['retweet_count'],\n 'favorite_count': status['favorite_count'],\n 'display_text_range': status['display_text_range']\n })\n\n# Create a DataFrame with tweet ID, retweet count, favorite count and display_text_range\nstatus_df = pd.DataFrame(df_list, columns = ['tweet_id', 'retweet_count', 'favorite_count', 'display_text_range'])", "_____no_output_____" ], [ "status_df.head()", "_____no_output_____" ], [ "df_twit = pd.read_csv('twitter-archive-enhanced.csv')\ndf_twit.head()", "_____no_output_____" ], [ "df_img = pd.read_csv('image-predictions-3.tsv',sep='\\t')\ndf_img.head()", "_____no_output_____" ], [ "df_twit.head(5)", "_____no_output_____" ], [ "df_twit.info()\n\n## getting to know about the datset", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2356 entries, 0 to 2355\nData columns (total 17 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tweet_id 2356 non-null int64 \n 1 in_reply_to_status_id 78 non-null float64\n 2 in_reply_to_user_id 78 non-null float64\n 3 timestamp 2356 non-null object \n 4 source 2356 non-null object \n 5 text 2356 non-null object \n 6 retweeted_status_id 181 non-null float64\n 7 retweeted_status_user_id 181 non-null float64\n 8 retweeted_status_timestamp 181 non-null object \n 9 expanded_urls 2297 non-null object \n 10 rating_numerator 2356 non-null int64 \n 11 rating_denominator 2356 non-null int64 \n 12 name 2356 non-null object \n 13 doggo 2356 non-null object \n 14 floofer 2356 non-null object \n 15 pupper 2356 non-null object \n 16 puppo 2356 non-null object \ndtypes: float64(4), int64(3), object(10)\nmemory usage: 313.0+ KB\n" ], [ "df_twit.isnull().sum()", "_____no_output_____" ], [ "df_twit.name.value_counts().sort_index(ascending=True)", 
"_____no_output_____" ], [ "df_twit.text", "_____no_output_____" ], [ "df_twit.source", "_____no_output_____" ], [ "df_twit.columns", "_____no_output_____" ], [ "df_twit[df_twit['rating_numerator'] < 10 ].shape\n", "_____no_output_____" ], [ "df_twit[df_twit['rating_denominator'] != 10 ].shape\n", "_____no_output_____" ], [ "df_twit['expanded_urls'].isnull().sum()", "_____no_output_____" ], [ "status_df", "_____no_output_____" ], [ "df_img.head()", "_____no_output_____" ], [ "df_twit.duplicated().sum()", "_____no_output_____" ], [ "df_img.isnull().sum()", "_____no_output_____" ], [ "df_img.shape", "_____no_output_____" ], [ "df_twit[df_twit['name'].str.len() <3]", "_____no_output_____" ] ], [ [ "## Quality\n\n### Twitter archived table\n\n1.in_reply_to_status_id,in_reply_to_user_id,retweeted_status_id,retweeted_status_user_id \n,retweeted_status_timestamp contains lot of NULL values\n\n2.name of the dogs have lot of none value and 'a,the,this,an',etc... as text.\n\n3.text column has unstructured long texts\n\n4.sources contains html tags instead of just sources\n\n5.All four column that is doggo', 'floofer', 'pupper', 'puppo', that indicate stage are none for many dogs.\n\n6.Some rating_numerator has value less than 10\n\n7.Some rating_denominator dont have value as 10\n\n8.Some of data are duplicates.\n\n\n\n## Tidiness\n\n1.'doggo', 'floofer', 'pupper', 'puppo' must be merged to 'stage' column\n\n2.\"breed\" column should be added in df_twit table; its values based on p1_conf and p1_dog columns of df_img (image \npredictions) table\n\n3.retweet_count and favorite_count columns from status_df (tweet status) table should be joined with df_twit table\n", "_____no_output_____" ], [ "\n### Quality\n\n### 1.in_reply_to_status_id,in_reply_to_user_id,retweeted_status_id,retweeted_status_user_id ,retweeted_status_timestamp contains lot of NULL values\n\n#### Define\n\nThese columns arent required and can be deleted\n\n#### Code\n", "_____no_output_____" ] ], [ [ "df_twit.columns", "_____no_output_____" ], [ "df_twit.drop(['in_reply_to_status_id','in_reply_to_user_id'],axis=1,inplace=True)", "_____no_output_____" ], [ "df_twit.drop(['retweeted_status_user_id','retweeted_status_timestamp','retweeted_status_id'],axis=1,inplace=True)", "_____no_output_____" ] ], [ [ "#### Test", "_____no_output_____" ] ], [ [ "df_twit.isnull().sum()", "_____no_output_____" ] ], [ [ "### 2.name of the dogs have lot of none value and 'a,the,this,an',etc... 
as text.\n\n#### Define\n\nWherever names are none and there are names like a,the,etc all needs to be set as none so that it help us later to get insight about the common names our daogs have.\n\n#### Code", "_____no_output_____" ] ], [ [ "df_twit['name']", "_____no_output_____" ], [ "df_twit['name'].isnull().sum()", "_____no_output_____" ], [ "df_twit['name'].value_counts()", "_____no_output_____" ], [ "df_twit[\"name\"].replace({\"a\": \"None\"}, inplace=True)", "_____no_output_____" ], [ "df_twit['name'].value_counts()", "_____no_output_____" ], [ "df_twit[\"name\"].replace({\"an\": \"None\",\"the\":\"None\",\"this\":\"None\"}, inplace=True)", "_____no_output_____" ] ], [ [ "#### Test", "_____no_output_____" ] ], [ [ "df_twit['name'].value_counts()", "_____no_output_____" ] ], [ [ "### 3.text column has unstructured long texts\n\n#### Define:\n\nMany texts in this column arent structured properly and need to be cleaned.The links and ratings at the end needs to be removed.\n\n#### Code:", "_____no_output_____" ] ], [ [ "df_twit['text'].isnull().sum()", "_____no_output_____" ], [ "df_twit['text'] = df_twit['text'].map(lambda x: str(x)[:-23])", "_____no_output_____" ] ], [ [ "### Test", "_____no_output_____" ] ], [ [ "df_twit['text'].iloc[0]", "_____no_output_____" ] ], [ [ "### 4.sources contains html tags instead of just sources\n\n#### Define:\n\nWe dont want tags for urls instead the sources alone.\n\n#### Code:", "_____no_output_____" ] ], [ [ "df_twit.source = df_twit.source.str.replace(r'<(?:a\\b[^>]*>|/a>)', '')", "_____no_output_____" ] ], [ [ "#### Test", "_____no_output_____" ] ], [ [ "df_twit.source", "_____no_output_____" ], [ "df_twit.source.value_counts()", "_____no_output_____" ] ], [ [ "### 5.All four column that is doggo', 'floofer', 'pupper', 'puppo', that indicate stage are none for many dogs.\n\n#### Define:\nAtleast one stage needs to be there in any one of the 4 column , it cant be more than one or none at all.\n\n#### Code:\n", "_____no_output_____" ] ], [ [ "for i, row in df_twit[((df_twit.doggo != 'None') & (df_twit.floofer != 'None'))\n | ((df_twit.doggo != 'None') & (df_twit.puppo != 'None'))].iterrows():\n print('%s %s\\n'%(row.tweet_id, row.text))", "855851453814013952 Here's a puppo participating in the #ScienceMarch. Cleverly disguising her own doggo agenda. 13/10 would keep the planet habitable for \n\n854010172552949760 At first I thought this was a shy doggo, but it's actually a Rare Canadian Floofer Owl. Amateurs would confuse the two. 
11/10 only send dogs \n\n" ], [ "# based on the above texts, doggo should be set as None for both the records\ndf_twit['doggo'][df_twit.tweet_id.isin([855851453814013952, 854010172552949760])] = 'None'", "/home/siddharth/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \n" ], [ "df_twit['pupper'][(df_twit.doggo != 'None') & (df_twit.pupper != 'None')] = 'None'", "/home/siddharth/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n" ] ], [ [ "#### Test", "_____no_output_____" ] ], [ [ "len(df_twit[((df_twit.doggo != 'None') & (df_twit.pupper != 'None'))\n | ((df_twit.doggo != 'None') & (df_twit.floofer != 'None')) \n | ((df_twit.doggo != 'None') & (df_twit.puppo != 'None'))])", "_____no_output_____" ] ], [ [ "### 6.Some rating_numerator has value less than 10\n\n#### Define:\nFor all those records whose rating numerator is either less than or equal to 10 or greater than 10 but has a very high value, check if the text contains any fraction whose denominator is 10. If it does, update the rating_numerator with the numerator value of this fraction.\n\n#### Code:", "_____no_output_____" ] ], [ [ "df_twit['rating_numerator'].value_counts().sort_index()", "_____no_output_____" ], [ "df_twit[\"rating_numerator\"].replace({1:0,2:0,3:0,4:0,5:0,6:0,7:0,8:0,9:0}, inplace=True)", "_____no_output_____" ], [ "df_twit[df_twit[\"rating_numerator\"] > 15] = 0", "_____no_output_____" ] ], [ [ "#### Test", "_____no_output_____" ] ], [ [ "df_twit['rating_numerator'].value_counts().sort_index()", "_____no_output_____" ] ], [ [ "### 7.Some rating_denominator dont have value as 10\n\n#### Define: \nMake all rating dinominator as 10\n\n#### Code:", "_____no_output_____" ] ], [ [ "df_twit[\"rating_dinominator\"] = 10", "_____no_output_____" ] ], [ [ "#### Test:", "_____no_output_____" ] ], [ [ "df_twit['rating_dinominator'].value_counts().sort_index()", "_____no_output_____" ] ], [ [ "### 8.Some of data are duplicates.\n\n#### Define: \nRemove the duplicated rows.\n\n#### Code:\n", "_____no_output_____" ] ], [ [ "df_twit = df_twit.drop_duplicates()", "_____no_output_____" ] ], [ [ "#### Test:", "_____no_output_____" ] ], [ [ "df_twit.duplicated().sum()", "_____no_output_____" ] ], [ [ "### Tidiness\n\n### 1.'doggo', 'floofer', 'pupper', 'puppo' must be merged to 'stage' column\n\n#### Define:\nAll these needs to be merged together into one column for better visualization.\n\n#### Code:", "_____no_output_____" ] ], [ [ "df_twit['stage'] = df_twit[['doggo', 'floofer', 'pupper', 'puppo']].max(axis=1)", "_____no_output_____" ], [ "df_twit.stage = df_twit.stage.astype('category')", "_____no_output_____" ], [ "df_twit.drop(['doggo', 'floofer', 'pupper', 'puppo'], axis=1, inplace=True)", "_____no_output_____" ] ], [ [ "#### Test:", "_____no_output_____" ] ], [ [ "df_twit.stage.value_counts()", "_____no_output_____" ] ], [ [ "### 2.\"breed\" column should be added in df_twit table; its values based on p1_conf and p1_dog columns of df_img (image predictions) table\n\n### 3.retweet_count 
and favorite_count columns from status_df (tweet status) table should be joined with df_twit table\n\n#### Code: ", "_____no_output_____" ] ], [ [ "df_twit['breed'] = 'None'\ndf_twit['retweet_count'] = 0\ndf_twit['favorite_count'] = 0", "_____no_output_____" ], [ "df_twit.columns", "_____no_output_____" ], [ "df_twit.favorite_count", "_____no_output_____" ], [ "status_df.drop(['display_text_range'],axis=1,inplace=True)", "_____no_output_____" ], [ "status_df", "_____no_output_____" ], [ "df_merge = pd.merge(df_twit, status_df, on='tweet_id')", "_____no_output_____" ], [ "df_merge.head()", "_____no_output_____" ], [ "df_merge.drop(['retweet_count_x','favorite_count_x'],axis=1,inplace=True)", "_____no_output_____" ], [ "df_merge.rename(columns = {'retweet_count_y':'retweet_count','favorite_count_y':'favorite_count'}, inplace = True) ", "_____no_output_____" ], [ "df_img.head()", "_____no_output_____" ], [ "df_img.drop(['jpg_url','img_num','p1_conf','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'],axis=1,inplace=True)", "_____no_output_____" ], [ "df_img", "_____no_output_____" ], [ "df_img.drop(df_img[df_img['p1_dog'] == False].index, inplace=True)", "_____no_output_____" ], [ "df_img", "_____no_output_____" ], [ "df_merge.head()", "_____no_output_____" ], [ "df_merge.to_csv('twitter_archive_master.csv', encoding='utf-8', index=False)", "_____no_output_____" ] ], [ [ "#### Test:", "_____no_output_____" ] ], [ [ "df = pd.read_csv('twitter_archive_master.csv')\ndf.head()", "_____no_output_____" ], [ "df.stage.value_counts()", "_____no_output_____" ] ], [ [ "## Analysis:", "_____no_output_____" ] ], [ [ "df.info()\nimport seaborn as sns", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2328 entries, 0 to 2327\nData columns (total 12 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tweet_id 2328 non-null int64 \n 1 timestamp 2328 non-null object\n 2 source 2328 non-null object\n 3 text 2322 non-null object\n 4 expanded_urls 2275 non-null object\n 5 rating_numerator 2328 non-null int64 \n 6 rating_denominator 2328 non-null int64 \n 7 name 2328 non-null object\n 8 rating_dinominator 2328 non-null int64 \n 9 stage 2328 non-null object\n 10 retweet_count 2328 non-null int64 \n 11 favorite_count 2328 non-null int64 \ndtypes: int64(6), object(6)\nmemory usage: 218.4+ KB\n" ] ], [ [ "Analysis of the largest tweet based on th year", "_____no_output_____" ] ], [ [ "pd.DatetimeIndex(df['timestamp']).year.value_counts().plot(kind='bar');", "_____no_output_____" ] ], [ [ "### 1.Here we can see that year 2016 had the highest number of tweets followed by 2015 and 2017", "_____no_output_____" ] ], [ [ "df.source.value_counts().plot(kind='barh');", "_____no_output_____" ] ], [ [ "### 2.Most of the tweets have come from mobile devices through the twitter app.", "_____no_output_____" ] ], [ [ "df['rating_numerator'].value_counts().sort_index().plot(kind='bar')", "_____no_output_____" ] ], [ [ "### 3.Highest ratings is 12 that is more than 500 tweets. 
The least common is 15.", "_____no_output_____" ] ], [ [ "names = df.drop(df[df['name'] == 'None'].index) \nnames.name.value_counts().iloc[:10].plot(kind='barh')", "_____no_output_____" ] ], [ [ "### 4.Charlie is the most commonly used dog name.", "_____no_output_____" ] ], [ [ "df_img.p1.value_counts().iloc[:10].plot(kind='barh')", "_____no_output_____" ] ], [ [ "### 5.Golden retriever is the breed that came up most often in tweets, followed by Labrador.", "_____no_output_____" ] ], [ [ "retweet = df.retweet_count.mean()\nfav = df.favorite_count.mean()\n", "_____no_output_____" ], [ "fav", "_____no_output_____" ], [ "retweet", "_____no_output_____" ] ], [ [ "### 6.The mean retweet count is 3181 and the mean favorite count is 8122", "_____no_output_____" ] ], [ [ "\n", "_____no_output_____" ] ] ]
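The Define step for issue 6 above describes recovering the rating from the tweet text itself, which the cells only approximate by zeroing values. A minimal sketch of that text-based approach, assuming a pandas DataFrame shaped like `df_twit`; the `extract_numerator` helper, the regex, and the demo frame are illustrative, not from the original notebook:

```python
import re

import pandas as pd


def extract_numerator(text, current):
    # Pull the numerator of the first "n/10" fraction found in the tweet
    # text; fall back to the current value when no such fraction exists.
    match = re.search(r"(\d+(?:\.\d+)?)/10\b", str(text))
    return float(match.group(1)) if match else current


# Tiny illustrative frame standing in for df_twit
demo = pd.DataFrame({
    "text": ["This is Bella. 13/10 would pet", "no rating in this text"],
    "rating_numerator": [5, 7],
})
demo["rating_numerator"] = [
    extract_numerator(t, r)
    for t, r in zip(demo["text"], demo["rating_numerator"])
]
print(demo)
```

Applied to the real archive, the same comprehension would run over `df_twit["text"]` before any blanket replacement, so legitimate ratings embedded in the text survive the cleaning.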
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e7282d7f2437129c34d642fde077274336b47d03
18,948
ipynb
Jupyter Notebook
8-2-Word2vec-(gensim).ipynb
abalvet/hands-on-nltk-tutorial
e51a56c7612ed98600845c892b784cca0d87e3a8
[ "MIT" ]
4
2021-04-23T05:27:13.000Z
2022-02-07T16:09:13.000Z
8-2-Word2vec-(gensim).ipynb
kpradyumna095/hands-on-nltk-tutorial
e51a56c7612ed98600845c892b784cca0d87e3a8
[ "MIT" ]
36
2020-09-28T16:47:04.000Z
2020-12-14T17:42:53.000Z
8-2-Word2vec-(gensim).ipynb
kpradyumna095/hands-on-nltk-tutorial
e51a56c7612ed98600845c892b784cca0d87e3a8
[ "MIT" ]
1
2021-01-01T01:47:41.000Z
2021-01-01T01:47:41.000Z
53.374648
9,410
0.655373
[ [ [ "# Word2Vec (gensim)", "_____no_output_____" ], [ "Based on [Dive Into NLTK, Part X: Play with Word2Vec Models based on NLTK Corpus by TextMiner](http://textminingonline.com/dive-into-nltk-part-x-play-with-word2vec-models-based-on-nltk-corpus)", "_____no_output_____" ], [ "## 1. Exploring the `gutenburg` corpus", "_____no_output_____" ], [ "Project Gutenberg (PG) is a volunteer effort to digitize and archive cultural works. Most of the items in its collection are full texts of public domain books.", "_____no_output_____" ] ], [ [ "from nltk.corpus import gutenberg\ngutenberg.readme().replace('\\n', ' ')", "_____no_output_____" ], [ "gutenberg.fileids()", "_____no_output_____" ], [ "bible_kjv_sents = gutenberg.sents('bible-kjv.txt')\nlen(bible_kjv_sents)", "_____no_output_____" ] ], [ [ "## 2. Implementing Word2Vec", "_____no_output_____" ] ], [ [ "from string import punctuation\n\ndiscard_punctuation_and_lowercased_sents = [[word.lower() for word in sent if word not in punctuation and word.isalpha()] \n for sent in bible_kjv_sents]\ndiscard_punctuation_and_lowercased_sents[3]", "_____no_output_____" ], [ "from gensim.models import Word2Vec\n\nbible_kjv_word2vec_model = word2vec.Word2Vec(discard_punctuation_and_lowercased_sents, min_count=5, size=200)\nbible_kjv_word2vec_model.save('bible_word2vec_gensim')\n# model = Word2Vec.load(fname) # To load a model\nword_vectors = bible_kjv_word2vec_model.wv\ndel bible_kjv_word2vec_model # When we finish training the model, we can only delete it and keep the word vectors.\nword_vectors.save_word2vec_format('bible_word2vec_org', 'bible_word2vec_vocabulary')\nlen(word_vectors.vocab)", "_____no_output_____" ], [ "word_vectors.most_similar(['god']) # Most similar as in closest in the word graph. Word2vec is essentially about proportions of word occurrences in relations holding in general over large corpora of text. Consider word analogy ‘man is to woman as king is to X’ which was famously demonstrated in word2vec. The algorithm is able to come up with an answer queen, almost magically by simple vector differences. The main idea, called distributional hypothesis, is that similar words appear in similar contexts of words around them.", "_____no_output_____" ], [ "word_vectors.most_similar(['heaven'], topn=3)", "_____no_output_____" ], [ "word_vectors.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)", "_____no_output_____" ], [ "# The `_cosmul` variant uses a slightly-different comparison when using multiple positive/negative examples (such as when asking about analogies). One paper has shown it does better:\nword_vectors.most_similar_cosmul(positive=['woman', 'king'], negative=['man'], topn=1)", "_____no_output_____" ], [ "word_vectors.similarity('lord', 'god')", "_____no_output_____" ], [ "word_vectors.doesnt_match(\"lord god salvation food spirit\".split())", "_____no_output_____" ], [ "# Probability of a text under the model\n# bible_kjv_word2vec_model.score([\"In the beginning\".split()]) # Doesn't work for 2 reasons: 1. I deleted the model. 2. It has only been implemented for models made with certain arguments as of the time of writing.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e72830feb74084aaa7551710c5c3cb7e70e6c11d
833,229
ipynb
Jupyter Notebook
scratch/EDA.ipynb
pezpet/capstone
bd2c3a680aa0ebb41d9c2d8cb458e38d112527bf
[ "CC0-1.0" ]
null
null
null
scratch/EDA.ipynb
pezpet/capstone
bd2c3a680aa0ebb41d9c2d8cb458e38d112527bf
[ "CC0-1.0" ]
null
null
null
scratch/EDA.ipynb
pezpet/capstone
bd2c3a680aa0ebb41d9c2d8cb458e38d112527bf
[ "CC0-1.0" ]
null
null
null
1,717.997938
816,514
0.95467
[ [ [ "# EDA", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Data", "_____no_output_____" ] ], [ [ "data = pd.read_csv('./data/merged_data.csv')\ndata.drop(columns='Unnamed: 0', inplace=True)\ndata.head()", "_____no_output_____" ] ], [ [ "## Pairplot", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(8, 12))\nsns.pairplot(\n data=data[\n [\n \"wave_height\",\n \"dominant_period\",\n \"avg_period\",\n \"water_temp\",\n \"dominant_wave_direction_sin\",\n \"dominant_wave_direction_cos\",\n \"wind_speed\",\n \"gust_speed\",\n \"wind_direction_sin\",\n \"wind_direction_cos\",\n \"score\",\n ]\n ],\n corner=True,\n);\n", "_____no_output_____" ], [ "data.corr()['score'].sort_values()", "_____no_output_____" ], [ "data.corr()['score'].sort_values(ascending=False)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e72842a5546cb02c8518d46157fb251186d8f02e
86,060
ipynb
Jupyter Notebook
sst/.ipynb_checkpoints/03_Eval_Attr-checkpoint.ipynb
xszheng2020/memorization
6270df8db388922fc35d6cd7b23112e74fbbe1f6
[ "Apache-2.0" ]
4
2022-03-16T12:05:47.000Z
2022-03-28T12:21:36.000Z
sst/03_Eval_Attr.ipynb
xszheng2020/memorization
6270df8db388922fc35d6cd7b23112e74fbbe1f6
[ "Apache-2.0" ]
null
null
null
sst/03_Eval_Attr.ipynb
xszheng2020/memorization
6270df8db388922fc35d6cd7b23112e74fbbe1f6
[ "Apache-2.0" ]
null
null
null
42.103718
533
0.483604
[ [ [ "import os", "_____no_output_____" ], [ "import pickle", "_____no_output_____" ], [ "random_seed = 42", "_____no_output_____" ], [ "import random\nrandom.seed(random_seed)\nimport numpy as np\nnp.random.seed(random_seed)\n\nimport pandas as pd\npd.set_option('display.max_rows', 512)\npd.set_option('display.max_columns', 512)\npd.set_option('display.max_colwidth', -1)", "_____no_output_____" ], [ "import csv", "_____no_output_____" ], [ "import matplotlib\nimport matplotlib.pyplot as plt\nfrom IPython.display import set_matplotlib_formats\n\n%matplotlib inline\nset_matplotlib_formats('svg')", "_____no_output_____" ], [ "import torch\nimport torch.nn as nn\nfrom torch.nn import functional as F\n\nfrom torch.utils.data import Dataset", "_____no_output_____" ], [ "torch.manual_seed(random_seed)\ntorch.backends.cudnn.deterministic = True\ntorch.backends.cudnn.benchmark = False", "_____no_output_____" ], [ "from tqdm import tqdm", "_____no_output_____" ], [ "from sklearn.metrics import classification_report\nfrom sklearn.metrics import mean_squared_error\n", "_____no_output_____" ], [ "from scipy import stats", "_____no_output_____" ], [ "def get_prob(label, prob):\n return prob[label]", "_____no_output_____" ], [ "output_collections_list = []\nfor idx in range(0, 6000+1, 1000):\n with open(\"saved/score_42/{}.pkl\".format(idx), \"rb\") as handle:\n output_collections = pickle.load(handle) \n# print(output_collections[0]['prob'])\n print(len(output_collections))\n \n if idx==0:\n output_collections_list = output_collections\n else:\n output_collections_list += output_collections\nlen(output_collections_list) ", "1000\n1000\n1000\n1000\n1000\n1000\n920\n" ], [ "data = []\nfor i in output_collections_list:\n data.append([i['index'], \n i['influence_prime'], \n i['influence'], \n i['diff'], i['theta'], \n i['sentence'], i['label'], i['prediction'], i['prob'],\n i['tokens'], i['attributions'],\n ])\n\ndf = pd.DataFrame(data, columns=['sample_index', \n 'influence_prime', \n 'influence', \n 'diff', 'theta', \n 'sentence', 'label', 'prediction', 'prob', \n 'tokens', 'attributions'\n ])", "_____no_output_____" ], [ "df_sorted = df.sort_values(by=['influence'], ascending=False) ", "_____no_output_____" ], [ "df_sorted.head()", "_____no_output_____" ], [ "output_collections_list = []\nfor idx in range(10, 90+1, 10):\n with open(\"saved/eval_attr/mem/{}/0.pkl\".format(idx), \"rb\") as handle:\n output_collections = pickle.load(handle) \n \n if idx==0:\n output_collections_list = output_collections\n else:\n output_collections_list += output_collections\nlen(output_collections_list) ", "_____no_output_____" ], [ "data = []\nfor i in output_collections_list:\n data.append([i['index'], \n i['influence'], \n i['sentence'], i['label'], i['prediction'], i['prob'],\n ])\n\ndf_0 = pd.DataFrame(data, columns=['sample_index', \n 'influence', \n 'sentence', 'label', 'prediction', 'prob', \n ])\n", "_____no_output_____" ], [ "output_collections_list = []\nfor idx in range(10, 90+1, 10):\n with open(\"saved/eval_attr/random_0/{}/0.pkl\".format(idx), \"rb\") as handle:\n output_collections = pickle.load(handle) \n \n if idx==0:\n output_collections_list = output_collections\n else:\n output_collections_list += output_collections\nlen(output_collections_list) ", "_____no_output_____" ], [ "data = []\nfor i in output_collections_list:\n data.append([i['index'], \n i['influence'], \n i['sentence'], i['label'], i['prediction'], i['prob'],\n ])\n\ndf_1 = pd.DataFrame(data, columns=['sample_index', \n 'influence', \n 
'sentence', 'label', 'prediction', 'prob', \n ])\n", "_____no_output_____" ], [ "output_collections_list = []\nfor idx in range(10, 90+1, 10):\n with open(\"saved/eval_attr/random_2/{}/0.pkl\".format(idx), \"rb\") as handle:\n output_collections = pickle.load(handle) \n \n if idx==0:\n output_collections_list = output_collections\n else:\n output_collections_list += output_collections\nlen(output_collections_list) ", "_____no_output_____" ], [ "data = []\nfor i in output_collections_list:\n data.append([i['index'], \n i['influence'], \n i['sentence'], i['label'], i['prediction'], i['prob'],\n ])\n\ndf_2 = pd.DataFrame(data, columns=['sample_index', \n 'influence', \n 'sentence', 'label', 'prediction', 'prob', \n ])\n", "_____no_output_____" ], [ "output_collections_list = []\nfor idx in range(10, 90+1, 10):\n with open(\"saved/eval_attr/random/{}/0.pkl\".format(idx), \"rb\") as handle:\n output_collections = pickle.load(handle) \n \n if idx==0:\n output_collections_list = output_collections\n else:\n output_collections_list += output_collections\nlen(output_collections_list) ", "_____no_output_____" ], [ "data = []\nfor i in output_collections_list:\n data.append([i['index'], \n i['influence'], \n i['sentence'], i['label'], i['prediction'], i['prob'],\n ])\n\ndf_3 = pd.DataFrame(data, columns=['sample_index', \n 'influence', \n 'sentence', 'label', 'prediction', 'prob', \n ])\n", "_____no_output_____" ], [ "(df_0['influence'] < df_3['influence']).sum()", "_____no_output_____" ], [ "results_0 = []\nfor i in range(9):\n start = i*692\n end = (i+1)*692\n results_0.append(((df_sorted.head(692)['influence'].values - df_0.iloc[start: end]['influence'].values)/df_sorted.head(692)['influence'].values).sum() / 692 * 100)\nresults_0", "_____no_output_____" ], [ "results_1 = []\nfor i in range(9):\n start = i*692\n end = (i+1)*692\n results_1.append(((df_sorted.head(692)['influence'].values - df_1.iloc[start: end]['influence'].values)/df_sorted.head(692)['influence'].values).sum() / 692 * 100)\nresults_1", "_____no_output_____" ], [ "results_2 = []\nfor i in range(9):\n start = i*692\n end = (i+1)*692\n results_2.append(((df_sorted.head(692)['influence'].values - df_2.iloc[start: end]['influence'].values)/df_sorted.head(692)['influence'].values).sum() / 692 * 100)\nresults_2", "_____no_output_____" ], [ "results_3 = []\nfor i in range(9):\n start = i*692\n end = (i+1)*692\n results_3.append(((df_sorted.head(692)['influence'].values - df_3.iloc[start: end]['influence'].values)/df_sorted.head(692)['influence'].values).sum() / 692 * 100)\nresults_3", "_____no_output_____" ], [ "matplotlib.rcParams.update({'font.size': 24})\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 8))\n\nx = range(10, 100, 10)\n\nresults = list((np.array(results_1)+np.array(results_2)+np.array(results_3))/3)\n\nax.errorbar(x, results, \n yerr=None, \n linestyle='-', \n fmt='o', label='random attribution', color='tab:green', capsize=5, \n# alpha=0.6\n )\n\nax.errorbar(x, results_0, \n yerr=None, \n linestyle='-', \n fmt='o', label='mem attribution', color='tab:orange', capsize=5, \n# alpha=0.6\n )\n\nax.legend(fontsize=20)\n\n\n\nax.grid(True)\n\nax.set_xlabel(\"Fraction Removed (%)\")\nax.set_xticks(x)\n\nax.set_ylabel(\"Reduction Rate (%)\")\n# ax.set_yticks(np.arange(0, 100.00+10, 10))\nax.set_yticks(np.arange(np.round(min(results+results_0), -1), np.round(max(results+results_0), -1)+10, 10))\n\nfilename = \"saved/vis/Eval_Attr_SST.pdf\"\nos.makedirs(os.path.dirname(filename), exist_ok=True) \n\nplt.savefig(filename, 
bbox_inches='tight', pad_inches=0.1) ", "_____no_output_____" ] ] ]
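The four reduction-rate loops above (`results_0` through `results_3`) repeat the same per-fraction arithmetic. A minimal sketch that factors it into one helper, assuming the frames built above (`df_sorted`, `df_0`, ...); the function name and the chunking by `len(removed_df) // n` are illustrative:

```python
def reduction_rate(baseline_df, removed_df, n=692):
    # Mean percentage drop in influence relative to the top-n baseline,
    # computed separately for each removal fraction (chunks of n rows).
    base = baseline_df.head(n)['influence'].values
    rates = []
    for i in range(len(removed_df) // n):
        chunk = removed_df.iloc[i * n:(i + 1) * n]['influence'].values
        rates.append(((base - chunk) / base).sum() / n * 100)
    return rates

# e.g. results_0 = reduction_rate(df_sorted, df_0)
```

This reproduces the cell-by-cell computation exactly while making the shared constant (the 692-row top-influence slice) explicit in one place.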
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e728433fe209ae5bce10e2254c8662dd29c60edf
75,266
ipynb
Jupyter Notebook
notebooks/03_auto_aggregation.ipynb
suhara/learnit
5db6e80d2bd2d1bf1a381f668db40288888236e4
[ "Apache-2.0" ]
5
2020-02-27T22:16:29.000Z
2020-07-12T21:50:38.000Z
notebooks/03_auto_aggregation.ipynb
suhara/learnit
5db6e80d2bd2d1bf1a381f668db40288888236e4
[ "Apache-2.0" ]
2
2020-02-28T04:06:04.000Z
2020-02-28T04:06:08.000Z
notebooks/03_auto_aggregation.ipynb
suhara/learnit
5db6e80d2bd2d1bf1a381f668db40288888236e4
[ "Apache-2.0" ]
1
2020-02-27T22:19:17.000Z
2020-02-27T22:19:17.000Z
60.503215
27,560
0.671073
[ [ [ "# AutoAggregator for Merging External Tables\n\nIn many situations your data may be stored in several tables. Let's see how `AutoConverter` can take care of it. After you perform conversion you can proceed with `AutoLearn` as usual.\n\nSometimes not all the data is stored in one table. For example you can have some user information in one table and some additional data in other tables, such as transactions, user activity logs etc. If you are trying to predict something about one user you can have hundreds of transactions for each user. And you want to use this data too.\n\nIf you have multiple data tables you want to use for your task you can use\nAutoConverter for linking these tables and grouping values. \n\nLet's load the libraries that we are going to use and have a closer look on the datasets:\n", "_____no_output_____" ] ], [ [ "import os\nimport sys\nsys.path.append('../')\nimport pandas as pd\nimport warnings\nwarnings.filterwarnings('ignore')\nfrom learnit import AutoConverter, AutoLearn, Evaluate\nwarnings.filterwarnings('ignore')\n\ndirpath = \"data/kaggle-kkbox-churn-prediction-challenge-1k/\"\ndf_main = pd.read_csv(os.path.join(dirpath, 'members_train.csv'), parse_dates=['registration_init_time', 'expiration_date'])\ndf_sub = pd.read_csv(os.path.join(dirpath, 'transactions.csv'), parse_dates=['membership_expire_date', 'transaction_date'])\ndf_sub2 = pd.read_csv(os.path.join(dirpath, 'user_logs.csv'), parse_dates=['date'])", "Using TensorFlow backend.\n" ] ], [ [ "You need to predict churn of the users and you have two\ntables:\n\n1) users table. This table consists of users, some information about them and, most importantly the target values (labels for prediction). The latter suggests that this should be the main table for the purpose of `AutoConverter`. This table also contains ID column that we are going to use to link it to the second table.\n\n", "_____no_output_____" ] ], [ [ "df_main.head()", "_____no_output_____" ] ], [ [ "\n\n2) user_transactions table: this table consists of separate transaction records. Each user, represented by User_ID, can have multiple transactions. In order to get a single feature matrix for our learning algorithm, we will need to group this information by User_ID before we merge user table with user_transactions table. But don't worry! `AutoConverter` will take care of this!\n\n", "_____no_output_____" ] ], [ [ "df_sub.head()", "_____no_output_____" ], [ "df_sub2.head()", "_____no_output_____" ] ], [ [ "Let's take care about both of the tables. For that we will create `AutoConverter` object as usual. We are also creating `subtables` dictionary that tells the system how to link the secondary tables to the main one. For each secondary table this dictionary contains its name, the variable that contains the table (`df_sub` in our case), the column in the main table that is used for linking (`link_key`) and the column of the secondary table used for linking and grouping items (`group_key`). 
You can use as many secondary tables as you want.", "_____no_output_____" ] ], [ [ "ac = AutoConverter(target='is_churn')\n\"\"\"subtables = {'second_table': {'table': df_sub,\n 'link_key': 'msno',\n 'group_key': 'msno'}}\"\"\"\nsubtables = {'transactions': {'table': df_sub,\n 'link_key': 'msno',\n 'group_key': 'msno'},\n 'user_logs': {'table': df_sub2,\n 'link_key': 'msno',\n 'group_key': 'msno'}}\nX, y = ac.fit_transform(df_main, subtables)\n\nprint(X.shape)\nac.feature_names", "(898, 302)\n" ] ], [ [ "As we can see, we now have features from the main table as well as from the secondary tables. Printing out the feature names also gives us an understanding of how well the system has extracted the features. ", "_____no_output_____" ], [ "Now we can train the model using `AutoLearn` the same way as we did before.", "_____no_output_____" ] ], [ [ "print(X.shape)\nal = AutoLearn(level=2)\nal.learn(X, y)\ninfo, metrics = al.display(tab=False)\ninfo.head()", "(898, 302)\nLearning from data...\n" ], [ "metrics.head()", "_____no_output_____" ], [ "#Visualize train/test performance with error bars (standard deviation)\nmetrics.plot(kind=\"bar\", rot=0)", "_____no_output_____" ] ], [ [ "It is usual for predictive models to achieve better results on the *training set* than on the *test set*. This happens because the model can see the data in the *training set* and take it into account, but cannot see the data from the *test set*. This difference, however, should not be too big. For example, if the model achieves 99% on training data and 60% on test data, that would indicate that the model is overfitting the data and fails to generalize to unseen data. It would be a good idea to increase the dataset size in this case.", "_____no_output_____" ], [ "## Understanding Predictive Models via Feature Importance Calculation\nLet's have a closer look at the model that AutoLearn trained for us. Here's how we can get the full list of metrics for the given model.\nUsers usually want to understand what stands behind the model's decisions. One way to do so is to calculate the feature importance. This metric shows which data column contributes most to the decisions the model has made. This is a good way to perform a sanity check of the model. \n\nTo activate the function we use the pre-fit `AutoConverter` and the pre-trained `AutoLearn` instance.", "_____no_output_____" ] ], [ [ "# Feature Importance calculation with Evaluate\ne = Evaluate(ac=ac, alearn=al)\ncol_imp_df = e.calculate_column_importance()\npd.DataFrame(col_imp_df[\"roc_auc\"])", "_____no_output_____" ], [ "# Will be implemented as a function. Don't worry!\ncol_imp_df[\"roc_auc\"].sort_values().plot(kind=\"barh\")", "_____no_output_____" ] ] ]
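To make the `link_key`/`group_key` mechanics concrete, here is roughly what the aggregation-plus-merge would look like in plain pandas, using the `df_main` and `df_sub` frames loaded above. This is an illustrative sketch of the general technique, not learnit's actual implementation, and the two aggregation functions chosen are assumptions:

```python
# Aggregate each user's transactions down to one row per msno ...
agg = df_sub.groupby('msno').agg(
    n_transactions=('transaction_date', 'count'),
    last_expire=('membership_expire_date', 'max'),
)
# ... then left-join the aggregates onto the main members table,
# so each user keeps exactly one feature row.
merged = df_main.merge(agg, left_on='msno', right_index=True, how='left')
print(merged.shape)
```

AutoConverter automates the choice and breadth of such aggregation functions; the named aggregations above are just two plausible examples of what a grouped secondary table can contribute.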
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
e72843c7c67aba730d0bb6551e0904a725eb14c5
381,643
ipynb
Jupyter Notebook
assignment-1-part-2-KG/.ipynb_checkpoints/assignment-1-part-2-machine-reading-kg-checkpoint.ipynb
dbmi-pitt/bioinf-2071-knowledge-graph
f80f90309743466d7d0c97ae3b9303363cf789d5
[ "Apache-2.0" ]
1
2021-02-11T23:06:04.000Z
2021-02-11T23:06:04.000Z
assignment-1-part-2-KG/.ipynb_checkpoints/assignment-1-part-2-machine-reading-kg-checkpoint.ipynb
dbmi-pitt/bioinf-2071-knowledge-graph
f80f90309743466d7d0c97ae3b9303363cf789d5
[ "Apache-2.0" ]
null
null
null
assignment-1-part-2-KG/.ipynb_checkpoints/assignment-1-part-2-machine-reading-kg-checkpoint.ipynb
dbmi-pitt/bioinf-2071-knowledge-graph
f80f90309743466d7d0c97ae3b9303363cf789d5
[ "Apache-2.0" ]
null
null
null
35.899069
1,143
0.509392
[ [ [ "## Assignment 1 Part II - machine reading knowledge graph instance closure", "_____no_output_____" ], [ "In this part of the Assignment I you will learn about applying logical closure over a knowledge graph creating using predications extracted using machine reading.\n\nThis notebook is broken into 3 parts. \n\nFirst, you will learn about the rule programming paradigm using CLIPS through a python library called ClipsPy. You will also learn some recmmended practices for writing production rules. \n\nSecond, you will create a domain of discourse that fits your interests using PubMed and eutils. You will query predications for the domain of discourse from SemMedDB.\n\nFinally, you will run transitive and symmetric closer over the. You will also compare results with the predication set prior to closure.\n", "_____no_output_____" ], [ "### 1) The rule programming paradigm and recommended practices for writing production rules.", "_____no_output_____" ], [ "Complete the cells and answer the questions in [the ClipsPy Hands On Tutorial](clipspy-handson-tutorial.ipynb)", "_____no_output_____" ], [ "### 2) Create a domain of discourse that fits your interest and query for predications.", "_____no_output_____" ], [ "#### Assign to the list variable between 100 and 300 PMIDs from a biomedical domain that you have a particular interest in. These could be from a collection you have access to or from a new search. For example, you might pick a disease of interest and search for systematic reviews, meta-analyses, or other study types (e.g., animal or in vitro studies). Do not do more than 300 because it will overtax the server later on when it queries SemMedDB. Be sure to list PMIDs as integer types e.g., 1,2,3. ", "_____no_output_____" ] ], [ [ "seed_pmid_L = [27569062,24283439,22098123,21062266,20518782,17184278,15096145,10333626,7917994,1619641,1842204,22946697,3789005,3712393,4697981,15490149,15490149,11078563,22837569,19047498,16825433,12522802,12494447,12239713,8725790,9147900,8867662,7917994,1358488,1867292,2063896,1915501,2596507,2596506,2596505,3265306,3425595,26293599,23379592,25049390,22946697,21797849,21178588,20735442,20030748,20882035,19206169,19120036,19156172,19376813,18042262,18456719,17661820,17551924,18039946,16439621,16804887,16474404,16372351,15488754,12752577,12529711,12634870,12494447,12529707,12522804,12116271,12384786,11102944,10925386,11078563,9781012,9272711,8725790,1619641,1785648,3265306,3789005,4697981,18039235,17611774,11078563,2149308,25742478,24705357,21500339,18854871,15488754,10528867,21204800,19376813,18039946,26446362,18039946,24101678,27149079,24458522,26150740,17703371,22946697,11078563,18039235,20735442,21495172,21495178,21062266,17056636,26242988,21495174,16906537,19063751,17974516,20186801,21204800,16520774]", "_____no_output_____" ], [ "import entrezpy.elink.elinker", "_____no_output_____" ] ], [ [ "#### The next two cells obtain PMIDs for articles cited in the PMIDs you chose and also articles that cite the PMIDs you chose. Both cells uses the eutils API with my email and API key. There is only a very tiny difference in the eutils call.\n\n#### Note: the call returns an error if it cannot retrieve citations for a given PMID. 
You can ignore the errors so long is it is retrieving citations for most PMIDs.", "_____no_output_____" ] ], [ [ "cited_in_pmids = {}\nfor pmid in seed_pmid_L: \n print(\"INFO: pmid {}\".format(pmid))\n e = entrezpy.elink.elinker.Elinker('bioinf2071-course-notebook',\n '[email protected]',\n '2f8f82da766ed382a93ccb66e19b07a4ef09')\n try:\n analyzer = e.inquire({'db':'pubmed',\n 'dbfrom':'pubmed',\n 'linkname' : 'pubmed_pubmed_citedin',\n 'id' : [pmid]}\n )\n except TypeError:\n print(\"ERROR: problem retrieving citations for {}\".format(pmid))\n continue\n \n r = analyzer.get_result().dump()\n linkunits = [x['linkunits'] for x in r['linksets']]\n\n for l in linkunits:\n for u in l:\n cited_in_pmids[u['uid']] = None\n\n# print(cited_in_pmids.keys())\n", "INFO: pmid 27569062\nINFO: pmid 24283439\nINFO: pmid 22098123\nINFO: pmid 21062266\nINFO: pmid 20518782\nINFO: pmid 17184278\nERROR: problem retrieving citations for 17184278\nINFO: pmid 15096145\nINFO: pmid 10333626\nERROR: problem retrieving citations for 10333626\nINFO: pmid 7917994\nINFO: pmid 1619641\nINFO: pmid 1842204\nERROR: problem retrieving citations for 1842204\nINFO: pmid 22946697\nINFO: pmid 3789005\nINFO: pmid 3712393\nINFO: pmid 4697981\nINFO: pmid 15490149\nINFO: pmid 15490149\nINFO: pmid 11078563\nINFO: pmid 22837569\nINFO: pmid 19047498\nINFO: pmid 16825433\nINFO: pmid 12522802\nINFO: pmid 12494447\nERROR: problem retrieving citations for 12494447\nINFO: pmid 12239713\nINFO: pmid 8725790\nINFO: pmid 9147900\nERROR: problem retrieving citations for 9147900\nINFO: pmid 8867662\nINFO: pmid 7917994\nINFO: pmid 1358488\nERROR: problem retrieving citations for 1358488\nINFO: pmid 1867292\nINFO: pmid 2063896\nINFO: pmid 1915501\nINFO: pmid 2596507\nINFO: pmid 2596506\nINFO: pmid 2596505\nINFO: pmid 3265306\nINFO: pmid 3425595\nINFO: pmid 26293599\nINFO: pmid 23379592\nINFO: pmid 25049390\nINFO: pmid 22946697\nINFO: pmid 21797849\nINFO: pmid 21178588\nINFO: pmid 20735442\nINFO: pmid 20030748\nINFO: pmid 20882035\nINFO: pmid 19206169\nINFO: pmid 19120036\nINFO: pmid 19156172\nINFO: pmid 19376813\nINFO: pmid 18042262\nINFO: pmid 18456719\nINFO: pmid 17661820\nINFO: pmid 17551924\nINFO: pmid 18039946\nINFO: pmid 16439621\nINFO: pmid 16804887\nINFO: pmid 16474404\nINFO: pmid 16372351\nINFO: pmid 15488754\nINFO: pmid 12752577\nINFO: pmid 12529711\nINFO: pmid 12634870\nINFO: pmid 12494447\nERROR: problem retrieving citations for 12494447\nINFO: pmid 12529707\nINFO: pmid 12522804\nERROR: problem retrieving citations for 12522804\nINFO: pmid 12116271\nINFO: pmid 12384786\nINFO: pmid 11102944\nERROR: problem retrieving citations for 11102944\nINFO: pmid 10925386\nINFO: pmid 11078563\nINFO: pmid 9781012\nINFO: pmid 9272711\nINFO: pmid 8725790\nINFO: pmid 1619641\nINFO: pmid 1785648\nERROR: problem retrieving citations for 1785648\nINFO: pmid 3265306\nINFO: pmid 3789005\nINFO: pmid 4697981\nINFO: pmid 18039235\nINFO: pmid 17611774\nINFO: pmid 11078563\nINFO: pmid 2149308\nINFO: pmid 25742478\nINFO: pmid 24705357\nINFO: pmid 21500339\nINFO: pmid 18854871\nINFO: pmid 15488754\nINFO: pmid 10528867\nINFO: pmid 21204800\nINFO: pmid 19376813\nINFO: pmid 18039946\nINFO: pmid 26446362\nINFO: pmid 18039946\nINFO: pmid 24101678\nINFO: pmid 27149079\nINFO: pmid 24458522\nINFO: pmid 26150740\nINFO: pmid 17703371\nINFO: pmid 22946697\nINFO: pmid 11078563\nINFO: pmid 18039235\nINFO: pmid 20735442\nINFO: pmid 21495172\nINFO: pmid 21495178\nINFO: pmid 21062266\nINFO: pmid 17056636\nINFO: pmid 26242988\nINFO: pmid 21495174\nINFO: pmid 16906537\nERROR: 
problem retrieving citations for 16906537\nINFO: pmid 19063751\nINFO: pmid 17974516\nERROR: problem retrieving citations for 17974516\nINFO: pmid 20186801\nINFO: pmid 21204800\nINFO: pmid 16520774\n" ], [ "citing_pmids = {}\nfor pmid in seed_pmid_L: \n print(\"INFO: pmid {}\".format(pmid))\n e = entrezpy.elink.elinker.Elinker('bioinf2071-course-notebook',\n '[email protected]',\n '2f8f82da766ed382a93ccb66e19b07a4ef09')\n try:\n analyzer = e.inquire({'db':'pubmed',\n 'dbfrom':'pubmed',\n 'linkname' : 'pubmed_pubmed_refs',\n 'id' : [pmid]}\n )\n except TypeError:\n print(\"ERROR: problem retrieving citations for {}\".format(pmid))\n continue \n \n r = analyzer.get_result().dump()\n linkunits = [x['linkunits'] for x in r['linksets']]\n\n for l in linkunits:\n for u in l:\n citing_pmids[u['uid']] = None\n# print(citing_pmids.keys())", "INFO: pmid 27569062\nINFO: pmid 24283439\nERROR: problem retrieving citations for 24283439\nINFO: pmid 22098123\nINFO: pmid 21062266\nINFO: pmid 20518782\nERROR: problem retrieving citations for 20518782\nINFO: pmid 17184278\nERROR: problem retrieving citations for 17184278\nINFO: pmid 15096145\nERROR: problem retrieving citations for 15096145\nINFO: pmid 10333626\nERROR: problem retrieving citations for 10333626\nINFO: pmid 7917994\nERROR: problem retrieving citations for 7917994\nINFO: pmid 1619641\nINFO: pmid 1842204\nERROR: problem retrieving citations for 1842204\nINFO: pmid 22946697\nINFO: pmid 3789005\nERROR: problem retrieving citations for 3789005\nINFO: pmid 3712393\nINFO: pmid 4697981\nINFO: pmid 15490149\nINFO: pmid 15490149\nINFO: pmid 11078563\nERROR: problem retrieving citations for 11078563\nINFO: pmid 22837569\nINFO: pmid 19047498\nERROR: problem retrieving citations for 19047498\nINFO: pmid 16825433\nINFO: pmid 12522802\nERROR: problem retrieving citations for 12522802\nINFO: pmid 12494447\nERROR: problem retrieving citations for 12494447\nINFO: pmid 12239713\nERROR: problem retrieving citations for 12239713\nINFO: pmid 8725790\nERROR: problem retrieving citations for 8725790\nINFO: pmid 9147900\nERROR: problem retrieving citations for 9147900\nINFO: pmid 8867662\nERROR: problem retrieving citations for 8867662\nINFO: pmid 7917994\nERROR: problem retrieving citations for 7917994\nINFO: pmid 1358488\nERROR: problem retrieving citations for 1358488\nINFO: pmid 1867292\nERROR: problem retrieving citations for 1867292\nINFO: pmid 2063896\nERROR: problem retrieving citations for 2063896\nINFO: pmid 1915501\nINFO: pmid 2596507\nERROR: problem retrieving citations for 2596507\nINFO: pmid 2596506\nERROR: problem retrieving citations for 2596506\nINFO: pmid 2596505\nERROR: problem retrieving citations for 2596505\nINFO: pmid 3265306\nERROR: problem retrieving citations for 3265306\nINFO: pmid 3425595\nERROR: problem retrieving citations for 3425595\nINFO: pmid 26293599\nINFO: pmid 23379592\nINFO: pmid 25049390\nINFO: pmid 22946697\nINFO: pmid 21797849\nERROR: problem retrieving citations for 21797849\nINFO: pmid 21178588\nERROR: problem retrieving citations for 21178588\nINFO: pmid 20735442\nERROR: problem retrieving citations for 20735442\nINFO: pmid 20030748\nERROR: problem retrieving citations for 20030748\nINFO: pmid 20882035\nERROR: problem retrieving citations for 20882035\nINFO: pmid 19206169\nINFO: pmid 19120036\nERROR: problem retrieving citations for 19120036\nINFO: pmid 19156172\nINFO: pmid 19376813\nINFO: pmid 18042262\nERROR: problem retrieving citations for 18042262\nINFO: pmid 18456719\nERROR: problem retrieving citations for 
18456719\nINFO: pmid 17661820\nERROR: problem retrieving citations for 17661820\nINFO: pmid 17551924\nERROR: problem retrieving citations for 17551924\nINFO: pmid 18039946\nERROR: problem retrieving citations for 18039946\nINFO: pmid 16439621\nERROR: problem retrieving citations for 16439621\nINFO: pmid 16804887\nERROR: problem retrieving citations for 16804887\nINFO: pmid 16474404\nERROR: problem retrieving citations for 16474404\nINFO: pmid 16372351\nERROR: problem retrieving citations for 16372351\nINFO: pmid 15488754\nERROR: problem retrieving citations for 15488754\nINFO: pmid 12752577\nERROR: problem retrieving citations for 12752577\nINFO: pmid 12529711\nERROR: problem retrieving citations for 12529711\nINFO: pmid 12634870\nERROR: problem retrieving citations for 12634870\nINFO: pmid 12494447\nERROR: problem retrieving citations for 12494447\nINFO: pmid 12529707\nERROR: problem retrieving citations for 12529707\nINFO: pmid 12522804\nERROR: problem retrieving citations for 12522804\nINFO: pmid 12116271\nERROR: problem retrieving citations for 12116271\nINFO: pmid 12384786\nERROR: problem retrieving citations for 12384786\nINFO: pmid 11102944\nERROR: problem retrieving citations for 11102944\nINFO: pmid 10925386\nERROR: problem retrieving citations for 10925386\nINFO: pmid 11078563\nERROR: problem retrieving citations for 11078563\nINFO: pmid 9781012\nERROR: problem retrieving citations for 9781012\nINFO: pmid 9272711\nERROR: problem retrieving citations for 9272711\nINFO: pmid 8725790\nERROR: problem retrieving citations for 8725790\nINFO: pmid 1619641\nINFO: pmid 1785648\nERROR: problem retrieving citations for 1785648\nINFO: pmid 3265306\nERROR: problem retrieving citations for 3265306\nINFO: pmid 3789005\nERROR: problem retrieving citations for 3789005\nINFO: pmid 4697981\nINFO: pmid 18039235\nERROR: problem retrieving citations for 18039235\nINFO: pmid 17611774\nINFO: pmid 11078563\nERROR: problem retrieving citations for 11078563\nINFO: pmid 2149308\nERROR: problem retrieving citations for 2149308\nINFO: pmid 25742478\nINFO: pmid 24705357\nINFO: pmid 21500339\nINFO: pmid 18854871\nINFO: pmid 15488754\nERROR: problem retrieving citations for 15488754\nINFO: pmid 10528867\nERROR: problem retrieving citations for 10528867\nINFO: pmid 21204800\nINFO: pmid 19376813\nINFO: pmid 18039946\nERROR: problem retrieving citations for 18039946\nINFO: pmid 26446362\nINFO: pmid 18039946\nERROR: problem retrieving citations for 18039946\nINFO: pmid 24101678\nINFO: pmid 27149079\nERROR: problem retrieving citations for 27149079\nINFO: pmid 24458522\nERROR: problem retrieving citations for 24458522\nINFO: pmid 26150740\nINFO: pmid 17703371\nERROR: problem retrieving citations for 17703371\nINFO: pmid 22946697\nINFO: pmid 11078563\nERROR: problem retrieving citations for 11078563\nINFO: pmid 18039235\nERROR: problem retrieving citations for 18039235\nINFO: pmid 20735442\nERROR: problem retrieving citations for 20735442\nINFO: pmid 21495172\nINFO: pmid 21495178\nERROR: problem retrieving citations for 21495178\nINFO: pmid 21062266\nINFO: pmid 17056636\nINFO: pmid 26242988\nINFO: pmid 21495174\nERROR: problem retrieving citations for 21495174\nINFO: pmid 16906537\nERROR: problem retrieving citations for 16906537\nINFO: pmid 19063751\nERROR: problem retrieving citations for 19063751\nINFO: pmid 17974516\nERROR: problem retrieving citations for 17974516\nINFO: pmid 20186801\nINFO: pmid 21204800\nINFO: pmid 16520774\nERROR: problem retrieving citations for 16520774\n" ], [ "all_pmids = seed_pmid_L + 
list(cited_in_pmids.keys()) + list(citing_pmids.keys())", "_____no_output_____" ], [ "len(all_pmids)", "_____no_output_____" ] ], [ [ "#### The next cells connect to an instance of SemMedDB running on the same server as this notebook. Below is query for predications extracted from the articles in the set of PMIDS you picked, citing artcles, and articles cited by. \n\n**NOTE:** The cell that runs the actual semmeddb query will likely run for more than 2 hours. This is because there are over 90 million predications in the database.\n\n**NOTE:** The VPN is likely to drop during that time. If that happens, you can reconnect to the VPN and then click on the red button that shows up in the upper right of the notebook indicating that the notebook is not connected. That will reset the connection. Since the notebook kernel is actually running on the server, all of the data in the notebook from previous cells should still be available. Sometimes, the mysql connection will 'go away' (die). You can re-run the block below to reconnect mysql if you need to. ", "_____no_output_____" ] ], [ [ "import MySQLdb \ncnx = MySQLdb.connect(user='semmeddb', passwd='semmeddb',\n host='127.0.0.1',\n db='semmeddb')\ncursor = cnx.cursor()", "_____no_output_____" ], [ "## We store the PMIDs list in a table on the database \nimport os\ncursor.execute('''DROP TABLE IF EXISTS t_pmids_{};'''.format(os.getenv('JUPYTERHUB_USER')))\n\ntable_description = '''\n CREATE TABLE IF NOT EXISTS `t_pmids_{}`(\n `pmid` char(25) NOT NULL\n ) ENGINE=InnoDB DEFAULT CHARACTER SET latin1 COLLATE=latin1_swedish_ci;'''.format(os.getenv('JUPYTERHUB_USER'))\n\ntry:\n print(\"Creating temp table for pmid list\")\n cursor.execute(table_description)\nexcept:\n print(\"ERROR: Table creation failed\")\nelse:\n print(\"OK\")", "Creating temp table for pmid list\nOK\n" ], [ "for pmid in all_pmids: \n add_pmid = ('''INSERT INTO t_pmids_{} (pmid) VALUES ({})'''.format(os.getenv('JUPYTERHUB_USER'),pmid))\n cursor.execute(add_pmid)\n \n# Make sure data is committed to the database\ncnx.commit()", "_____no_output_____" ], [ "## This is the main query - it joins the SemMedDB predication table with the PMIDs of interest. \n## NOTE: this query can take more than 2 hours to run\nsmdb_q = ''' \nselect p.predication_id,\n p.sentence_id,\n null as p_index, \n p.PMID as pmid, \n null as subject_source, \n SUBSTRING_INDEX(p.SUBJECT_CUI,'|',1) as subject_cui, \n p.SUBJECT_NAME as subject_name,\n p.SUBJECT_SEMTYPE as subject_type,\n null as subject_score,\n p.PREDICATE as predicate,\n null as object_source,\n SUBSTRING_INDEX(p.OBJECT_CUI,'|',1) as object_cui,\n p.OBJECT_NAME as object_name,\n p.OBJECT_SEMTYPE as object_type,\n null as object_score,\n 0.8 as belief,\n null as sentence,\n 'semmedb' as source\nfrom semmeddb.PREDICATION p\nwhere p.PMID in (select pmid from t_pmids_{})\n;\n'''.format(os.getenv('JUPYTERHUB_USER'))\ncursor.execute(smdb_q)\ndata = cursor.fetchall()", "_____no_output_____" ], [ "## Confirm that the results were returned and save to a pickle file\nimport pickle\nif len(data) > 0:\n print('INFO: SemMedDB Query returned {} predications.'.format(len(data)))\n f = open('semmedb-results.pickle','wb')\n pickle.dump(data,f)\n f.close()\n print('INFO: SemMedDB results saved to semmedb-results.pickle')\nelse:\n print('INFO: No SemMedDB results returned. 
If the previous query block ran without error, check if the PMIDs collection and loading blocks worked')", "INFO: SemMedDB Query returned 10757 predications.\nINFO: SemMedDB results saved to semmedb-results.pickle\n" ], [ "## If the query as successful, then we can drop the teporary table\ncursor.execute('''DROP TABLE t_pmids_{};'''.format(os.getenv('JUPYTERHUB_USER')))", "_____no_output_____" ] ], [ [ "#### The next cells give some summary information about the predications extracted from the cells above. Take a look at some of the predications and answer the questions that follow. ", "_____no_output_____" ] ], [ [ "import random\nsampleIdxL = random.sample(range(0,len(data)), 100)", "_____no_output_____" ], [ "## Obtain predication score data \nscoreData = []\nfor idx in sampleIdxL:\n cursor.execute('''SELECT * FROM PREDICATION_AUX WHERE PREDICATION_ID = {};'''.format(data[idx][0]))\n scoreRec = cursor.fetchall()\n scoreData.append(scoreRec[0])", "_____no_output_____" ], [ "## Obtain the sentences from which predications were read by machine reader\nsentenceData = []\nfor idx in sampleIdxL:\n cursor.execute('''SELECT * FROM SENTENCE WHERE SENTENCE_ID = {};'''.format(data[idx][1]))\n sentenceRec = cursor.fetchall()\n sentenceData.append(sentenceRec[0])", "_____no_output_____" ], [ "print(scoreData[0])", "(55833977, 55833976, 'brain', 1, 1, 949, 954, 1000, 'PREP', 942, 944, 'Mitogen Activated Protein Kinase', 1, 3, 887, 925, 876, datetime.datetime(2018, 3, 6, 12, 25, 6))\n" ], [ "colLabs = ('predication_id','sentence_id','index','pmid','subject_source','subject_cui','subject_name','subject_type','subject_score','predicate','object_source','object_cui','object_name','object_type','object_score','belief','sentence','source')\nscoreLabs = ('predication_aux_id','predication_id','subject_text','subject_dist','subject_maxdist','subject_start_index','subject_end_index','subject_score','indicator_type','predicate_start_index','predicate_end_index','object_text','object_dist','object_maxdist','object_start_index','object_end_index','object_score','curr_timestamp')\nsentenceLabs = ('sentence_id','pmid','type','number','section_header','sentence','normalized_section_header')\n\nctr = 0\nfor i in sampleIdxL:\n r = zip(colLabs,data[i])\n print('\\n\\nPREDICTION ' + str(i))\n print('\\n\\t'.join([str(x) for x in tuple(r)]))\n \n t = zip(sentenceLabs,sentenceData[ctr])\n print('\\nSENTENCE DATA:')\n print('\\n\\t'.join([str(x) for x in tuple(t)]))\n \n s = zip(scoreLabs,scoreData[ctr])\n print('\\nSCORE DATA:')\n print('\\n\\t'.join([str(x) for x in tuple(s)]))\n \n \n print('--------------------------------------------------------------------------------\\n\\n')\n ctr += 1", "\n\nPREDICTION 10583\n('predication_id', 55833976)\n\t('sentence_id', 96726013)\n\t('index', None)\n\t('pmid', '16271875')\n\t('subject_source', None)\n\t('subject_cui', 'C0006104')\n\t('subject_name', 'Brain')\n\t('subject_type', 'bpoc')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0752312')\n\t('object_name', 'Mitogen-Activated Protein Kinases')\n\t('object_type', 'aapp')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 96726013)\n\t('pmid', '16271875')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'Here, we identify lovastatin as a potent inhibitor of p21Ras/Mitogen Activated Protein Kinase (MAPK) activity in the 
brain.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 55833977)\n\t('predication_id', 55833976)\n\t('subject_text', 'brain')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 949)\n\t('subject_end_index', 954)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 942)\n\t('predicate_end_index', 944)\n\t('object_text', 'Mitogen Activated Protein Kinase')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 887)\n\t('object_end_index', 925)\n\t('object_score', 876)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 12, 25, 6))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7388\n('predication_id', 52670928)\n\t('sentence_id', 90873049)\n\t('index', None)\n\t('pmid', '15635069')\n\t('subject_source', None)\n\t('subject_cui', 'C0205448')\n\t('subject_name', 'Two')\n\t('subject_type', 'qnco')\n\t('subject_score', None)\n\t('predicate', 'ISA')\n\t('object_source', None)\n\t('object_cui', 'C0449851')\n\t('object_name', 'Techniques')\n\t('object_type', 'ftcn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 90873049)\n\t('pmid', '15635069')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', 'METHODS')\n\t('sentence', 'These two methods facilitate rapid analysis of translocation breakpoints and screening for cryptic chromosome imbalance.')\n\t('normalized_section_header', 'METHODS')\n\nSCORE DATA:\n('predication_aux_id', 52670929)\n\t('predication_id', 52670928)\n\t('subject_text', 'two')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 858)\n\t('subject_end_index', 861)\n\t('subject_score', 888)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 858)\n\t('predicate_end_index', 869)\n\t('object_text', 'methods')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 862)\n\t('object_end_index', 869)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 11, 33, 44))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8316\n('predication_id', 55404209)\n\t('sentence_id', 95931218)\n\t('index', None)\n\t('pmid', '16679933')\n\t('subject_source', None)\n\t('subject_cui', 'C0175704')\n\t('subject_name', 'LEOPARD Syndrome')\n\t('subject_type', 'dsyn')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 95931218)\n\t('pmid', '16679933')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', '')\n\t('sentence', 'A missense mutation (836-->G; Tyr279Cys) in exon 7 of PTPN11 gene was identified in this patient and his mother with LEOPARD syndrome.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 55404206)\n\t('predication_id', 55404209)\n\t('subject_text', 'LEOPARD syndrome')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 874)\n\t('subject_end_index', 890)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 869)\n\t('predicate_end_index', 873)\n\t('object_text', 'patient')\n\t('object_dist', 1)\n\t('object_maxdist', 6)\n\t('object_start_index', 840)\n\t('object_end_index', 
847)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 12, 17, 18))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5357\n('predication_id', 87759962)\n\t('sentence_id', 151920402)\n\t('index', None)\n\t('pmid', '24125472')\n\t('subject_source', None)\n\t('subject_cui', 'C0080222')\n\t('subject_name', 'TGFB1 protein, human|TGFB1')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'DISRUPTS')\n\t('object_source', None)\n\t('object_cui', 'C0031715')\n\t('object_name', 'Phosphorylation')\n\t('object_type', 'moft')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 151920402)\n\t('pmid', '24125472')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', 'KEY RESULTS')\n\t('sentence', 'Further molecular analysis revealed that AP impaired RANKL-induced NF-kappaB signalling by inhibiting the phosphorylation of TGF-beta-activated kinase 1, suppressing the phosphorylation and degradation of IkappaBalpha, and subsequently preventing the nuclear translocation of the NF-kappaB p65 subunit.')\n\t('normalized_section_header', 'RESULTS')\n\nSCORE DATA:\n('predication_aux_id', 87759966)\n\t('predication_id', 87759962)\n\t('subject_text', 'TGF-beta-activated kinase 1')\n\t('subject_dist', 0)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 1644)\n\t('subject_end_index', 1671)\n\t('subject_score', 868)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 1679)\n\t('predicate_end_index', 1690)\n\t('object_text', 'phosphorylation')\n\t('object_dist', 1)\n\t('object_maxdist', 5)\n\t('object_start_index', 1695)\n\t('object_end_index', 1710)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 11, 30, 31))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4493\n('predication_id', 63682523)\n\t('sentence_id', 111480401)\n\t('index', None)\n\t('pmid', '18413802')\n\t('subject_source', None)\n\t('subject_cui', 'C0174680')\n\t('subject_name', 'Cyclin D1|CCND1')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'DISRUPTS')\n\t('object_source', None)\n\t('object_cui', 'C1150604')\n\t('object_name', 'MAP kinase kinase activity')\n\t('object_type', 'moft')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 111480401)\n\t('pmid', '18413802')\n\t('type', 'ab')\n\t('number', 7)\n\t('section_header', '')\n\t('sentence', 'This appeared to be due to inhibition of phospho-MEK, phospho-ERK, suppression of cyclin D1, and hypophosphorylation of pRb at the CDK4-specific sites, resulting in a G(1) cell cycle arrest.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 63682523)\n\t('predication_id', 63682523)\n\t('subject_text', 'cyclin D1')\n\t('subject_dist', 4)\n\t('subject_maxdist', 9)\n\t('subject_start_index', 1288)\n\t('subject_end_index', 1297)\n\t('subject_score', 1000)\n\t('indicator_type', 'NOM')\n\t('predicate_start_index', 1227)\n\t('predicate_end_index', 1237)\n\t('object_text', 'MEK')\n\t('object_dist', 1)\n\t('object_maxdist', 9)\n\t('object_start_index', 1249)\n\t('object_end_index', 1252)\n\t('object_score', 861)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 15, 13, 51))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3363\n('predication_id', 
107214196)\n\t('sentence_id', 199271400)\n\t('index', None)\n\t('pmid', '28358377')\n\t('subject_source', None)\n\t('subject_cui', 'C0031678')\n\t('subject_name', 'Phosphoric Monoester Hydrolases')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'CAUSES')\n\t('object_source', None)\n\t('object_cui', 'C1395184')\n\t('object_name', 'depolarization')\n\t('object_type', 'patf')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 199271400)\n\t('pmid', '28358377')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', '')\n\t('sentence', 'Knockdown of phosphatase and tensin homolog-induced putative kinase 1 (PINK1) inhibited the BAY-induced Deltapsi depolarization, mitophagy stimulation, ROS increase and cell death.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 107214197)\n\t('predication_id', 107214196)\n\t('subject_text', 'phosphatase')\n\t('subject_dist', 1)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 1515)\n\t('subject_end_index', 1526)\n\t('subject_score', 1000)\n\t('indicator_type', 'ADJ')\n\t('predicate_start_index', 1546)\n\t('predicate_end_index', 1553)\n\t('object_text', 'depolarization')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 1621)\n\t('object_end_index', 1635)\n\t('object_score', 763)\n\t('curr_timestamp', datetime.datetime(2018, 3, 10, 11, 9, 17))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5436\n('predication_id', 106591311)\n\t('sentence_id', 197931368)\n\t('index', None)\n\t('pmid', '28566769')\n\t('subject_source', None)\n\t('subject_cui', '29123')\n\t('subject_name', 'ANKRD11')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0220687')\n\t('object_name', 'KBG syndrome')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 197931368)\n\t('pmid', '28566769')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'A commentary on ANKRD11 variants cause variable clinical features associated with KBG syndrome and Coffin-Siris-like syndrome.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 106591309)\n\t('predication_id', 106591311)\n\t('subject_text', 'ANKRD11')\n\t('subject_dist', 2)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 185)\n\t('subject_end_index', 192)\n\t('subject_score', 827)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 235)\n\t('predicate_end_index', 245)\n\t('object_text', 'KBG syndrome')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 257)\n\t('object_end_index', 269)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 10, 11, 3, 20))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3722\n('predication_id', 92575887)\n\t('sentence_id', 131972907)\n\t('index', None)\n\t('pmid', '25317411')\n\t('subject_source', None)\n\t('subject_cui', 'C0547070')\n\t('subject_name', 'Ablation')\n\t('subject_type', 'topp')\n\t('subject_score', None)\n\t('predicate', 'USES')\n\t('object_source', None)\n\t('object_cui', 'C1441129')\n\t('object_name', 'Radioactive Iodine')\n\t('object_type', 'phsu')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', 
None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 131972907)\n\t('pmid', '25317411')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', 'CONCLUSION')\n\t('sentence', 'CONCLUSION: This study found that BRAF mutation is associated with classic PTC and central lymph node metastasis and higher necessity of radioactive iodine ablation.')\n\t('normalized_section_header', 'CONCLUSIONS')\n\nSCORE DATA:\n('predication_aux_id', 92575887)\n\t('predication_id', 92575887)\n\t('subject_text', 'ablation')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 1782)\n\t('subject_end_index', 1790)\n\t('subject_score', 901)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 1763)\n\t('predicate_end_index', 1790)\n\t('object_text', 'radioactive iodine')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 1763)\n\t('object_end_index', 1781)\n\t('object_score', 901)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 2, 6, 28))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8159\n('predication_id', 44352941)\n\t('sentence_id', 76060131)\n\t('index', None)\n\t('pmid', '10706287')\n\t('subject_source', None)\n\t('subject_cui', '820')\n\t('subject_name', 'CAMP')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'INTERACTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0001492')\n\t('object_name', 'Adenylate Cyclase')\n\t('object_type', 'gngm')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 76060131)\n\t('pmid', '10706287')\n\t('type', 'ab')\n\t('number', 5)\n\t('section_header', '')\n\t('sentence', 'Previous studies have suggested that Drosophila NF1 acts not only as a Ras-GAP but also as a possible regulator of the cAMP pathway that involves the rutabaga (rut)-encoded adenylyl cyclase.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 44352930)\n\t('predication_id', 44352941)\n\t('subject_text', 'cAMP')\n\t('subject_dist', 1)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 893)\n\t('subject_end_index', 897)\n\t('subject_score', 861)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 911)\n\t('predicate_end_index', 919)\n\t('object_text', 'adenylyl cyclase')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 953)\n\t('object_end_index', 969)\n\t('object_score', 824)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 1, 2, 2))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 6778\n('predication_id', 62719272)\n\t('sentence_id', 109645836)\n\t('index', None)\n\t('pmid', '18446851')\n\t('subject_source', None)\n\t('subject_cui', '5921')\n\t('subject_name', 'RASA1')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'CAUSES')\n\t('object_source', None)\n\t('object_cui', 'C0158570')\n\t('object_name', 'Vascular anomaly')\n\t('object_type', 'cgab')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 109645836)\n\t('pmid', '18446851')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Parkes Weber syndrome, vein of Galen aneurysmal malformation, and other fast-flow vascular anomalies are caused by RASA1 mutations.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 62719273)\n\t('predication_id', 
62719272)\n\t('subject_text', 'RASA1')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 271)\n\t('subject_end_index', 276)\n\t('subject_score', 861)\n\t('indicator_type', 'INFER')\n\t('predicate_start_index', 198)\n\t('predicate_end_index', 256)\n\t('object_text', 'vascular anomalies')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 238)\n\t('object_end_index', 256)\n\t('object_score', 852)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 48, 38))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 117\n('predication_id', 93855270)\n\t('sentence_id', 141486329)\n\t('index', None)\n\t('pmid', '25049390')\n\t('subject_source', None)\n\t('subject_cui', '6016')\n\t('subject_name', 'RIT1')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0007634')\n\t('object_name', 'Cells')\n\t('object_type', 'cell')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 141486329)\n\t('pmid', '25049390')\n\t('type', 'ab')\n\t('number', 5)\n\t('section_header', '')\n\t('sentence', 'Expression of the mutant RASA2, MAP2K1, or RIT1 alleles in heterologous cells increased RAS-ERK pathway activation, supporting a causative role in NS pathogenesis.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 93855279)\n\t('predication_id', 93855270)\n\t('subject_text', 'RIT1')\n\t('subject_dist', 3)\n\t('subject_maxdist', 7)\n\t('subject_start_index', 1111)\n\t('subject_end_index', 1115)\n\t('subject_score', 861)\n\t('indicator_type', 'NOM')\n\t('predicate_start_index', 1061)\n\t('predicate_end_index', 1071)\n\t('object_text', 'cells')\n\t('object_dist', 4)\n\t('object_maxdist', 7)\n\t('object_start_index', 1140)\n\t('object_end_index', 1145)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 4, 9, 25))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5528\n('predication_id', 103519171)\n\t('sentence_id', 191346215)\n\t('index', None)\n\t('pmid', '27760236')\n\t('subject_source', None)\n\t('subject_cui', 'C1263846')\n\t('subject_name', 'Attention deficit hyperactivity disorder')\n\t('subject_type', 'mobd')\n\t('subject_score', None)\n\t('predicate', 'ISA')\n\t('object_source', None)\n\t('object_cui', 'C1457887')\n\t('object_name', 'Symptoms')\n\t('object_type', 'sosy')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 191346215)\n\t('pmid', '27760236')\n\t('type', 'ab')\n\t('number', 10)\n\t('section_header', '')\n\t('sentence', 'Autistic symptoms in this NF1 cohort demonstrated a robust unitary factor structure, with the first principal component explaining 30.9% of the variance in SRS-2 scores, and a strong association with ADHD symptoms (r = 0.61).')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 103519170)\n\t('predication_id', 103519171)\n\t('subject_text', 'ADHD')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 2705)\n\t('subject_end_index', 2709)\n\t('subject_score', 928)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 2705)\n\t('predicate_end_index', 2718)\n\t('object_text', 'symptoms')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 2710)\n\t('object_end_index', 
2718)\n\t('object_score', 928)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 14, 37, 36))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7591\n('predication_id', 79369385)\n\t('sentence_id', 142168807)\n\t('index', None)\n\t('pmid', '22772368')\n\t('subject_source', None)\n\t('subject_cui', 'C0040690')\n\t('subject_name', 'Transforming Growth Factor beta')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'COEXISTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0017337')\n\t('object_name', 'Genes')\n\t('object_type', 'gngm')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 142168807)\n\t('pmid', '22772368')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Loeys-Dietz syndrome (LDS) associates with a tissue signature for high transforming growth factor (TGF)-beta signaling but is often caused by heterozygous mutations in genes encoding positive effectors of TGF-beta signaling, including either subunit of the TGF-beta receptor or SMAD3, thereby engendering controversy regarding the mechanism of disease.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 79369385)\n\t('predication_id', 79369385)\n\t('subject_text', 'transforming growth factor (TGF)-beta')\n\t('subject_dist', 2)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 380)\n\t('subject_end_index', 417)\n\t('subject_score', 888)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 480)\n\t('predicate_end_index', 482)\n\t('object_text', 'genes')\n\t('object_dist', 1)\n\t('object_maxdist', 9)\n\t('object_start_index', 483)\n\t('object_end_index', 488)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 7, 1, 9, 39))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2748\n('predication_id', 64508365)\n\t('sentence_id', 113068164)\n\t('index', None)\n\t('pmid', '18954903')\n\t('subject_source', None)\n\t('subject_cui', 'C0006826')\n\t('subject_name', 'Malignant Neoplasms')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0008059')\n\t('object_name', 'Child')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 113068164)\n\t('pmid', '18954903')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'JMML incidence approaches 1.2/million persons in the United States (Cancer Incidence and Survival Among Children and Adolescents: United States SEER Program 1975-1995).')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 64508362)\n\t('predication_id', 64508365)\n\t('subject_text', 'Cancer')\n\t('subject_dist', 2)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 550)\n\t('subject_end_index', 556)\n\t('subject_score', 888)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 586)\n\t('predicate_end_index', 591)\n\t('object_text', 'Children')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 592)\n\t('object_end_index', 600)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 15, 38, 32))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 6510\n('predication_id', 
62868324)\n\t('sentence_id', 109929034)\n\t('index', None)\n\t('pmid', '20535210')\n\t('subject_source', None)\n\t('subject_cui', 'C0175704')\n\t('subject_name', 'LEOPARD Syndrome')\n\t('subject_type', 'dsyn')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 109929034)\n\t('pmid', '20535210')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'We have generated iPSCs from patients with LEOPARD syndrome (an acronym formed from its main features; that is, lentigines, electrocardiographic abnormalities, ocular hypertelorism, pulmonary valve stenosis, abnormal genitalia, retardation of growth and deafness), an autosomal-dominant developmental disorder belonging to a relatively prevalent class of inherited RAS-mitogen-activated protein kinase signalling diseases, which also includes Noonan syndrome, with pleomorphic effects on several tissues and organ systems.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 62868336)\n\t('predication_id', 62868324)\n\t('subject_text', 'LEOPARD syndrome')\n\t('subject_dist', 1)\n\t('subject_maxdist', 18)\n\t('subject_start_index', 639)\n\t('subject_end_index', 655)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 634)\n\t('predicate_end_index', 638)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 625)\n\t('object_end_index', 633)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 51, 57))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 6759\n('predication_id', 62663472)\n\t('sentence_id', 109536852)\n\t('index', None)\n\t('pmid', '20812000')\n\t('subject_source', None)\n\t('subject_cui', 'C1334474')\n\t('subject_name', 'MAP2K1 gene|MAP2K1')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'CAUSES')\n\t('object_source', None)\n\t('object_cui', 'C1275081')\n\t('object_name', 'Cardio-facio-cutaneous syndrome')\n\t('object_type', 'cgab')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 109536852)\n\t('pmid', '20812000')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'Here, we describe the laboratory protocols and methods that we used to identify mutations in BRAF and MEK1/2 genes as causative for CFC syndrome.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 62663469)\n\t('predication_id', 62663472)\n\t('subject_text', 'MEK1/2 genes')\n\t('subject_dist', 1)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 1026)\n\t('subject_end_index', 1038)\n\t('subject_score', 896)\n\t('indicator_type', 'ADJ')\n\t('predicate_start_index', 1042)\n\t('predicate_end_index', 1051)\n\t('object_text', 'CFC syndrome')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 1056)\n\t('object_end_index', 1068)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 47, 20))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8640\n('predication_id', 51910402)\n\t('sentence_id', 89474885)\n\t('index', None)\n\t('pmid', 
'15897885')\n\t('subject_source', None)\n\t('subject_cui', 'C1333543')\n\t('subject_name', 'FGFR3 gene|FGFR3')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0027651')\n\t('object_name', 'Neoplasm')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 89474885)\n\t('pmid', '15897885')\n\t('type', 'ab')\n\t('number', 6)\n\t('section_header', '')\n\t('sentence', 'FGFR3 mutations were present in 54 tumours (55%) and three cell lines (10%), and Ras gene mutations in 13 tumours (13%) and four cell lines (13%).')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 51910403)\n\t('predication_id', 51910402)\n\t('subject_text', 'FGFR3')\n\t('subject_dist', 2)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 928)\n\t('subject_end_index', 933)\n\t('subject_score', 888)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 963)\n\t('predicate_end_index', 965)\n\t('object_text', 'tumours')\n\t('object_dist', 1)\n\t('object_maxdist', 5)\n\t('object_start_index', 969)\n\t('object_end_index', 976)\n\t('object_score', 861)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 11, 18, 12))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4830\n('predication_id', 47483234)\n\t('sentence_id', 81398449)\n\t('index', None)\n\t('pmid', '15928039')\n\t('subject_source', None)\n\t('subject_cui', '4597')\n\t('subject_name', 'MVD')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 81398449)\n\t('pmid', '15928039')\n\t('type', 'ab')\n\t('number', 7)\n\t('section_header', '')\n\t('sentence', 'Eight of 19 patients with NS/MPD carried the Thr73Ile substitution.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 47483245)\n\t('predication_id', 47483234)\n\t('subject_text', 'MPD')\n\t('subject_dist', 1)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 1164)\n\t('subject_end_index', 1167)\n\t('subject_score', 734)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1150)\n\t('predicate_end_index', 1154)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 1141)\n\t('object_end_index', 1149)\n\t('object_score', 861)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 10, 9, 2))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 10457\n('predication_id', 50002221)\n\t('sentence_id', 85996938)\n\t('index', None)\n\t('pmid', '14500346')\n\t('subject_source', None)\n\t('subject_cui', 'C0026882')\n\t('subject_name', 'Mutation')\n\t('subject_type', 'genf')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0206530')\n\t('object_name', 'Germ-Line Mutation')\n\t('object_type', 'genf')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 85996938)\n\t('pmid', '14500346')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', '')\n\t('sentence', 'Only 1 of the 15 cases with a germ-line mutation 
in hMLH1 had a mutation in BRAF.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 50002221)\n\t('predication_id', 50002221)\n\t('subject_text', 'mutation')\n\t('subject_dist', 1)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 1476)\n\t('subject_end_index', 1484)\n\t('subject_score', 1000)\n\t('indicator_type', 'AUX')\n\t('predicate_start_index', 1470)\n\t('predicate_end_index', 1473)\n\t('object_text', 'germ-line mutation')\n\t('object_dist', 2)\n\t('object_maxdist', 3)\n\t('object_start_index', 1442)\n\t('object_end_index', 1460)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 10, 48, 2))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4026\n('predication_id', 85838912)\n\t('sentence_id', 155089673)\n\t('index', None)\n\t('pmid', '22583402')\n\t('subject_source', None)\n\t('subject_cui', 'C0042440')\n\t('subject_name', 'Vegetables')\n\t('subject_type', 'food')\n\t('subject_score', None)\n\t('predicate', 'PREDISPOSES')\n\t('object_source', None)\n\t('object_cui', 'C0006826')\n\t('object_name', 'Malignant Neoplasms')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 155089673)\n\t('pmid', '22583402')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', '')\n\t('sentence', 'Epidemiological studies have also revealed that high dietary intakes of fruits and vegetables reduce the risk of cancer.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 85838912)\n\t('predication_id', 85838912)\n\t('subject_text', 'vegetables')\n\t('subject_dist', 1)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 1239)\n\t('subject_end_index', 1249)\n\t('subject_score', 1000)\n\t('indicator_type', 'NOM')\n\t('predicate_start_index', 1261)\n\t('predicate_end_index', 1265)\n\t('object_text', 'cancer')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 1269)\n\t('object_end_index', 1275)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 7, 11, 0, 6))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7340\n('predication_id', 66075404)\n\t('sentence_id', 116088399)\n\t('index', None)\n\t('pmid', '21094706')\n\t('subject_source', None)\n\t('subject_cui', 'C0039082')\n\t('subject_name', 'Syndrome')\n\t('subject_type', 'dsyn')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 116088399)\n\t('pmid', '21094706')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Clinical and molecular characterization of 17q21.31 microdeletion syndrome in 14 French patients with mental retardation.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 66075410)\n\t('predication_id', 66075404)\n\t('subject_text', 'syndrome')\n\t('subject_dist', 1)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 231)\n\t('subject_end_index', 239)\n\t('subject_score', 791)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 240)\n\t('predicate_end_index', 242)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 
259)\n\t('object_end_index', 267)\n\t('object_score', 827)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 16, 16, 58))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3714\n('predication_id', 92575328)\n\t('sentence_id', 131971678)\n\t('index', None)\n\t('pmid', '25317411')\n\t('subject_source', None)\n\t('subject_cui', 'C0812241')\n\t('subject_name', 'BRAF gene|BRAF|SNRPE')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0238463')\n\t('object_name', 'Papillary thyroid carcinoma')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 131971678)\n\t('pmid', '25317411')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'BRAF mutation may predict higher necessity of postoperative radioactive iodine ablation in papillary thyroid cancer.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 92575328)\n\t('predication_id', 92575328)\n\t('subject_text', 'BRAF')\n\t('subject_dist', 3)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 183)\n\t('subject_end_index', 187)\n\t('subject_score', 888)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 277)\n\t('predicate_end_index', 279)\n\t('object_text', 'papillary thyroid cancer')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 280)\n\t('object_end_index', 304)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 2, 6, 26))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1411\n('predication_id', 110369597)\n\t('sentence_id', 206445044)\n\t('index', None)\n\t('pmid', '29942251')\n\t('subject_source', None)\n\t('subject_cui', 'C0887870')\n\t('subject_name', 'Focal Adhesions')\n\t('subject_type', 'celc')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0673232')\n\t('object_name', 'protein kinase B gamma|AKT3')\n\t('object_type', 'aapp')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 206445044)\n\t('pmid', '29942251')\n\t('type', 'ab')\n\t('number', 9)\n\t('section_header', '')\n\t('sentence', 'Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 110369597)\n\t('predication_id', 110369597)\n\t('subject_text', 'focal adhesion')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 1797)\n\t('subject_end_index', 1811)\n\t('subject_score', 901)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1790)\n\t('predicate_end_index', 1792)\n\t('object_text', 'AKT3')\n\t('object_dist', 1)\n\t('object_maxdist', 17)\n\t('object_start_index', 1706)\n\t('object_end_index', 1710)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 8, 3, 13, 6, 22))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2002\n('predication_id', 60507441)\n\t('sentence_id', 105377225)\n\t('index', 
None)\n\t('pmid', '19509418')\n\t('subject_source', None)\n\t('subject_cui', 'C0000768')\n\t('subject_name', 'Congenital Abnormality')\n\t('subject_type', 'cgab')\n\t('subject_score', None)\n\t('predicate', 'COEXISTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0242354')\n\t('object_name', 'Congenital Disorders')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 105377225)\n\t('pmid', '19509418')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Noonan syndrome (NS) is an autosomal dominant congenital disorder characterized by multiple birth defects including heart defects and myeloproliferative disease (MPD).')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 60507443)\n\t('predication_id', 60507441)\n\t('subject_text', 'birth defects')\n\t('subject_dist', 1)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 474)\n\t('subject_end_index', 487)\n\t('subject_score', 901)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 442)\n\t('predicate_end_index', 455)\n\t('object_text', 'congenital disorder')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 422)\n\t('object_end_index', 441)\n\t('object_score', 843)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 1, 35))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7568\n('predication_id', 90494350)\n\t('sentence_id', 143201016)\n\t('index', None)\n\t('pmid', '23852183')\n\t('subject_source', None)\n\t('subject_cui', 'C0042974')\n\t('subject_name', 'von Willebrand Disease')\n\t('subject_type', 'dsyn')\n\t('subject_score', None)\n\t('predicate', 'ISA')\n\t('object_source', None)\n\t('object_cui', 'C1458140')\n\t('object_name', 'Bleeding tendency')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 143201016)\n\t('pmid', '23852183')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', 'PURPOSE OF REVIEW')\n\t('sentence', 'PURPOSE OF REVIEW: Von Willebrand disease (VWD) is an autosomally inherited bleeding disorder caused by a deficiency or abnormality of von Willebrand factor (VWF).')\n\t('normalized_section_header', 'OBJECTIVE')\n\nSCORE DATA:\n('predication_aux_id', 90494352)\n\t('predication_id', 90494350)\n\t('subject_text', 'Von Willebrand disease')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 283)\n\t('subject_end_index', 305)\n\t('subject_score', 1000)\n\t('indicator_type', 'SPEC')\n\t('predicate_start_index', 283)\n\t('predicate_end_index', 363)\n\t('object_text', 'bleeding disorder')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 346)\n\t('object_end_index', 363)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 18, 5, 30))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 9490\n('predication_id', 63385213)\n\t('sentence_id', 110908025)\n\t('index', None)\n\t('pmid', '21495179')\n\t('subject_source', None)\n\t('subject_cui', 'C0003477')\n\t('subject_name', 'Anxiety, Separation')\n\t('subject_type', 'dsyn')\n\t('subject_score', None)\n\t('predicate', 'COEXISTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0587248')\n\t('object_name', 'Costello syndrome')\n\t('object_type', 'dsyn')\n\t('object_score', 
None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 110908025)\n\t('pmid', '21495179')\n\t('type', 'ab')\n\t('number', 9)\n\t('section_header', '')\n\t('sentence', 'Further, separation anxiety is more common in Costello syndrome than in the general population, affecting 39% of this cohort, and males are more often overly anxious than females.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 63385214)\n\t('predication_id', 63385213)\n\t('subject_text', 'separation anxiety')\n\t('subject_dist', 2)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 1606)\n\t('subject_end_index', 1624)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1640)\n\t('predicate_end_index', 1642)\n\t('object_text', 'Costello syndrome')\n\t('object_dist', 1)\n\t('object_maxdist', 6)\n\t('object_start_index', 1643)\n\t('object_end_index', 1660)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 15, 5, 11))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4468\n('predication_id', 72143451)\n\t('sentence_id', 127878549)\n\t('index', None)\n\t('pmid', '18472967')\n\t('subject_source', None)\n\t('subject_cui', 'C0164786')\n\t('subject_name', 'Proto-Oncogene Proteins c-akt|AKT1')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'CAUSES')\n\t('object_source', None)\n\t('object_cui', 'C0017638')\n\t('object_name', 'Glioma')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 127878549)\n\t('pmid', '18472967')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', '')\n\t('sentence', 'Specifically, we find that activated Raf-1 cooperates with Arf loss or Akt activation to generate gliomas similar to activated KRas under the same conditions.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 72143432)\n\t('predication_id', 72143451)\n\t('subject_text', 'Akt')\n\t('subject_dist', 2)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 1325)\n\t('subject_end_index', 1328)\n\t('subject_score', 694)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 1349)\n\t('predicate_end_index', 1357)\n\t('object_text', 'gliomas')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 1358)\n\t('object_end_index', 1365)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 18, 45, 26))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2987\n('predication_id', 88979611)\n\t('sentence_id', 140391389)\n\t('index', None)\n\t('pmid', '23933734')\n\t('subject_source', None)\n\t('subject_cui', 'C0018787')\n\t('subject_name', 'Heart')\n\t('subject_type', 'bpoc')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0752312')\n\t('object_name', 'Mitogen-Activated Protein Kinases')\n\t('object_type', 'aapp')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 140391389)\n\t('pmid', '23933734')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'These mutations enhance mitogen-activated protein kinase signaling in the heart and pharmacological inhibition of extracellular signal-regulated kinase (ERK) 1 and 2 improves cardiac 
function in Lmna(H222P/H222P) mice.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 88979610)\n\t('predication_id', 88979611)\n\t('subject_text', 'heart')\n\t('subject_dist', 1)\n\t('subject_maxdist', 6)\n\t('subject_start_index', 633)\n\t('subject_end_index', 638)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 626)\n\t('predicate_end_index', 628)\n\t('object_text', 'mitogen-activated protein kinase')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 583)\n\t('object_end_index', 615)\n\t('object_score', 928)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 14, 15, 18))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4287\n('predication_id', 71756893)\n\t('sentence_id', 127123228)\n\t('index', None)\n\t('pmid', '20141835')\n\t('subject_source', None)\n\t('subject_cui', 'C0812241')\n\t('subject_name', 'BRAF gene|BRAF|SNRPE')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'CAUSES')\n\t('object_source', None)\n\t('object_cui', 'C0025202')\n\t('object_name', 'melanoma')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 127123228)\n\t('pmid', '20141835')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'Kinase-dead BRAF mimics the effects of the BRAF-selective drugs and kinase-dead Braf and oncogenic Ras cooperate to induce melanoma in mice.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 71756881)\n\t('predication_id', 71756893)\n\t('subject_text', 'BRAF')\n\t('subject_dist', 2)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 784)\n\t('subject_end_index', 788)\n\t('subject_score', 851)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 900)\n\t('predicate_end_index', 906)\n\t('object_text', 'melanoma')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 907)\n\t('object_end_index', 915)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 18, 35, 41))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8921\n('predication_id', 41526711)\n\t('sentence_id', 71114952)\n\t('index', None)\n\t('pmid', '12151887')\n\t('subject_source', None)\n\t('subject_cui', 'C0036439')\n\t('subject_name', 'Scoliosis, unspecified')\n\t('subject_type', 'anab')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0008059')\n\t('object_name', 'Child')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 71114952)\n\t('pmid', '12151887')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'Approximately 20% of children with Type I neurofibromatosis present with scoliosis with or without the classic dystrophic features, such as vertebral scalloping and rib penciling.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 41526710)\n\t('predication_id', 41526711)\n\t('subject_text', 'scoliosis')\n\t('subject_dist', 1)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 736)\n\t('subject_end_index', 745)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 725)\n\t('predicate_end_index', 729)\n\t('object_text', 
'children')\n\t('object_dist', 3)\n\t('object_maxdist', 3)\n\t('object_start_index', 678)\n\t('object_end_index', 686)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 0, 33, 41))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 160\n('predication_id', 67218170)\n\t('sentence_id', 118305359)\n\t('index', None)\n\t('pmid', '20882035')\n\t('subject_source', None)\n\t('subject_cui', 'C0010417')\n\t('subject_name', 'Cryptorchidism')\n\t('subject_type', 'cgab')\n\t('subject_score', None)\n\t('predicate', 'COEXISTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0012634')\n\t('object_name', 'Disease')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 118305359)\n\t('pmid', '20882035')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Noonan syndrome is an autosomal dominant disease characterized by dysmorphic features, webbed neck, cardiac anomalies, short stature and cryptorchidism.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 67218171)\n\t('predication_id', 67218170)\n\t('subject_text', 'cryptorchidism')\n\t('subject_dist', 1)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 440)\n\t('subject_end_index', 454)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 346)\n\t('predicate_end_index', 359)\n\t('object_text', 'disease')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 338)\n\t('object_end_index', 345)\n\t('object_score', 828)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 16, 44, 31))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 9895\n('predication_id', 75063907)\n\t('sentence_id', 133599117)\n\t('index', None)\n\t('pmid', '23089489')\n\t('subject_source', None)\n\t('subject_cui', 'C1511790')\n\t('subject_name', 'Detection')\n\t('subject_type', 'topp')\n\t('subject_score', None)\n\t('predicate', 'TREATS')\n\t('object_source', None)\n\t('object_cui', 'C0919267')\n\t('object_name', 'ovarian neoplasm')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 133599117)\n\t('pmid', '23089489')\n\t('type', 'ab')\n\t('number', 11)\n\t('section_header', '')\n\t('sentence', 'Immunohistochemistry with the VE1 antibody is a specific and sensitive tool for detection of the BRAF V600E mutation in serous ovarian tumors and may provide a practical screening test, especially in tumor samples with low epithelial content.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 75063908)\n\t('predication_id', 75063907)\n\t('subject_text', 'detection')\n\t('subject_dist', 2)\n\t('subject_maxdist', 6)\n\t('subject_start_index', 2146)\n\t('subject_end_index', 2155)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 2183)\n\t('predicate_end_index', 2185)\n\t('object_text', 'ovarian tumors')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 2199)\n\t('object_end_index', 2213)\n\t('object_score', 901)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 23, 15, 42))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3979\n('predication_id', 78424205)\n\t('sentence_id', 140277975)\n\t('index', 
None)\n\t('pmid', '22875039')\n\t('subject_source', None)\n\t('subject_cui', 'C0746922')\n\t('subject_name', 'NODE')\n\t('subject_type', 'bpoc')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0007090')\n\t('object_name', 'Carcinogens')\n\t('object_type', 'hops')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 140277975)\n\t('pmid', '22875039')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Oncogenic mutations in critical nodes of cellular signaling pathways have been associated with tumorigenesis and progression.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 78424202)\n\t('predication_id', 78424205)\n\t('subject_text', 'nodes')\n\t('subject_dist', 1)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 370)\n\t('subject_end_index', 375)\n\t('subject_score', 872)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 358)\n\t('predicate_end_index', 360)\n\t('object_text', 'Oncogenic')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 338)\n\t('object_end_index', 347)\n\t('object_score', 853)\n\t('curr_timestamp', datetime.datetime(2018, 3, 7, 0, 46, 45))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2385\n('predication_id', 112300727)\n\t('sentence_id', 210669973)\n\t('index', None)\n\t('pmid', '29410316')\n\t('subject_source', None)\n\t('subject_cui', 'C0031727')\n\t('subject_name', 'Phosphotransferases')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'AFFECTS')\n\t('object_source', None)\n\t('object_cui', 'C0037083')\n\t('object_name', 'Signal Transduction')\n\t('object_type', 'celf')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 210669973)\n\t('pmid', '29410316')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Tyrosine kinases are enzymes playing a critical role in cellular signaling.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 112300727)\n\t('predication_id', 112300727)\n\t('subject_text', 'kinases')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 379)\n\t('subject_end_index', 386)\n\t('subject_score', 888)\n\t('indicator_type', 'INFER')\n\t('predicate_start_index', 379)\n\t('predicate_end_index', 398)\n\t('object_text', 'cellular signaling')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 426)\n\t('object_end_index', 444)\n\t('object_score', 964)\n\t('curr_timestamp', datetime.datetime(2018, 8, 4, 12, 4, 41))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7212\n('predication_id', 41700813)\n\t('sentence_id', 71419277)\n\t('index', None)\n\t('pmid', '12717436')\n\t('subject_source', None)\n\t('subject_cui', 'C1335280')\n\t('subject_name', 'PTPN11 gene|PTPN11')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0023467')\n\t('object_name', 'Leukemia, Myelocytic, Acute')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 71419277)\n\t('pmid', '12717436')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', 
'')\n\t('sentence', 'Somatic mutations in PTPN11 in juvenile myelomonocytic leukemia, myelodysplastic syndromes and acute myeloid leukemia.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 41700812)\n\t('predication_id', 41700813)\n\t('subject_text', 'PTPN11')\n\t('subject_dist', 1)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 171)\n\t('subject_end_index', 177)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 178)\n\t('predicate_end_index', 180)\n\t('object_text', 'acute myeloid leukemia')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 251)\n\t('object_end_index', 273)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 0, 35, 26))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2022\n('predication_id', 99721024)\n\t('sentence_id', 183312140)\n\t('index', None)\n\t('pmid', '26178382')\n\t('subject_source', None)\n\t('subject_cui', 'C0599155')\n\t('subject_name', 'Mutation, Missense')\n\t('subject_type', 'genf')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0599755')\n\t('object_name', 'cohort')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 183312140)\n\t('pmid', '26178382')\n\t('type', 'ab')\n\t('number', 9)\n\t('section_header', '')\n\t('sentence', 'Constitutional missense mutations at p.Arg1809 affect 1.23% of unrelated NF1 probands in the UAB cohort, therefore this specific NF1 genotype-phenotype correlation will affect counseling and management of a significant number of patients.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 99721024)\n\t('predication_id', 99721024)\n\t('subject_text', 'missense mutations')\n\t('subject_dist', 3)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 1677)\n\t('subject_end_index', 1695)\n\t('subject_score', 901)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1748)\n\t('predicate_end_index', 1750)\n\t('object_text', 'cohort')\n\t('object_dist', 1)\n\t('object_maxdist', 6)\n\t('object_start_index', 1765)\n\t('object_end_index', 1771)\n\t('object_score', 861)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 11, 40, 45))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8702\n('predication_id', 22354260)\n\t('sentence_id', 39255843)\n\t('index', None)\n\t('pmid', '9202609')\n\t('subject_source', None)\n\t('subject_cui', 'C0027022')\n\t('subject_name', 'Myeloproliferative disease')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'OCCURS_IN')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'podg')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 39255843)\n\t('pmid', '9202609')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Occurrence of myeloproliferative disorder in patients with Noonan syndrome.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 22354258)\n\t('predication_id', 22354260)\n\t('subject_text', 'myeloproliferative disorder')\n\t('subject_dist', 1)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 164)\n\t('subject_end_index', 191)\n\t('subject_score', 
1000)\n\t('indicator_type', 'NOM')\n\t('predicate_start_index', 150)\n\t('predicate_end_index', 160)\n\t('object_text', 'patients')\n\t('object_dist', 2)\n\t('object_maxdist', 3)\n\t('object_start_index', 195)\n\t('object_end_index', 203)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 5, 20, 57, 54))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1818\n('predication_id', 98425458)\n\t('sentence_id', 156938885)\n\t('index', None)\n\t('pmid', '26458870')\n\t('subject_source', None)\n\t('subject_cui', 'C0027022')\n\t('subject_name', 'Myeloproliferative disease')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 156938885)\n\t('pmid', '26458870')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'A significant fraction of NS-patients also develop myeloproliferative disorders.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 98425450)\n\t('predication_id', 98425458)\n\t('subject_text', 'myeloproliferative disorders')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 535)\n\t('subject_end_index', 563)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 527)\n\t('predicate_end_index', 534)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 513)\n\t('object_end_index', 521)\n\t('object_score', 901)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 11, 11, 23))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2179\n('predication_id', 69959663)\n\t('sentence_id', 123602731)\n\t('index', None)\n\t('pmid', '21263000')\n\t('subject_source', None)\n\t('subject_cui', 'C0028326')\n\t('subject_name', 'Noonan Syndrome')\n\t('subject_type', 'cgab')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0237401')\n\t('object_name', 'Individual')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 123602731)\n\t('pmid', '21263000')\n\t('type', 'ab')\n\t('number', 9)\n\t('section_header', '')\n\t('sentence', 'In conclusion, mutations in NRAS from individuals with Noonan syndrome activated N-Ras signaling and induced developmental defects in zebrafish embryos, indicating that activating mutations in NRAS cause Noonan syndrome.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 69959665)\n\t('predication_id', 69959663)\n\t('subject_text', 'Noonan syndrome')\n\t('subject_dist', 1)\n\t('subject_maxdist', 6)\n\t('subject_start_index', 1690)\n\t('subject_end_index', 1705)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1685)\n\t('predicate_end_index', 1689)\n\t('object_text', 'individuals')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 1673)\n\t('object_end_index', 1684)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 17, 51, 7))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3823\n('predication_id', 
89613454)\n\t('sentence_id', 143323156)\n\t('index', None)\n\t('pmid', '24152792')\n\t('subject_source', None)\n\t('subject_cui', 'C0029005')\n\t('subject_name', 'Oncogene Proteins')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0280100')\n\t('object_name', 'Solid tumor')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 143323156)\n\t('pmid', '24152792')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'Although BRAF(V600E) is a driver oncoprotein and pharmacologic target in solid tumors such as melanoma, lung, and thyroid cancer, it remains unknown whether BRAF(V600E) is an appropriate therapeutic target in hematopoietic neoplasms.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 89613453)\n\t('predication_id', 89613454)\n\t('subject_text', 'oncoprotein')\n\t('subject_dist', 2)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 628)\n\t('subject_end_index', 639)\n\t('subject_score', 888)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 665)\n\t('predicate_end_index', 667)\n\t('object_text', 'solid tumors')\n\t('object_dist', 1)\n\t('object_maxdist', 9)\n\t('object_start_index', 668)\n\t('object_end_index', 686)\n\t('object_score', 983)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 15, 57, 16))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4848\n('predication_id', 63627774)\n\t('sentence_id', 111373092)\n\t('index', None)\n\t('pmid', '18803425')\n\t('subject_source', None)\n\t('subject_cui', 'C0007634')\n\t('subject_name', 'Cells')\n\t('subject_type', 'cell')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0023434')\n\t('object_name', 'Chronic Lymphocytic Leukemia')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 111373092)\n\t('pmid', '18803425')\n\t('type', 'ab')\n\t('number', 9)\n\t('section_header', '')\n\t('sentence', 'In this review, we consider the use of array CGH as a clinical tool for the identification of genomic alterations with prognostic significance in CLL, and suggest ways to integrate this test into the clinical molecular diagnostic laboratory work flow.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 63627772)\n\t('predication_id', 63627774)\n\t('subject_text', 'CLL')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 1965)\n\t('subject_end_index', 1968)\n\t('subject_score', 916)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 1965)\n\t('predicate_end_index', 1968)\n\t('object_text', 'CLL')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 1965)\n\t('object_end_index', 1968)\n\t('object_score', 916)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 15, 12, 15))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4445\n('predication_id', 64368712)\n\t('sentence_id', 112797855)\n\t('index', None)\n\t('pmid', '18523868')\n\t('subject_source', None)\n\t('subject_cui', 'C0808232')\n\t('subject_name', 'Therapeutic Intervention')\n\t('subject_type', 'topp')\n\t('subject_score', None)\n\t('predicate', 'TREATS')\n\t('object_source', 
None)\n\t('object_cui', 'C0280100')\n\t('object_name', 'Solid tumor')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 112797855)\n\t('pmid', '18523868')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'The identification and characterization of the components of individual signal transduction cascades, and advances in our understanding on how these biological signals are integrated in cancer initiation and progression, have provided new strategies for therapeutic intervention in solid tumors and hematological malignancies.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 64368716)\n\t('predication_id', 64368712)\n\t('subject_text', 'therapeutic intervention')\n\t('subject_dist', 1)\n\t('subject_maxdist', 10)\n\t('subject_start_index', 599)\n\t('subject_end_index', 623)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 624)\n\t('predicate_end_index', 626)\n\t('object_text', 'solid tumors')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 627)\n\t('object_end_index', 639)\n\t('object_score', 983)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 15, 34, 19))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2881\n('predication_id', 57061016)\n\t('sentence_id', 98990575)\n\t('index', None)\n\t('pmid', '17308357')\n\t('subject_source', None)\n\t('subject_cui', 'C0012929')\n\t('subject_name', 'DNA, Mitochondrial')\n\t('subject_type', 'bacs')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0596263')\n\t('object_name', 'Carcinogenesis')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 98990575)\n\t('pmid', '17308357')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'Thus, either the majority of diverse mtDNA mutations observed in tumors are not important for the process of carcinogenesis or that they play a common oncogenic role.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 57061015)\n\t('predication_id', 57061016)\n\t('subject_text', 'mtDNA')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 962)\n\t('subject_end_index', 967)\n\t('subject_score', 790)\n\t('indicator_type', 'INFER')\n\t('predicate_start_index', 990)\n\t('predicate_end_index', 1054)\n\t('object_text', 'carcinogenesis')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 1040)\n\t('object_end_index', 1054)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 12, 47))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1804\n('predication_id', 99588213)\n\t('sentence_id', 183038523)\n\t('index', None)\n\t('pmid', '26467218')\n\t('subject_source', None)\n\t('subject_cui', 'C0349639')\n\t('subject_name', 'Juvenile myelomonocytic leukemia')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0206530')\n\t('object_name', 'Germ-Line Mutation')\n\t('object_type', 'genf')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 
183038523)\n\t('pmid', '26467218')\n\t('type', 'ab')\n\t('number', 7)\n\t('section_header', 'CONCLUSIONS')\n\t('sentence', 'Neither of affected individuals in this family presented with juvenile myelomonocytic leukemia (JMML), which together with previously published results suggest that the risk for NS individuals with a germline NRAS mutation developing JMML is not different from the proportion seen in other NS cases.')\n\t('normalized_section_header', 'CONCLUSIONS')\n\nSCORE DATA:\n('predication_aux_id', 99588214)\n\t('predication_id', 99588213)\n\t('subject_text', 'JMML')\n\t('subject_dist', 1)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 1590)\n\t('subject_end_index', 1594)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 1573)\n\t('predicate_end_index', 1583)\n\t('object_text', 'germline NRAS mutation')\n\t('object_dist', 1)\n\t('object_maxdist', 7)\n\t('object_start_index', 1550)\n\t('object_end_index', 1572)\n\t('object_score', 901)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 11, 37, 40))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4176\n('predication_id', 61379960)\n\t('sentence_id', 107064772)\n\t('index', None)\n\t('pmid', '21142662')\n\t('subject_source', None)\n\t('subject_cui', 'C0007115')\n\t('subject_name', 'Malignant neoplasm of thyroid')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0043210')\n\t('object_name', 'Woman')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 107064772)\n\t('pmid', '21142662')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'The less aggressive histologic subtypes of thyroid cancer are more common in women, whereas the more aggressive histologic subtypes have similar gender distribution.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 61379959)\n\t('predication_id', 61379960)\n\t('subject_text', 'thyroid cancer')\n\t('subject_dist', 2)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 532)\n\t('subject_end_index', 546)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 563)\n\t('predicate_end_index', 565)\n\t('object_text', 'women')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 572)\n\t('object_end_index', 577)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 18, 51))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1166\n('predication_id', 69960868)\n\t('sentence_id', 123605194)\n\t('index', None)\n\t('pmid', '21383288')\n\t('subject_source', None)\n\t('subject_cui', 'C1334474')\n\t('subject_name', 'MAP2K1 gene|MAP2K1')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'AUGMENTS')\n\t('object_source', None)\n\t('object_cui', 'C1150423')\n\t('object_name', 'kinase activity')\n\t('object_type', 'moft')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 123605194)\n\t('pmid', '21383288')\n\t('type', 'ab')\n\t('number', 6)\n\t('section_header', '')\n\t('sentence', 'The MEK1(C121S) mutation was shown to increase kinase activity and confer robust resistance to both RAF and MEK inhibition in 
vitro.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 69960875)\n\t('predication_id', 69960868)\n\t('subject_text', 'MEK1')\n\t('subject_dist', 3)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 1330)\n\t('subject_end_index', 1334)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 1364)\n\t('predicate_end_index', 1372)\n\t('object_text', 'kinase activity')\n\t('object_dist', 1)\n\t('object_maxdist', 5)\n\t('object_start_index', 1373)\n\t('object_end_index', 1388)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 17, 51, 8))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4547\n('predication_id', 54095978)\n\t('sentence_id', 93508827)\n\t('index', None)\n\t('pmid', '17690212')\n\t('subject_source', None)\n\t('subject_cui', 'C0087111')\n\t('subject_name', 'Therapeutic procedure')\n\t('subject_type', 'topp')\n\t('subject_score', None)\n\t('predicate', 'USES')\n\t('object_source', None)\n\t('object_cui', 'C1514713')\n\t('object_name', 'Raf Kinase Inhibitor')\n\t('object_type', 'phsu')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 93508827)\n\t('pmid', '17690212')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'These mutations lead to constitutive activation of BRAF with increased downstream ERK (extracellular signal-regulated kinase) signaling; therefore, the development of RAF kinase inhibitors for targeted therapy is being actively pursued.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 54095994)\n\t('predication_id', 54095978)\n\t('subject_text', 'therapy')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 640)\n\t('subject_end_index', 647)\n\t('subject_score', 872)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 621)\n\t('predicate_end_index', 624)\n\t('object_text', 'RAF kinase inhibitors')\n\t('object_dist', 1)\n\t('object_maxdist', 8)\n\t('object_start_index', 599)\n\t('object_end_index', 620)\n\t('object_score', 988)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 11, 56, 1))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7303\n('predication_id', 73760351)\n\t('sentence_id', 131023820)\n\t('index', None)\n\t('pmid', '22659270')\n\t('subject_source', None)\n\t('subject_cui', 'C0007859')\n\t('subject_name', 'Neck Pain')\n\t('subject_type', 'sosy')\n\t('subject_score', None)\n\t('predicate', 'ISA')\n\t('object_source', None)\n\t('object_cui', 'C1457887')\n\t('object_name', 'Symptoms')\n\t('object_type', 'sosy')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 131023820)\n\t('pmid', '22659270')\n\t('type', 'ab')\n\t('number', 7)\n\t('section_header', '')\n\t('sentence', 'This malformation should be considered in patients with the 17q21.31 microdeletion syndrome, presenting suggestive symptoms (headache, neck pain, cerebellar signs or muscle weakness).')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 73760351)\n\t('predication_id', 73760351)\n\t('subject_text', 'neck pain')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 1576)\n\t('subject_end_index', 1585)\n\t('subject_score', 1000)\n\t('indicator_type', 'SPEC')\n\t('predicate_start_index', 
1556)\n\t('predicate_end_index', 1574)\n\t('object_text', 'symptoms')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 1556)\n\t('object_end_index', 1564)\n\t('object_score', 861)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 22, 43, 28))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1297\n('predication_id', 58258401)\n\t('sentence_id', 101233392)\n\t('index', None)\n\t('pmid', '17586837')\n\t('subject_source', None)\n\t('subject_cui', 'C0599755')\n\t('subject_name', 'cohort')\n\t('subject_type', 'humn')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', '6654')\n\t('object_name', 'SOS1')\n\t('object_type', 'aapp')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 101233392)\n\t('pmid', '17586837')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', 'METHODS AND RESULTS')\n\t('sentence', 'METHODS AND RESULTS: We investigated SOS1 in a large cohort of patients with disorders of the NS-CFCS spectrum, who had previously tested negative for mutations in PTPN11, KRAS, BRAF, MEK1 and MEK2.')\n\t('normalized_section_header', 'RESULTS')\n\nSCORE DATA:\n('predication_aux_id', 58258400)\n\t('predication_id', 58258401)\n\t('subject_text', 'cohort')\n\t('subject_dist', 1)\n\t('subject_maxdist', 11)\n\t('subject_start_index', 810)\n\t('subject_end_index', 816)\n\t('subject_score', 888)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 799)\n\t('predicate_end_index', 801)\n\t('object_text', 'SOS1')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 788)\n\t('object_end_index', 792)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 13, 8, 25))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 303\n('predication_id', 56790266)\n\t('sentence_id', 98493050)\n\t('index', None)\n\t('pmid', '17551924')\n\t('subject_source', None)\n\t('subject_cui', '5605')\n\t('subject_name', 'MAP2K2')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0039082')\n\t('object_name', 'Syndrome')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 98493050)\n\t('pmid', '17551924')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'The recent discoveries of germline HRAS mutations in patients with Costello syndrome and mutations in BRAF, MEK1, and MEK2 in CFC syndrome uncovered the biologic mechanism for the shared phenotypic findings based on the close interaction of the affected gene products within the MAP kinase pathway.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 56790263)\n\t('predication_id', 56790266)\n\t('subject_text', 'MEK2')\n\t('subject_dist', 1)\n\t('subject_maxdist', 8)\n\t('subject_start_index', 656)\n\t('subject_end_index', 660)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 661)\n\t('predicate_end_index', 663)\n\t('object_text', 'syndrome')\n\t('object_dist', 1)\n\t('object_maxdist', 6)\n\t('object_start_index', 668)\n\t('object_end_index', 676)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 12, 42, 
10))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 10071\n('predication_id', 73127362)\n\t('sentence_id', 129748780)\n\t('index', None)\n\t('pmid', '21788131')\n\t('subject_source', None)\n\t('subject_cui', 'C0041582')\n\t('subject_name', 'Ulcer')\n\t('subject_type', 'patf')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 129748780)\n\t('pmid', '21788131')\n\t('type', 'ab')\n\t('number', 12)\n\t('section_header', 'CONCLUSIONS')\n\t('sentence', 'Patients with mutations in BRAF or NRAS gene are frequently present with ulceration, and mutation in BRAF or NRAS gene is indicator for poor prognosis.')\n\t('normalized_section_header', 'CONCLUSIONS')\n\nSCORE DATA:\n('predication_aux_id', 73127359)\n\t('predication_id', 73127362)\n\t('subject_text', 'ulceration')\n\t('subject_dist', 1)\n\t('subject_maxdist', 6)\n\t('subject_start_index', 1884)\n\t('subject_end_index', 1894)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1873)\n\t('predicate_end_index', 1877)\n\t('object_text', 'Patients')\n\t('object_dist', 4)\n\t('object_maxdist', 4)\n\t('object_start_index', 1805)\n\t('object_end_index', 1813)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 22, 29, 55))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5560\n('predication_id', 101326831)\n\t('sentence_id', 186722480)\n\t('index', None)\n\t('pmid', '26934580')\n\t('subject_source', None)\n\t('subject_cui', 'C0004352')\n\t('subject_name', 'Autistic Disorder')\n\t('subject_type', 'mobd')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 186722480)\n\t('pmid', '26934580')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', '')\n\t('sentence', 'To explore this hypothesis we sought to discover whether autistic patients more often have rare coding, single-nucleotide variants within tumor suppressor and oncogenes and whether autistic patients are more often diagnosed with neoplasms.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 101326832)\n\t('predication_id', 101326831)\n\t('subject_text', 'autistic')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 841)\n\t('subject_end_index', 849)\n\t('subject_score', 888)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 841)\n\t('predicate_end_index', 858)\n\t('object_text', 'patients')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 850)\n\t('object_end_index', 858)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 13, 22, 31))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3319\n('predication_id', 105134315)\n\t('sentence_id', 194798980)\n\t('index', None)\n\t('pmid', '28637487')\n\t('subject_source', None)\n\t('subject_cui', 'C0544886')\n\t('subject_name', 'Somatic mutation')\n\t('subject_type', 'comd')\n\t('subject_score', 
None)\n\t('predicate', 'COEXISTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0151779')\n\t('object_name', '[X]Malignant melanoma of skin, unspecified')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 194798980)\n\t('pmid', '28637487')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', 'CONCLUSION')\n\t('sentence', 'Other curiosities are observed, such as a high rate of somatic NF1 mutation in cutaneous melanoma, lung cancer, ovarian carcinoma and glioblastoma which are not usually associated with neurofibromatosis type 1.')\n\t('normalized_section_header', 'CONCLUSIONS')\n\nSCORE DATA:\n('predication_aux_id', 105134315)\n\t('predication_id', 105134315)\n\t('subject_text', 'somatic NF1 mutation')\n\t('subject_dist', 1)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 1964)\n\t('subject_end_index', 1984)\n\t('subject_score', 901)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1985)\n\t('predicate_end_index', 1987)\n\t('object_text', 'cutaneous melanoma')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 1988)\n\t('object_end_index', 2006)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 10, 10, 49, 15))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 212\n('predication_id', 68900900)\n\t('sentence_id', 121542544)\n\t('index', None)\n\t('pmid', '19156172')\n\t('subject_source', None)\n\t('subject_cui', '5605')\n\t('subject_name', 'MAP2K2')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0012634')\n\t('object_name', 'Disease')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 121542544)\n\t('pmid', '19156172')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', '')\n\t('sentence', 'Here we screened a cohort of 33 individuals with CFCS for MEK1 and MEK2 gene mutations to further explore their molecular spectrum in this disorder, and to analyze genotype-phenotype correlations.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 68900899)\n\t('predication_id', 68900900)\n\t('subject_text', 'MEK2')\n\t('subject_dist', 3)\n\t('subject_maxdist', 6)\n\t('subject_start_index', 860)\n\t('subject_end_index', 864)\n\t('subject_score', 884)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 930)\n\t('predicate_end_index', 932)\n\t('object_text', 'disorder')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 938)\n\t('object_end_index', 946)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 17, 25, 6))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 304\n('predication_id', 56790304)\n\t('sentence_id', 98493050)\n\t('index', None)\n\t('pmid', '17551924')\n\t('subject_source', None)\n\t('subject_cui', 'C0812241')\n\t('subject_name', 'BRAF gene|BRAF|SNRPE')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0039082')\n\t('object_name', 'Syndrome')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 98493050)\n\t('pmid', 
'17551924')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'The recent discoveries of germline HRAS mutations in patients with Costello syndrome and mutations in BRAF, MEK1, and MEK2 in CFC syndrome uncovered the biologic mechanism for the shared phenotypic findings based on the close interaction of the affected gene products within the MAP kinase pathway.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 56790303)\n\t('predication_id', 56790304)\n\t('subject_text', 'BRAF')\n\t('subject_dist', 1)\n\t('subject_maxdist', 8)\n\t('subject_start_index', 640)\n\t('subject_end_index', 644)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 661)\n\t('predicate_end_index', 663)\n\t('object_text', 'syndrome')\n\t('object_dist', 1)\n\t('object_maxdist', 6)\n\t('object_start_index', 668)\n\t('object_end_index', 676)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 12, 42, 10))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5882\n('predication_id', 74784786)\n\t('sentence_id', 133055870)\n\t('index', None)\n\t('pmid', '20645028')\n\t('subject_source', None)\n\t('subject_cui', 'C0948380')\n\t('subject_name', 'Colorectal cancer metastatic')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 133055870)\n\t('pmid', '20645028')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'In this study, 239 samples obtained from 215 patients with metastatic colorectal cancer were tested for the presence of the seven most common mutations in the KRAS gene and the V600E mutation in the BRAF gene.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 74784792)\n\t('predication_id', 74784786)\n\t('subject_text', 'metastatic colorectal cancer')\n\t('subject_dist', 1)\n\t('subject_maxdist', 6)\n\t('subject_start_index', 1057)\n\t('subject_end_index', 1085)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1052)\n\t('predicate_end_index', 1056)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 1043)\n\t('object_end_index', 1051)\n\t('object_score', 861)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 23, 7, 10))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1095\n('predication_id', 93914265)\n\t('sentence_id', 147027199)\n\t('index', None)\n\t('pmid', '25337068')\n\t('subject_source', None)\n\t('subject_cui', 'C0206530')\n\t('subject_name', 'Germ-Line Mutation')\n\t('subject_type', 'genf')\n\t('subject_score', None)\n\t('predicate', 'CAUSES')\n\t('object_source', None)\n\t('object_cui', 'C0012634')\n\t('object_name', 'Disease')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 147027199)\n\t('pmid', '25337068')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Noonan syndrome (NS) and related disorders, which are now summarized under the term RASopathies, are caused by germline mutations in genes encoding protein components of the 
Ras/mitogen-activated protein kinase pathway.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 93914265)\n\t('predication_id', 93914265)\n\t('subject_text', 'germline mutations')\n\t('subject_dist', 1)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 418)\n\t('subject_end_index', 436)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 408)\n\t('predicate_end_index', 414)\n\t('object_text', 'disorders')\n\t('object_dist', 2)\n\t('object_maxdist', 3)\n\t('object_start_index', 334)\n\t('object_end_index', 343)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 4, 15, 7))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 10307\n('predication_id', 59016053)\n\t('sentence_id', 102623117)\n\t('index', None)\n\t('pmid', '17065421')\n\t('subject_source', None)\n\t('subject_cui', 'C1511790')\n\t('subject_name', 'Detection')\n\t('subject_type', 'topp')\n\t('subject_score', None)\n\t('predicate', 'TREATS')\n\t('object_source', None)\n\t('object_cui', 'C0009402')\n\t('object_name', 'Carcinoma of the Large Intestine')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 102623117)\n\t('pmid', '17065421')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Detection of BRAF V600E mutation in colorectal cancer: comparison of automatic sequencing and real-time chemistry methodology.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 59016052)\n\t('predication_id', 59016053)\n\t('subject_text', 'Detection')\n\t('subject_dist', 2)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 149)\n\t('subject_end_index', 158)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 182)\n\t('predicate_end_index', 184)\n\t('object_text', 'colorectal cancer')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 185)\n\t('object_end_index', 202)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 13, 20, 36))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5989\n('predication_id', 91086065)\n\t('sentence_id', 149622761)\n\t('index', None)\n\t('pmid', '24865967')\n\t('subject_source', None)\n\t('subject_cui', 'C0017968')\n\t('subject_name', 'Glycoproteins')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C1167322')\n\t('object_name', 'integral to membrane')\n\t('object_type', 'celc')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 149622761)\n\t('pmid', '24865967')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', '')\n\t('sentence', 'Here, we identify \"protein zero-related\" (PZR), a transmembrane glycoprotein that interfaces with the extracellular matrix to promote cell migration, as a major hyper-tyrosyl-phosphorylated protein in mouse and zebrafish models of NS and LS.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 91086077)\n\t('predication_id', 91086065)\n\t('subject_text', 'glycoprotein')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 787)\n\t('subject_end_index', 799)\n\t('subject_score', 888)\n\t('indicator_type', 
'MOD/HEAD')\n\t('predicate_start_index', 773)\n\t('predicate_end_index', 799)\n\t('object_text', 'transmembrane')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 773)\n\t('object_end_index', 786)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 23, 47, 7))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 763\n('predication_id', 77738756)\n\t('sentence_id', 138903873)\n\t('index', None)\n\t('pmid', '22609219')\n\t('subject_source', None)\n\t('subject_cui', 'C0018704')\n\t('subject_name', 'Health care facility')\n\t('subject_type', 'hcro')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0008972')\n\t('object_name', 'Clinical Research')\n\t('object_type', 'resa')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 138903873)\n\t('pmid', '22609219')\n\t('type', 'ab')\n\t('number', 9)\n\t('section_header', 'LIMITATIONS')\n\t('sentence', 'LIMITATIONS: This study was limited by the small number of cases, all from a single institution.')\n\t('normalized_section_header', 'CONCLUSIONS')\n\nSCORE DATA:\n('predication_aux_id', 77738751)\n\t('predication_id', 77738756)\n\t('subject_text', 'institution')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 1746)\n\t('subject_end_index', 1757)\n\t('subject_score', 888)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1732)\n\t('predicate_end_index', 1736)\n\t('object_text', 'study')\n\t('object_dist', 3)\n\t('object_maxdist', 4)\n\t('object_start_index', 1674)\n\t('object_end_index', 1679)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 7, 0, 30))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4086\n('predication_id', 76629125)\n\t('sentence_id', 136695605)\n\t('index', None)\n\t('pmid', '22190222')\n\t('subject_source', None)\n\t('subject_cui', 'C0238463')\n\t('subject_name', 'Papillary thyroid carcinoma')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 136695605)\n\t('pmid', '22190222')\n\t('type', 'ab')\n\t('number', 11)\n\t('section_header', 'CONCLUSIONS')\n\t('sentence', 'CONCLUSIONS: The BRAF(V600E) mutation was associated with high-risk clinicopathologic characteristics in patients with PTC.')\n\t('normalized_section_header', 'CONCLUSIONS')\n\nSCORE DATA:\n('predication_aux_id', 76629126)\n\t('predication_id', 76629125)\n\t('subject_text', 'PTC')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 2185)\n\t('subject_end_index', 2188)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 2180)\n\t('predicate_end_index', 2184)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 6)\n\t('object_start_index', 2171)\n\t('object_end_index', 2179)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 7, 0, 0, 4))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2798\n('predication_id', 56376680)\n\t('sentence_id', 97730535)\n\t('index', None)\n\t('pmid', 
'18060073')\n\t('subject_source', None)\n\t('subject_cui', 'C0812241')\n\t('subject_name', 'BRAF gene|BRAF|SNRPE')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0007600')\n\t('object_name', 'Cell Line')\n\t('object_type', 'cell')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 97730535)\n\t('pmid', '18060073')\n\t('type', 'ab')\n\t('number', 6)\n\t('section_header', 'BACKGROUND')\n\t('sentence', 'Together, these four cell lines contained four different BRAF mutations, two of which were novel.')\n\t('normalized_section_header', 'BACKGROUND')\n\nSCORE DATA:\n('predication_aux_id', 56376683)\n\t('predication_id', 56376680)\n\t('subject_text', 'BRAF')\n\t('subject_dist', 1)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 1073)\n\t('subject_end_index', 1077)\n\t('subject_score', 815)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 1048)\n\t('predicate_end_index', 1057)\n\t('object_text', 'cell lines')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 1031)\n\t('object_end_index', 1041)\n\t('object_score', 901)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 12, 34, 53))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7235\n('predication_id', 46420688)\n\t('sentence_id', 79617556)\n\t('index', None)\n\t('pmid', '10586172')\n\t('subject_source', None)\n\t('subject_cui', 'C0028326')\n\t('subject_name', 'Noonan Syndrome')\n\t('subject_type', 'cgab')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0008059')\n\t('object_name', 'Child')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 79617556)\n\t('pmid', '10586172')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', 'OBJECTIVE')\n\t('sentence', 'OBJECTIVE: To report the relative prevalence of various forms of congenital heart disease (CHD) in children with Noonan syndrome (NS) and to describe anatomic characteristics of the subgroup of patients with atrioventricular canal (AVC).')\n\t('normalized_section_header', 'OBJECTIVE')\n\nSCORE DATA:\n('predication_aux_id', 46420686)\n\t('predication_id', 46420688)\n\t('subject_text', 'Noonan syndrome')\n\t('subject_dist', 1)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 431)\n\t('subject_end_index', 446)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 426)\n\t('predicate_end_index', 430)\n\t('object_text', 'children')\n\t('object_dist', 1)\n\t('object_maxdist', 5)\n\t('object_start_index', 417)\n\t('object_end_index', 425)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 1, 22, 15))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 6790\n('predication_id', 57108372)\n\t('sentence_id', 99077550)\n\t('index', None)\n\t('pmid', '17875889')\n\t('subject_source', None)\n\t('subject_cui', 'C0587248')\n\t('subject_name', 'Costello syndrome')\n\t('subject_type', 'dsyn')\n\t('subject_score', None)\n\t('predicate', 'COEXISTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0347390')\n\t('object_name', 'Skin Papilloma')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 
'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 99077550)\n\t('pmid', '17875889')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Verrucous papules and plaques in a pediatric patient: cutaneous papillomas associated with Costello syndrome.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 57108355)\n\t('predication_id', 57108372)\n\t('subject_text', 'Costello syndrome')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 248)\n\t('subject_end_index', 265)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 232)\n\t('predicate_end_index', 242)\n\t('object_text', 'cutaneous papillomas')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 205)\n\t('object_end_index', 225)\n\t('object_score', 983)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 12, 47, 51))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 268\n('predication_id', 62003390)\n\t('sentence_id', 108272603)\n\t('index', None)\n\t('pmid', '18456719')\n\t('subject_source', None)\n\t('subject_cui', 'C1335280')\n\t('subject_name', 'PTPN11 gene|PTPN11')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 108272603)\n\t('pmid', '18456719')\n\t('type', 'ab')\n\t('number', 6)\n\t('section_header', 'RESULTS')\n\t('sentence', 'RESULTS: Mutations were identified in seven patients with CFC (two in BRAF, one in KRAS, one in MEK1, two in MEK2 and one in SOS1).')\n\t('normalized_section_header', 'RESULTS')\n\nSCORE DATA:\n('predication_aux_id', 62003390)\n\t('predication_id', 62003390)\n\t('subject_text', 'CFC')\n\t('subject_dist', 1)\n\t('subject_maxdist', 11)\n\t('subject_start_index', 1112)\n\t('subject_end_index', 1115)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1107)\n\t('predicate_end_index', 1111)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 1098)\n\t('object_end_index', 1106)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 32, 38))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5611\n('predication_id', 94775421)\n\t('sentence_id', 130883803)\n\t('index', None)\n\t('pmid', '24913517')\n\t('subject_source', None)\n\t('subject_cui', 'C0000768')\n\t('subject_name', 'Congenital Abnormality')\n\t('subject_type', 'cgab')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0039082')\n\t('object_name', 'Syndrome')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 130883803)\n\t('pmid', '24913517')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', '')\n\t('sentence', 'This Review summarises these malformation syndromes and discusses the recent research into their aetiology.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 94775424)\n\t('predication_id', 94775421)\n\t('subject_text', 'malformation')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 
678)\n\t('subject_end_index', 690)\n\t('subject_score', 888)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 678)\n\t('predicate_end_index', 700)\n\t('object_text', 'syndromes')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 691)\n\t('object_end_index', 700)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 5, 36, 55))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1561\n('predication_id', 95705517)\n\t('sentence_id', 175039704)\n\t('index', None)\n\t('pmid', '26518681')\n\t('subject_source', None)\n\t('subject_cui', 'C0596392')\n\t('subject_name', 'craniofacial')\n\t('subject_type', 'bpoc')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0000768')\n\t('object_name', 'Congenital Abnormality')\n\t('object_type', 'cgab')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 175039704)\n\t('pmid', '26518681')\n\t('type', 'ab')\n\t('number', 13)\n\t('section_header', 'WHAT IS KNOWN')\n\t('sentence', 'WHAT IS KNOWN: * Noonan syndrome is a common genetically heterogeneous disorder of autosomal dominant inheritance characterized by craniofacial dysmorphism, short stature, congenital heart defects, variable cognitive deficit, and other anomalies.')\n\t('normalized_section_header', 'BACKGROUND')\n\nSCORE DATA:\n('predication_aux_id', 95705516)\n\t('predication_id', 95705517)\n\t('subject_text', 'craniofacial')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 2223)\n\t('subject_end_index', 2235)\n\t('subject_score', 888)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 2223)\n\t('predicate_end_index', 2247)\n\t('object_text', 'dysmorphism')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 2236)\n\t('object_end_index', 2247)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 10, 10, 3))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3747\n('predication_id', 93480610)\n\t('sentence_id', 170490838)\n\t('index', None)\n\t('pmid', '25293576')\n\t('subject_source', None)\n\t('subject_cui', 'C0008838')\n\t('subject_name', 'Cisplatin')\n\t('subject_type', 'inch')\n\t('subject_score', None)\n\t('predicate', 'ISA')\n\t('object_source', None)\n\t('object_cui', 'C0003392')\n\t('object_name', 'Antineoplastic Agents')\n\t('object_type', 'phsu')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 170490838)\n\t('pmid', '25293576')\n\t('type', 'ab')\n\t('number', 10)\n\t('section_header', '')\n\t('sentence', 'Treatment combinations based on triptorelin and PI3K and ERK inhibitors and chemotherapeutic agent cisplatin have synergistic effects in in vitro models of TNBC.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 93480611)\n\t('predication_id', 93480610)\n\t('subject_text', 'cisplatin')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 2032)\n\t('subject_end_index', 2041)\n\t('subject_score', 901)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 2009)\n\t('predicate_end_index', 2041)\n\t('object_text', 'chemotherapeutic agent')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 2009)\n\t('object_end_index', 2031)\n\t('object_score', 
901)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 3, 33, 3))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2266\n('predication_id', 60528666)\n\t('sentence_id', 105419408)\n\t('index', None)\n\t('pmid', '18632602')\n\t('subject_source', None)\n\t('subject_cui', 'C0001418')\n\t('subject_name', 'Adenocarcinoma')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0024109')\n\t('object_name', 'Lung')\n\t('object_type', 'bpoc')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 105419408)\n\t('pmid', '18632602')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'We performed mutational profiling of a large cohort of lung adenocarcinomas to uncover other potential somatic mutations in genes of this pathway that could contribute to lung tumorigenesis.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 60528671)\n\t('predication_id', 60528666)\n\t('subject_text', 'adenocarcinomas')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 657)\n\t('subject_end_index', 672)\n\t('subject_score', 888)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 652)\n\t('predicate_end_index', 672)\n\t('object_text', 'lung')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 652)\n\t('object_end_index', 656)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 1, 57))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5640\n('predication_id', 100808393)\n\t('sentence_id', 185620494)\n\t('index', None)\n\t('pmid', '26742794')\n\t('subject_source', None)\n\t('subject_cui', 'C0178784')\n\t('subject_name', 'Organ')\n\t('subject_type', 'bpoc')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0056077')\n\t('object_name', 'coenzyme Q10')\n\t('object_type', 'orch')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 185620494)\n\t('pmid', '26742794')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', 'BACKGROUND')\n\t('sentence', 'Thus, early diagnosis, first evoked by mitochondrial respiratory chain (MRC) spectrophotometric analysis, then confirmed by direct measurement of CoQ10 levels, is of critical importance to prevent irreversible damage in organs such as the kidney and the central nervous system.')\n\t('normalized_section_header', 'BACKGROUND')\n\nSCORE DATA:\n('predication_aux_id', 100808394)\n\t('predication_id', 100808393)\n\t('subject_text', 'organs')\n\t('subject_dist', 1)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 909)\n\t('subject_end_index', 915)\n\t('subject_score', 966)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 906)\n\t('predicate_end_index', 908)\n\t('object_text', 'CoQ10')\n\t('object_dist', 3)\n\t('object_maxdist', 7)\n\t('object_start_index', 835)\n\t('object_end_index', 840)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 13, 15, 41))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 4609\n('predication_id', 53914642)\n\t('sentence_id', 93172354)\n\t('index', None)\n\t('pmid', '16880785')\n\t('subject_source', None)\n\t('subject_cui', 
'C0027651')\n\t('subject_name', 'Neoplasm')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 93172354)\n\t('pmid', '16880785')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'The effects of sorafenib--an oral multikinase inhibitor targeting the tumour and tumour vasculature--were evaluated in patients with advanced melanoma enrolled in a large multidisease Phase II randomised discontinuation trial (RDT).')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 53914637)\n\t('predication_id', 53914642)\n\t('subject_text', 'tumour')\n\t('subject_dist', 2)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 343)\n\t('subject_end_index', 349)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 395)\n\t('predicate_end_index', 397)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 398)\n\t('object_end_index', 406)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 11, 53, 11))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3509\n('predication_id', 101585986)\n\t('sentence_id', 187273188)\n\t('index', None)\n\t('pmid', '27138882')\n\t('subject_source', None)\n\t('subject_cui', 'C0547070')\n\t('subject_name', 'Ablation')\n\t('subject_type', 'topp')\n\t('subject_score', None)\n\t('predicate', 'USES')\n\t('object_source', None)\n\t('object_cui', 'C1441129')\n\t('object_name', 'Radioactive Iodine')\n\t('object_type', 'phsu')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 187273188)\n\t('pmid', '27138882')\n\t('type', 'ab')\n\t('number', 7)\n\t('section_header', 'METHODS')\n\t('sentence', 'Secondary outcome measures were tumour size, nodal positivity and radioactive iodine ablation rate.')\n\t('normalized_section_header', 'METHODS')\n\nSCORE DATA:\n('predication_aux_id', 101585989)\n\t('predication_id', 101585986)\n\t('subject_text', 'ablation')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 1064)\n\t('subject_end_index', 1072)\n\t('subject_score', 861)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 1045)\n\t('predicate_end_index', 1072)\n\t('object_text', 'radioactive iodine')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 1045)\n\t('object_end_index', 1063)\n\t('object_score', 861)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 13, 50, 24))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3847\n('predication_id', 89800442)\n\t('sentence_id', 139091918)\n\t('index', None)\n\t('pmid', '23844038')\n\t('subject_source', None)\n\t('subject_cui', '3429')\n\t('subject_name', 'IFI27|PSMD9|SSSCA1|DCTN6|TMED7')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'STIMULATES')\n\t('object_source', None)\n\t('object_cui', '2048')\n\t('object_name', 'EPHB2|MAPK1')\n\t('object_type', 'gngm')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 139091918)\n\t('pmid', '23844038')\n\t('type', 
'ab')\n\t('number', 5)\n\t('section_header', '')\n\t('sentence', 'In a BRAF(V600E)-containing xenograft model of human melanoma, orally administered dabrafenib inhibited ERK activation, downregulated Ki67, and upregulated p27, leading to tumor growth inhibition.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 89800442)\n\t('predication_id', 89800442)\n\t('subject_text', 'p27')\n\t('subject_dist', 2)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 1265)\n\t('subject_end_index', 1268)\n\t('subject_score', 1000)\n\t('indicator_type', 'NOM')\n\t('predicate_start_index', 1211)\n\t('predicate_end_index', 1221)\n\t('object_text', 'ERK')\n\t('object_dist', 0)\n\t('object_maxdist', 5)\n\t('object_start_index', 1207)\n\t('object_end_index', 1210)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 16, 17, 6))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3049\n('predication_id', 73279269)\n\t('sentence_id', 130058526)\n\t('index', None)\n\t('pmid', '22683711')\n\t('subject_source', None)\n\t('subject_cui', 'C0853032')\n\t('subject_name', 'Sebaceous nevus')\n\t('subject_type', 'patf')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0079471')\n\t('object_name', 'HRAS gene|HRAS')\n\t('object_type', 'gngm')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 130058526)\n\t('pmid', '22683711')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'We report that of 65 sebaceous nevi studied, 62 (95%) had mutations in the HRAS gene and 3 (5%) had mutations in the KRAS gene.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 73279269)\n\t('predication_id', 73279269)\n\t('subject_text', 'sebaceous nevi')\n\t('subject_dist', 2)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 593)\n\t('subject_end_index', 607)\n\t('subject_score', 884)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 646)\n\t('predicate_end_index', 648)\n\t('object_text', 'HRAS gene')\n\t('object_dist', 1)\n\t('object_maxdist', 3)\n\t('object_start_index', 653)\n\t('object_end_index', 662)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 22, 32, 51))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1864\n('predication_id', 90864646)\n\t('sentence_id', 134354176)\n\t('index', None)\n\t('pmid', '23799582')\n\t('subject_source', None)\n\t('subject_cui', 'C0007634')\n\t('subject_name', 'Cells')\n\t('subject_type', 'cell')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0012634')\n\t('object_name', 'Disease')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 134354176)\n\t('pmid', '23799582')\n\t('type', 'ab')\n\t('number', 5)\n\t('section_header', '')\n\t('sentence', 'Furthermore, many pigmentation disorders show additional defects in cells other than melanocytes, and identification of the genetic insults in these disorders has revealed pleiotropic genes, where a single gene is required for various functions in different cell types.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 90864647)\n\t('predication_id', 90864646)\n\t('subject_text', 
'cell types')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 1484)\n\t('subject_end_index', 1500)\n\t('subject_score', 865)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1471)\n\t('predicate_end_index', 1473)\n\t('object_text', 'disorders')\n\t('object_dist', 4)\n\t('object_maxdist', 10)\n\t('object_start_index', 1369)\n\t('object_end_index', 1378)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 18, 46, 1))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 9265\n('predication_id', 61017303)\n\t('sentence_id', 106352848)\n\t('index', None)\n\t('pmid', '18632543')\n\t('subject_source', None)\n\t('subject_cui', 'C0020028')\n\t('subject_name', 'Hospitals, University')\n\t('subject_type', 'hcro')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0008976')\n\t('object_name', 'Clinical Trials')\n\t('object_type', 'resa')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 106352848)\n\t('pmid', '18632543')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', 'OBJECTIVE')\n\t('sentence', 'DESIGN, SETTING, AND PARTICIPANTS: Sixty-two of 114 eligible children (54%) with NF1 participated in a randomized, double-blind, placebo-controlled trial conducted between January 20, 2006, and February 8, 2007, at an NF1 referral center at a Dutch university hospital.')\n\t('normalized_section_header', 'OBJECTIVE')\n\nSCORE DATA:\n('predication_aux_id', 61017310)\n\t('predication_id', 61017303)\n\t('subject_text', 'university hospital')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 1064)\n\t('subject_end_index', 1089)\n\t('subject_score', 901)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1053)\n\t('predicate_end_index', 1055)\n\t('object_text', 'trial')\n\t('object_dist', 2)\n\t('object_maxdist', 5)\n\t('object_start_index', 957)\n\t('object_end_index', 962)\n\t('object_score', 814)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 11, 37))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 660\n('predication_id', 95672095)\n\t('sentence_id', 156760360)\n\t('index', None)\n\t('pmid', '25815234')\n\t('subject_source', None)\n\t('subject_cui', 'C0087111')\n\t('subject_name', 'Therapeutic procedure')\n\t('subject_type', 'topp')\n\t('subject_score', None)\n\t('predicate', 'TREATS')\n\t('object_source', None)\n\t('object_cui', 'C0702166')\n\t('object_name', 'Acne')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 156760360)\n\t('pmid', '25815234')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', '')\n\t('sentence', 'After 2 months of oral isotretinoin treatment, improvement in her acne was observed.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 95672094)\n\t('predication_id', 95672095)\n\t('subject_text', 'treatment')\n\t('subject_dist', 2)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 627)\n\t('subject_end_index', 636)\n\t('subject_score', 851)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 650)\n\t('predicate_end_index', 652)\n\t('object_text', 'acne')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 657)\n\t('object_end_index', 661)\n\t('object_score', 
1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 10, 9, 21))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8700\n('predication_id', 41784887)\n\t('sentence_id', 71568635)\n\t('index', None)\n\t('pmid', '12393424')\n\t('subject_source', None)\n\t('subject_cui', 'C0460004')\n\t('subject_name', 'Head and neck structure')\n\t('subject_type', 'blor')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', 'C0006826')\n\t('object_name', 'Malignant Neoplasms')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 71568635)\n\t('pmid', '12393424')\n\t('type', 'ab')\n\t('number', 7)\n\t('section_header', '')\n\t('sentence', 'The highest solid tumor O/E ratios were 4317 for vulvar cancer, 2362 for esophageal cancer, and 706 for head and neck cancer.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 41784877)\n\t('predication_id', 41784887)\n\t('subject_text', 'head and neck')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 1319)\n\t('subject_end_index', 1332)\n\t('subject_score', 824)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 1319)\n\t('predicate_end_index', 1339)\n\t('object_text', 'cancer')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 1333)\n\t('object_end_index', 1339)\n\t('object_score', 824)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 0, 36, 18))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5007\n('predication_id', 108932690)\n\t('sentence_id', 203026656)\n\t('index', None)\n\t('pmid', '28674118')\n\t('subject_source', None)\n\t('subject_cui', 'C0085390')\n\t('subject_name', 'Li-Fraumeni Syndrome')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'ISA')\n\t('object_source', None)\n\t('object_cui', 'C0039082')\n\t('object_name', 'Syndrome')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 203026656)\n\t('pmid', '28674118')\n\t('type', 'ab')\n\t('number', 10)\n\t('section_header', '')\n\t('sentence', 'However, other cancer predisposition syndromes, such as Li-Fraumeni syndrome, RASopathies, and others, may be associated with an increased risk for NB.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 108932690)\n\t('predication_id', 108932690)\n\t('subject_text', 'Li-Fraumeni syndrome')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 1514)\n\t('subject_end_index', 1534)\n\t('subject_score', 1000)\n\t('indicator_type', 'SPEC')\n\t('predicate_start_index', 1495)\n\t('predicate_end_index', 1534)\n\t('object_text', 'syndromes')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 1495)\n\t('object_end_index', 1504)\n\t('object_score', 851)\n\t('curr_timestamp', datetime.datetime(2018, 3, 11, 12, 22, 36))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 5494\n('predication_id', 108797585)\n\t('sentence_id', 202736055)\n\t('index', None)\n\t('pmid', '28965845')\n\t('subject_source', None)\n\t('subject_cui', 'C0036341')\n\t('subject_name', 'Schizophrenia')\n\t('subject_type', 'mobd')\n\t('subject_score', None)\n\t('predicate', 
'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0237401')\n\t('object_name', 'Individual')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 202736055)\n\t('pmid', '28965845')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'Besides the 600 kb BP4-BP5 CNV found in 0.5%-1% of individuals with autism spectrum disorders and schizophrenia and whose rearrangement causes reciprocal defects in head size and body weight, a second distal 220 kb BP2-BP3 CNV is likewise a potent driver of neuropsychiatric, anatomical, and metabolic pathologies.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 108797585)\n\t('predication_id', 108797585)\n\t('subject_text', 'schizophrenia')\n\t('subject_dist', 1)\n\t('subject_maxdist', 10)\n\t('subject_start_index', 562)\n\t('subject_end_index', 575)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 521)\n\t('predicate_end_index', 525)\n\t('object_text', 'individuals')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 509)\n\t('object_end_index', 520)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 11, 12, 21, 20))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 227\n('predication_id', 71558670)\n\t('sentence_id', 126739032)\n\t('index', None)\n\t('pmid', '19376813')\n\t('subject_source', None)\n\t('subject_cui', 'C0206530')\n\t('subject_name', 'Germ-Line Mutation')\n\t('subject_type', 'genf')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0008059')\n\t('object_name', 'Child')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 126739032)\n\t('pmid', '19376813')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'Children born with germ-line mutations in BRAF, MEK1 or MEK2 develop cardio-facio-cutaneous (CFC) syndrome, an autosomal dominant syndrome characterized by a distinctive facial appearance, heart defects, skin and hair abnormalities and mental retardation.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 71558672)\n\t('predication_id', 71558670)\n\t('subject_text', 'germ-line mutations')\n\t('subject_dist', 1)\n\t('subject_maxdist', 13)\n\t('subject_start_index', 552)\n\t('subject_end_index', 577)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 547)\n\t('predicate_end_index', 551)\n\t('object_text', 'Children')\n\t('object_dist', 2)\n\t('object_maxdist', 2)\n\t('object_start_index', 533)\n\t('object_end_index', 541)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 18, 30, 51))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3867\n('predication_id', 90722447)\n\t('sentence_id', 144215024)\n\t('index', None)\n\t('pmid', '23599677')\n\t('subject_source', None)\n\t('subject_cui', 'C0752312')\n\t('subject_name', 'Mitogen-Activated Protein Kinases')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'TREATS')\n\t('object_source', None)\n\t('object_cui', 'C0278883')\n\t('object_name', 'Metastatic melanoma')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', 
Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 144215024)\n\t('pmid', '23599677')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'The MAPK (ERK) Pathway: Investigational Combinations for the Treatment Of BRAF-Mutated Metastatic Melanoma.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 90722439)\n\t('predication_id', 90722447)\n\t('subject_text', 'MAPK')\n\t('subject_dist', 4)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 187)\n\t('subject_end_index', 191)\n\t('subject_score', 1000)\n\t('indicator_type', 'NOM')\n\t('predicate_start_index', 244)\n\t('predicate_end_index', 253)\n\t('object_text', 'Metastatic Melanoma')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 276)\n\t('object_end_index', 295)\n\t('object_score', 852)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 18, 30, 33))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1102\n('predication_id', 88010980)\n\t('sentence_id', 130492686)\n\t('index', None)\n\t('pmid', '24183794')\n\t('subject_source', None)\n\t('subject_cui', 'C0017337')\n\t('subject_name', 'Genes')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'ASSOCIATED_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0012634')\n\t('object_name', 'Disease')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 130492686)\n\t('pmid', '24183794')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'The genes that are mutated in these diseases are varied and include genes that encode lymphatic endothelial cell (LEC) growth factor receptors and their ligands and transcription factors that control LEC fate and function.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 88010976)\n\t('predication_id', 88010980)\n\t('subject_text', 'genes')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 539)\n\t('subject_end_index', 544)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 562)\n\t('predicate_end_index', 564)\n\t('object_text', 'diseases')\n\t('object_dist', 1)\n\t('object_maxdist', 7)\n\t('object_start_index', 571)\n\t('object_end_index', 579)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 8, 11, 56, 57))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 9212\n('predication_id', 67727994)\n\t('sentence_id', 119296092)\n\t('index', None)\n\t('pmid', '20543203')\n\t('subject_source', None)\n\t('subject_cui', 'C0206530')\n\t('subject_name', 'Germ-Line Mutation')\n\t('subject_type', 'genf')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 119296092)\n\t('pmid', '20543203')\n\t('type', 'ab')\n\t('number', 11)\n\t('section_header', 'CONCLUSION')\n\t('sentence', 'CONCLUSION: A report of germline mutations of CBL in three patients with JMML is presented here, confirming the existence of an unreported inheritable condition associated with a predisposition to JMML.')\n\t('normalized_section_header', 
'CONCLUSIONS')\n\nSCORE DATA:\n('predication_aux_id', 67727994)\n\t('predication_id', 67727994)\n\t('subject_text', 'germline mutations')\n\t('subject_dist', 2)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 1831)\n\t('subject_end_index', 1849)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1857)\n\t('predicate_end_index', 1859)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 6)\n\t('object_start_index', 1866)\n\t('object_end_index', 1874)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 16, 56, 43))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8006\n('predication_id', 49668596)\n\t('sentence_id', 85381972)\n\t('index', None)\n\t('pmid', '15797016')\n\t('subject_source', None)\n\t('subject_cui', 'C0259902')\n\t('subject_name', 'gastrulation')\n\t('subject_type', 'orgf')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0042567')\n\t('object_name', 'Vertebrates')\n\t('object_type', 'vtbt')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 85381972)\n\t('pmid', '15797016')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Conserved patterns of cell movements during vertebrate gastrulation.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 49668595)\n\t('predication_id', 49668596)\n\t('subject_text', 'gastrulation')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 208)\n\t('subject_end_index', 220)\n\t('subject_score', 888)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 197)\n\t('predicate_end_index', 220)\n\t('object_text', 'vertebrate')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 197)\n\t('object_end_index', 207)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 10, 42, 37))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 6260\n('predication_id', 84107410)\n\t('sentence_id', 151650271)\n\t('index', None)\n\t('pmid', '21678127')\n\t('subject_source', None)\n\t('subject_cui', 'C0007634')\n\t('subject_name', 'Cells')\n\t('subject_type', 'cell')\n\t('subject_score', None)\n\t('predicate', 'LOCATION_OF')\n\t('object_source', None)\n\t('object_cui', '1758')\n\t('object_name', 'DMP1')\n\t('object_type', 'aapp')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 151650271)\n\t('pmid', '21678127')\n\t('type', 'ab')\n\t('number', 3)\n\t('section_header', '')\n\t('sentence', 'Real-time PCR analysis indicated that FGF2 upregulates Ank, Enpp1, Mgp, Slc20a1, and Dmp1 in MLO-Y4 cells.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 84107411)\n\t('predication_id', 84107410)\n\t('subject_text', 'cells')\n\t('subject_dist', 1)\n\t('subject_maxdist', 1)\n\t('subject_start_index', 775)\n\t('subject_end_index', 780)\n\t('subject_score', 827)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 759)\n\t('predicate_end_index', 761)\n\t('object_text', 'Dmp1')\n\t('object_dist', 1)\n\t('object_maxdist', 7)\n\t('object_start_index', 754)\n\t('object_end_index', 758)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 7, 3, 
7))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7998\n('predication_id', 59180701)\n\t('sentence_id', 102922966)\n\t('index', None)\n\t('pmid', '16364920')\n\t('subject_source', None)\n\t('subject_cui', '22882')\n\t('subject_name', 'ZHX2')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'STIMULATES')\n\t('object_source', None)\n\t('object_cui', '5894')\n\t('object_name', 'RAF1')\n\t('object_type', 'gngm')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 102922966)\n\t('pmid', '16364920')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', '')\n\t('sentence', 'Our data suggest that B-RAF activates C-RAF through a mechanism involving 14-3-3 mediated heterooligomerization and C-RAF transphosphorylation.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 59180694)\n\t('predication_id', 59180701)\n\t('subject_text', 'RAF')\n\t('subject_dist', 1)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 1082)\n\t('subject_end_index', 1085)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 1086)\n\t('predicate_end_index', 1095)\n\t('object_text', 'C-RAF')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 1180)\n\t('object_end_index', 1185)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 13, 23, 19))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 3257\n('predication_id', 111250693)\n\t('sentence_id', 208401923)\n\t('index', None)\n\t('pmid', '29683489')\n\t('subject_source', None)\n\t('subject_cui', 'C0040648')\n\t('subject_name', 'TRANSCRIPTION FACTOR')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'AFFECTS')\n\t('object_source', None)\n\t('object_cui', 'C0037080')\n\t('object_name', 'Signal Pathways')\n\t('object_type', 'celf')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 208401923)\n\t('pmid', '29683489')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Forkhead box O (FOXO) and p53 proteins are transcription factors that regulate diverse signalling pathways to control cell cycle, apoptosis and metabolism.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 111250693)\n\t('predication_id', 111250693)\n\t('subject_text', 'transcription factors')\n\t('subject_dist', 1)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 315)\n\t('subject_end_index', 336)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 342)\n\t('predicate_end_index', 350)\n\t('object_text', 'signalling pathways')\n\t('object_dist', 1)\n\t('object_maxdist', 4)\n\t('object_start_index', 365)\n\t('object_end_index', 384)\n\t('object_score', 884)\n\t('curr_timestamp', datetime.datetime(2018, 8, 3, 23, 43, 35))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2268\n('predication_id', 60528757)\n\t('sentence_id', 105419408)\n\t('index', None)\n\t('pmid', '18632602')\n\t('subject_source', None)\n\t('subject_cui', 'C0017337')\n\t('subject_name', 'Genes')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'PREDISPOSES')\n\t('object_source', None)\n\t('object_cui', 'C1326912')\n\t('object_name', 
'Tumorigenesis')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 105419408)\n\t('pmid', '18632602')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'We performed mutational profiling of a large cohort of lung adenocarcinomas to uncover other potential somatic mutations in genes of this pathway that could contribute to lung tumorigenesis.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 60528757)\n\t('predication_id', 60528757)\n\t('subject_text', 'genes')\n\t('subject_dist', 2)\n\t('subject_maxdist', 6)\n\t('subject_start_index', 727)\n\t('subject_end_index', 732)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 760)\n\t('predicate_end_index', 770)\n\t('object_text', 'tumorigenesis')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 785)\n\t('object_end_index', 798)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 1, 57))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8425\n('predication_id', 63266682)\n\t('sentence_id', 110682259)\n\t('index', None)\n\t('pmid', '19008228')\n\t('subject_source', None)\n\t('subject_cui', 'C0026882')\n\t('subject_name', 'Mutation')\n\t('subject_type', 'genf')\n\t('subject_score', None)\n\t('predicate', 'AFFECTS')\n\t('object_source', None)\n\t('object_cui', 'C0012634')\n\t('object_name', 'Disease')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 110682259)\n\t('pmid', '19008228')\n\t('type', 'ab')\n\t('number', 10)\n\t('section_header', '')\n\t('sentence', 'Collectively, these studies reaffirm the positive role of SHP-2 phosphatase in cell motility and suggest a new mechanism by which SHP-2 GOF mutations contribute to diseases.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 63266682)\n\t('predication_id', 63266682)\n\t('subject_text', 'mutations')\n\t('subject_dist', 0)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 1824)\n\t('subject_end_index', 1833)\n\t('subject_score', 723)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 1834)\n\t('predicate_end_index', 1844)\n\t('object_text', 'diseases')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 1848)\n\t('object_end_index', 1856)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 15, 1, 46))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 2515\n('predication_id', 92907312)\n\t('sentence_id', 129728549)\n\t('index', None)\n\t('pmid', '24736444')\n\t('subject_source', None)\n\t('subject_cui', 'C0284927')\n\t('subject_name', '6-pyruvoyltetrahydropterin synthase|SLC25A3|REG1A|PTPRU')\n\t('subject_type', 'gngm')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0521449')\n\t('object_name', 'Cytoplasmic')\n\t('object_type', 'blor')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 129728549)\n\t('pmid', '24736444')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'The PTPN11 (protein-tyrosine phosphatase, non-receptor type 11) gene encodes SHP2, a cytoplasmic PTP that is 
essential for vertebrate development.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 92907308)\n\t('predication_id', 92907312)\n\t('subject_text', 'PTP')\n\t('subject_dist', 0)\n\t('subject_maxdist', 0)\n\t('subject_start_index', 412)\n\t('subject_end_index', 415)\n\t('subject_score', 872)\n\t('indicator_type', 'MOD/HEAD')\n\t('predicate_start_index', 400)\n\t('predicate_end_index', 415)\n\t('object_text', 'cytoplasmic')\n\t('object_dist', 0)\n\t('object_maxdist', 0)\n\t('object_start_index', 400)\n\t('object_end_index', 411)\n\t('object_score', 872)\n\t('curr_timestamp', datetime.datetime(2018, 3, 9, 2, 38, 25))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 8719\n('predication_id', 49953719)\n\t('sentence_id', 85907761)\n\t('index', None)\n\t('pmid', '17218256')\n\t('subject_source', None)\n\t('subject_cui', 'C0021368')\n\t('subject_name', 'Inflammation')\n\t('subject_type', 'patf')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0026809')\n\t('object_name', 'Mus')\n\t('object_type', 'mamm')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 85907761)\n\t('pmid', '17218256')\n\t('type', 'ti')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Increased myeloid cell responses to M-CSF and RANKL cause bone loss and inflammation in SH3BP2 \"cherubism\" mice.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 49953713)\n\t('predication_id', 49953719)\n\t('subject_text', 'inflammation')\n\t('subject_dist', 1)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 232)\n\t('subject_end_index', 244)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 245)\n\t('predicate_end_index', 247)\n\t('object_text', 'mice')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 267)\n\t('object_end_index', 271)\n\t('object_score', 790)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 10, 47, 14))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 969\n('predication_id', 62233133)\n\t('sentence_id', 108708932)\n\t('index', None)\n\t('pmid', '18651097')\n\t('subject_source', None)\n\t('subject_cui', 'C0028326')\n\t('subject_name', 'Noonan Syndrome')\n\t('subject_type', 'cgab')\n\t('subject_score', None)\n\t('predicate', 'PART_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 108708932)\n\t('pmid', '18651097')\n\t('type', 'ab')\n\t('number', 7)\n\t('section_header', '')\n\t('sentence', 'In the patients with a CFC phenotype, three mutations, including a novel three amino-acid insertion, were identified in one CFC patient and two patients with both NS and CFC phenotypes.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 62233136)\n\t('predication_id', 62233133)\n\t('subject_text', 'NS')\n\t('subject_dist', 1)\n\t('subject_maxdist', 2)\n\t('subject_start_index', 1622)\n\t('subject_end_index', 1624)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1612)\n\t('predicate_end_index', 1616)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 
6)\n\t('object_start_index', 1603)\n\t('object_end_index', 1611)\n\t('object_score', 888)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 14, 37, 48))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1732\n('predication_id', 80507425)\n\t('sentence_id', 144431950)\n\t('index', None)\n\t('pmid', '22591685')\n\t('subject_source', None)\n\t('subject_cui', 'C0949628')\n\t('subject_name', 'Uniparental Disomy')\n\t('subject_type', 'comd')\n\t('subject_score', None)\n\t('predicate', 'COEXISTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0023449')\n\t('object_name', 'Leukemia, Lymphocytic, Acute')\n\t('object_type', 'neop')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 144431950)\n\t('pmid', '22591685')\n\t('type', 'ab')\n\t('number', 2)\n\t('section_header', '')\n\t('sentence', 'We analyzed 77 samples from hematologic malignancies, identifying a somatic mutation in CBL (p.C381R) in one patient with T-ALL that was associated with a uniparental disomy at the CBL locus and a germline heterozygous mutation in one patient with JMML.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 80507428)\n\t('predication_id', 80507425)\n\t('subject_text', 'uniparental disomy')\n\t('subject_dist', 1)\n\t('subject_maxdist', 5)\n\t('subject_start_index', 612)\n\t('subject_end_index', 630)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 594)\n\t('predicate_end_index', 604)\n\t('object_text', 'ALL')\n\t('object_dist', 1)\n\t('object_maxdist', 7)\n\t('object_start_index', 581)\n\t('object_end_index', 584)\n\t('object_score', 923)\n\t('curr_timestamp', datetime.datetime(2018, 3, 7, 1, 37, 18))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 9389\n('predication_id', 23093628)\n\t('sentence_id', 40479635)\n\t('index', None)\n\t('pmid', '8824319')\n\t('subject_source', None)\n\t('subject_cui', 'C0301714')\n\t('subject_name', 'Amino Acids, Basic')\n\t('subject_type', 'aapp')\n\t('subject_score', None)\n\t('predicate', 'AFFECTS')\n\t('object_source', None)\n\t('object_cui', 'C0596901')\n\t('object_name', 'Membrane')\n\t('object_type', 'celc')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 40479635)\n\t('pmid', '8824319')\n\t('type', 'ab')\n\t('number', 8)\n\t('section_header', '')\n\t('sentence', 'Both proteins contain a C-terminal cluster of basic amino acids, which could provide a mechanism for membrane association.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 23093631)\n\t('predication_id', 23093628)\n\t('subject_text', 'basic amino acids')\n\t('subject_dist', 2)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 1222)\n\t('subject_end_index', 1239)\n\t('subject_score', 1000)\n\t('indicator_type', 'NOM')\n\t('predicate_start_index', 1292)\n\t('predicate_end_index', 1303)\n\t('object_text', 'membrane')\n\t('object_dist', 0)\n\t('object_maxdist', 4)\n\t('object_start_index', 1283)\n\t('object_end_index', 1291)\n\t('object_score', 694)\n\t('curr_timestamp', datetime.datetime(2018, 3, 5, 21, 6, 4))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 10246\n('predication_id', 69165177)\n\t('sentence_id', 122052276)\n\t('index', None)\n\t('pmid', '18368129')\n\t('subject_source', 
None)\n\t('subject_cui', 'C0025202')\n\t('subject_name', 'melanoma')\n\t('subject_type', 'neop')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0337810')\n\t('object_name', 'Italians')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 122052276)\n\t('pmid', '18368129')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Melanocortin-1 receptor (MC1R) variants have been associated with BRAF (v-raf murine sarcoma viral oncogene homolog B1) mutations in non-CSD (chronic solar-damaged) melanomas in an Italian and an American population.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 69165166)\n\t('predication_id', 69165177)\n\t('subject_text', 'melanomas')\n\t('subject_dist', 1)\n\t('subject_maxdist', 7)\n\t('subject_start_index', 459)\n\t('subject_end_index', 468)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 469)\n\t('predicate_end_index', 471)\n\t('object_text', 'Italian')\n\t('object_dist', 1)\n\t('object_maxdist', 2)\n\t('object_start_index', 475)\n\t('object_end_index', 482)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 17, 31, 29))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 9650\n('predication_id', 41383760)\n\t('sentence_id', 70870171)\n\t('index', None)\n\t('pmid', '11280422')\n\t('subject_source', None)\n\t('subject_cui', 'C0023014')\n\t('subject_name', 'Language Development Disorders')\n\t('subject_type', 'mobd')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0008059')\n\t('object_name', 'Child')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 70870171)\n\t('pmid', '11280422')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'Executive functioning was investigated in 34 children (24 boys and 10 girls) with developmental language disorder (DLD) and 21 children (18 boys and 3 girls) with high-functioning autistic disorder (HAD) matched on Full Scale IQ, Nonverbal IQ, age (mean age 9 year, 1 month), and SES.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 41383762)\n\t('predication_id', 41383760)\n\t('subject_text', 'developmental language disorder')\n\t('subject_dist', 1)\n\t('subject_maxdist', 13)\n\t('subject_start_index', 321)\n\t('subject_end_index', 352)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 310)\n\t('predicate_end_index', 314)\n\t('object_text', 'children')\n\t('object_dist', 3)\n\t('object_maxdist', 4)\n\t('object_start_index', 278)\n\t('object_end_index', 286)\n\t('object_score', 861)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 0, 32, 18))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 1355\n('predication_id', 110554069)\n\t('sentence_id', 206858901)\n\t('index', None)\n\t('pmid', '29892954')\n\t('subject_source', None)\n\t('subject_cui', 'C0686346')\n\t('subject_name', 'Gender dysphoria')\n\t('subject_type', 'mobd')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0683971')\n\t('object_name', 'General 
Population')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 206858901)\n\t('pmid', '29892954')\n\t('type', 'ab')\n\t('number', 4)\n\t('section_header', '')\n\t('sentence', 'Our data showed a higher incidence of cytogenetic alterations in Gender Dysphoria (2.65%) than in the general population (0.53%) (p < 0.0001).')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 110554069)\n\t('predication_id', 110554069)\n\t('subject_text', 'Gender Dysphoria')\n\t('subject_dist', 1)\n\t('subject_maxdist', 4)\n\t('subject_start_index', 944)\n\t('subject_end_index', 960)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 974)\n\t('predicate_end_index', 976)\n\t('object_text', 'general population')\n\t('object_dist', 1)\n\t('object_maxdist', 1)\n\t('object_start_index', 987)\n\t('object_end_index', 1005)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 8, 3, 15, 21, 43))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 7314\n('predication_id', 83961125)\n\t('sentence_id', 151358756)\n\t('index', None)\n\t('pmid', '22544363')\n\t('subject_source', None)\n\t('subject_cui', 'C0453879')\n\t('subject_name', 'Intellectual functioning disability')\n\t('subject_type', 'fndg')\n\t('subject_score', None)\n\t('predicate', 'COEXISTS_WITH')\n\t('object_source', None)\n\t('object_cui', 'C0559758')\n\t('object_name', 'Multisystem disorder')\n\t('object_type', 'dsyn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 151358756)\n\t('pmid', '22544363')\n\t('type', 'ab')\n\t('number', 1)\n\t('section_header', '')\n\t('sentence', 'We show that haploinsufficiency of KANSL1 is sufficient to cause the 17q21.31 microdeletion syndrome, a multisystem disorder characterized by intellectual disability, hypotonia and distinctive facial features.')\n\t('normalized_section_header', '')\n\nSCORE DATA:\n('predication_aux_id', 83961123)\n\t('predication_id', 83961125)\n\t('subject_text', 'intellectual disability')\n\t('subject_dist', 1)\n\t('subject_maxdist', 3)\n\t('subject_start_index', 447)\n\t('subject_end_index', 476)\n\t('subject_score', 1000)\n\t('indicator_type', 'VERB')\n\t('predicate_start_index', 430)\n\t('predicate_end_index', 443)\n\t('object_text', 'multisystem disorder')\n\t('object_dist', 1)\n\t('object_maxdist', 5)\n\t('object_start_index', 409)\n\t('object_end_index', 429)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 7, 3, 3, 8))\n--------------------------------------------------------------------------------\n\n\n\n\nPREDICTION 10234\n('predication_id', 63892475)\n\t('sentence_id', 111887778)\n\t('index', None)\n\t('pmid', '18682506')\n\t('subject_source', None)\n\t('subject_cui', 'C0026882')\n\t('subject_name', 'Mutation')\n\t('subject_type', 'genf')\n\t('subject_score', None)\n\t('predicate', 'PROCESS_OF')\n\t('object_source', None)\n\t('object_cui', 'C0030705')\n\t('object_name', 'Patients')\n\t('object_type', 'humn')\n\t('object_score', None)\n\t('belief', Decimal('0.8'))\n\t('sentence', None)\n\t('source', 'semmedb')\n\nSENTENCE DATA:\n('sentence_id', 111887778)\n\t('pmid', '18682506')\n\t('type', 'ab')\n\t('number', 7)\n\t('section_header', 'RESULTS')\n\t('sentence', 'RESULTS: The BRAF(V600E) mutation was found in 38 of 102 (37.3%) PTC patients, 
and was significantly more frequent in patients older than 60 yr (P = 0.02), in advanced stages (P = 0.03), and in cases with vascular invasion (P = 0.02).')\n\t('normalized_section_header', 'RESULTS')\n\nSCORE DATA:\n('predication_aux_id', 63892475)\n\t('predication_id', 63892475)\n\t('subject_text', 'mutation')\n\t('subject_dist', 3)\n\t('subject_maxdist', 6)\n\t('subject_start_index', 1101)\n\t('subject_end_index', 1109)\n\t('subject_score', 1000)\n\t('indicator_type', 'PREP')\n\t('predicate_start_index', 1197)\n\t('predicate_end_index', 1199)\n\t('object_text', 'patients')\n\t('object_dist', 1)\n\t('object_maxdist', 6)\n\t('object_start_index', 1200)\n\t('object_end_index', 1208)\n\t('object_score', 1000)\n\t('curr_timestamp', datetime.datetime(2018, 3, 6, 15, 20, 2))\n--------------------------------------------------------------------------------\n\n\n" ], [ "cursor.close()\ncnx.close()", "_____no_output_____" ], [ "[predication_id,sentence_id,index,pmid,subject_source,subject_cui,subject_name,subject_type,subject_score,predicate,object_source,object_cui,object_name,object_type,object_score,belief,sentence,source] = range(0,18)\nsummaryD = {\n 'subject_entities':0,\n 'subject_types':0,\n 'predicate_types':0,\n 'object_entities':0,\n 'object_types':0,\n }\nfor t in data:\n if not summaryD.get('s' + t[subject_cui]):\n summaryD['s' + t[subject_cui]] = True\n summaryD['subject_entities'] += 1\n if not summaryD.get('s' + t[subject_type]):\n summaryD['s' + t[subject_type]] = True\n summaryD['subject_types'] += 1\n if not summaryD.get(t[predicate]):\n summaryD[t[predicate]] = True\n summaryD['predicate_types'] += 1\n if not summaryD.get('o' + t[object_cui]):\n summaryD['o' + t[object_cui]] = True\n summaryD['object_entities'] += 1\n if not summaryD.get('o' + t[object_type]):\n summaryD['o' + t[object_type]] = True\n summaryD['object_types'] += 1\nprint('subject_entities: {}\\nsubject_types: {}\\npredicate_types: {}\\nobject_entities: {}\\nobject_types: {}'.format(summaryD['subject_entities'],summaryD['subject_types'],summaryD['predicate_types'],summaryD['object_entities'],summaryD['object_types'])) ", "subject_entities: 2137\nsubject_types: 95\npredicate_types: 45\nobject_entities: 1738\nobject_types: 91\n" ] ], [ [ "##### E4 - Answer the following questions (NOTE: you can read more about SemMedDB and how it was built here - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3509487/): ", "_____no_output_____" ], [ "1. Did you find any predications that make sense to you? Are they what you would expect for the domain of discourse you chose? List a few of them and explain why you think they make sense.\n\nANSWER: \n\n2. In your own words, what is a potential use for the semantic types assigned to the subject and object entities?\n\nANSWER: \n\n3. How are the predicate types different from the semantic types assigned to subject and object entities?\n\nANSWER: \n\n4. Did you find any predications that did not make sense? List them and explain why they do not make sense.\n\nANSWER: \n\n5. What is the NLP tool used to extract these predications? In a sentence or two, describe the approach that was used to train the NLP tool. Can you think of any limitations that the approach might place on the predication set you just queried?\n\nANSWER: \n\n6. Each predication has scores for the subject and object entities (see the score data) and a belief score (see the predication data). The belief score indicates a prior probability of the predication being correct given the sentence. 
What do the subject and object entity scores represent?\n \nANSWER: ", "_____no_output_____" ] ], [ [ "### The next cells use a forward chaining reasoner to apply simple closure to the predications extracted above", "_____no_output_____" ] ], [ [ "## Set up the CLIPS environment\nimport clips\nenv = clips.Environment()\n\nMIN_PREDICATION_BELIEF = 0.8\nMIN_TRANSITIVE_BELIEF = 0.6 # chosen because it retains most depth 1 transitive inferences over semmed (e.g. 0.8 * 0.8 = 0.64 >= 0.6)\n\n## NOTE: opening the file in 'w' mode clears test-inference.ntriples each time this cell is run\nf = open(\"test-inference.ntriples\",'w')\nf.close()", "_____no_output_____" ] ], [ [ "#### We set the rules up first. Look at the code and comments and then answer the questions below.", "_____no_output_____" ] ], [ [ "## First, we clear the working memory of the rule engine. This removes all Working Memory Elements and resets the\n## inference algorithm\nenv.clear()\nenv.reset()\n\n## We will be writing the triples to RDF. This file is opened by the rule engine in append mode\nenv.eval('(open \"test-inference.ntriples\" writeFile \"a\")')\n\n\n## We build a template/frame for object-attribute-value entities. There is a slot to track if the oav object is\n## based on inference. There is also a slot to track a belief score which accumulates uncertainty. The predNS\n## slot tracks the namespace of the oav entity, which helps with translating the results to RDF\nenv.build(\"\"\"\n(deftemplate oav\n  (slot object)\n  (slot attribute)\n  (slot value)\n  (slot predNS)\n  (slot inferred (default No))\n  (slot belief (default 0.0)))\n\"\"\")\n\n## Here is the first rule - transitive closure. Notice that it only applies to a subset of\n## predicate types - predisposes, causes, influence, precedes, and part_of.\n## These correspond to predicate properties defined in the Relation Ontology.\n## The 'test' ensures that the total belief score for the entity is above the threshold\n## set above\nenv.build(\"\"\"\n(defrule transitive\n  \"a simple transitivity rule\"\n  (oav (object ?o)\n       (attribute ?pred&:(member$ ?pred (create$ predisposes causes influence precedes part_of)))\n       (value ?s)\n       (predNS RO)\n       (inferred No)\n       (belief ?b1))\n  (oav (object ?s)\n       (attribute ?pred)\n       (value ?q)\n       (predNS RO)\n       (inferred No)\n       (belief ?b2))\n  (test (>= (* ?b1 ?b2) {}))\n  =>\n  (assert (oav (object ?o)\n               (attribute ?pred)\n               (value ?q)\n               (inferred Yes)\n               (predNS RO)\n               (belief (* ?b1 ?b2))))\n\n  (printout writeFile (format nil \"<http://dikb.org/cfc#%s> <http://purl.obolibrary.org/obo/%s> <http://dikb.org/cfc#%s>.%n\" ?o ?pred ?q))\n)\n\"\"\".format(MIN_TRANSITIVE_BELIEF))\n# NOTE: add this line to the RHS to see the belief scores:\n# (printout writeFile (format nil \"b1: %f, b2: %f, belief: %f>.%n\" ?b1 ?b2 (* ?b1 ?b2)))\n\n\n\n## Here is the second rule - symmetric closure. Like the first rule, it only operates on certain\n## predicate types\nenv.build(\"\"\"\n(defrule symmetric\n  \"a simple symmetry rule\"\n  (oav (object ?o)\n       (attribute ?pred&:(member$ ?pred (create$ interacts_with association associated_with coexists_with compared_with same_as)))\n       (value ?s)\n       (predNS RO)\n       (inferred No)\n       (belief ?b))\n  =>\n  (assert (oav (object ?s)\n               (attribute ?pred)\n               (value ?o)\n               (inferred Yes)\n               (predNS RO)\n               (belief ?b)))\n\n  (printout writeFile (format nil \"<http://dikb.org/cfc#%s> <http://purl.obolibrary.org/obo/%s> <http://dikb.org/cfc#%s>.%n\" ?o ?pred ?s))\n)\n\"\"\")\n", "_____no_output_____" ], 
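[ "## A toy sketch (illustrative only; geneX/pathwayY/diseaseZ are made-up identifiers): 'forward chaining'\n## means the engine repeatedly matches rule LHS patterns against the facts in working memory and asserts\n## each rule's RHS conclusions as new facts, stopping only when no rule can derive anything new.\n## A separate Environment is used here so the 'env' built above is untouched.\ntoy = clips.Environment()\ntoy.build(\"(deftemplate rel (slot s) (slot p) (slot o))\")\ntoy.build(\"\"\"\n(defrule toy-transitive\n  (rel (s ?a) (p causes) (o ?b))\n  (rel (s ?b) (p causes) (o ?c))\n  =>\n  (assert (rel (s ?a) (p causes) (o ?c))))\n\"\"\")\ntoy.assert_string(\"(rel (s geneX) (p causes) (o pathwayY))\")\ntoy.assert_string(\"(rel (s pathwayY) (p causes) (o diseaseZ))\")\ntoy.run()\n# the run derives and asserts (rel (s geneX) (p causes) (o diseaseZ)); printing shows all three facts\nfor fact in toy.facts():\n    print(fact)", "_____no_output_____" ] ], 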
[ [ "**E5** - Questions about the closure rules:\n\n1. What happens to working memory when the RHS of the rule (the 'assert' statement) triggers?\n\nANSWER: \n\n2. The reasoner will run the rules over the data in working memory (loaded by the cells below) using 'forward chaining'. Describe what that actually means. \n\nANSWER:\n\n\n3. What has to be true for reasoning to come to a halt?\n\nANSWER:\n\n\n4. Symmetric and transitive closure are very simple forms of logical inference. Can you think of additional rules that would help to ensure that the resulting knowledge base contains additional entailed facts of interest? \n\nANSWER: \n\n5. COMMENT: One of the advantages of using a rule-based programming approach to conduct graph closure is that the rules can be very modular, and little or no thought needs to be put into how the rules interact with each other. It is difficult to see that in this toy example, but imagine that there exist hundreds of rules applying various other inferences in addition to these closures (e.g. by applying valid argument forms and use-case-specific reasoning). In that case, the rules could be maintained separately from the code as declarative knowledge objects. The principle that the rules should be modular becomes a great advantage for growing and maintaining the system. \n\n", "_____no_output_____" ] ], [ [ "### The next cells parse the SemMedDB predications and load them as facts into working memory", "_____no_output_____" ] ], [ [ "originalTriplesF = open('original-triples.ntriples','w')\n\nresourceD = {}\nresourceDinv = {}\nrcnt = 0\nfctStrD = {}\nsemTypeD = {}\nlabelsD = {}\n\n[PREDICATION_ID,SENTENCE_ID,INDEX,PMID,SUBJECT_SOURCE,SUBJECT_CUI,SUBJECT_NAME,SUBJECT_TYPE,SUBJECT_SCORE,PREDICATE,OBJECT_SOURCE,OBJECT_CUI,OBJECT_NAME,OBJECT_TYPE,OBJECT_SCORE,BELIEF,SENTENCE,SOURCE] = range(0,18)\n\n## Uses the serialized predications from the cells above\nf = open('semmedb-results.pickle','rb')\nstmtTpls = pickle.load(f)\nf.close()\n\nfor stmtSplt in stmtTpls:\n\n    if len(stmtSplt) != 18:\n        continue\n\n    if float(stmtSplt[BELIEF]) < MIN_PREDICATION_BELIEF:\n        continue\n\n    (subj, pred, obj) = (stmtSplt[SUBJECT_CUI],\n                         stmtSplt[PREDICATE].lower().strip(),\n                         stmtSplt[OBJECT_CUI])\n\n    # Subj and obj need to be validly formatted CUIs\n    if subj == None or obj == None or subj == \"\" or obj == \"\" or subj[0] != 'C' or obj[0] != 'C':\n        continue\n\n    # only write out and/or do inference over some predicates\n    if pred not in ['affects','associated_with','augments',\n                    'causes','coexists_with','complicates',\n                    'disrupts','inhibits','interacts_with',\n                    'part_of','precedes', 'predisposes','prevents',\n                    'treats',\n                    'association',\n                    'influence',\n                    'modification',\n                    'regulateactivity',\n                    'regulateamount',\n                    'acetylation',\n                    'deacetylation',\n                    'defarnesylation',\n                    'degeranylgeranylation',\n                    'deglycosylation',\n                    'dehydroxylation',\n                    'demethylation',\n                    'demyristoylation',\n                    'depalmitoylation',\n                    'dephosphorylation',\n                    'deribosylation',\n                    'desumoylation',\n                    'deubiquitination',\n                    'farnesylation',\n                    'geranylgeranylation',\n                    'glycosylation',\n                    'hydroxylation',\n                    'methylation',\n                    'myristoylation',\n                    'palmitoylation',\n                    'phosphorylation',\n                    'ribosylation',\n                    'sumoylation',\n                    'ubiquitination',\n                    'activation','inhibition','increaseamount','decreaseamount'\n                    ]:\n        continue\n\n    # write the retained original triple to file (only predicates kept by the filter above reach this point)\n    originalTriplesF.write(\"<http://dikb.org/cfc#{}> <http://purl.obolibrary.org/obo/{}> <http://dikb.org/cfc#{}>.\\n\".format(subj,pred,obj))\n\n    # Track the subject and object names\n    subjName = 
stmtSplt[SUBJECT_NAME]\n    objName = stmtSplt[OBJECT_NAME]\n    if not labelsD.get(subj):\n        labelsD[subj] = subjName\n\n    if not labelsD.get(obj):\n        labelsD[obj] = objName\n\n    # Track the semantic types\n    semTypesStr = stmtSplt[SUBJECT_TYPE]\n    if semTypesStr.find('[') == -1:\n        if not semTypeD.get(subj):\n            semTypeD[subj] = [semTypesStr.strip()]\n        else:\n            semTypeD[subj].append(semTypesStr.strip())\n    else:\n        semTypesStr = semTypesStr.replace(\"'\",'').replace('[','').replace(']','')\n        semTypesL = [x.strip() for x in semTypesStr.split(',')]\n        if not semTypeD.get(subj):\n            semTypeD[subj] = semTypesL\n        else:\n            semTypeD[subj] = semTypeD[subj] + semTypesL\n\n    semTypesStr = stmtSplt[OBJECT_TYPE]\n    if semTypesStr.find('[') == -1:\n        if not semTypeD.get(obj):\n            semTypeD[obj] = [semTypesStr.strip()]\n        else:\n            semTypeD[obj].append(semTypesStr.strip())\n    else:\n        semTypesStr = semTypesStr.replace(\"'\",'').replace('[','').replace(']','')\n        semTypesL = [x.strip() for x in semTypesStr.split(',')]\n        if not semTypeD.get(obj):\n            semTypeD[obj] = semTypesL\n        else:\n            semTypeD[obj] = semTypeD[obj] + semTypesL\n\n    if not resourceD.get(subj):\n        resourceD[subj] = 'r{}'.format(rcnt)\n        resourceDinv['r{}'.format(rcnt)] = subj\n        rcnt += 1\n\n    if not resourceD.get(obj):\n        resourceD[obj] = 'r{}'.format(rcnt)\n        resourceDinv['r{}'.format(rcnt)] = obj\n        rcnt += 1\n\n    fctStr = \"\"\"\n(oav (object {})\n     (attribute {})\n     (value {})\n     (predNS {})\n     (belief {})\n)\"\"\".format(resourceD[subj], pred, resourceD[obj], 'RO', float(stmtSplt[BELIEF]))\n\n    if not fctStrD.get(fctStr):\n        env.assert_string(fctStr)\n        fctStrD[fctStr] = 1\n\n# write the human readable labels as triples\nfor (e,st) in labelsD.items():\n    if st:\n        originalTriplesF.write('<http://dikb.org/cfc#{}> <http://www.w3.org/2000/01/rdf-schema#label> \"{}\".\\n'.format(e,st.replace('\"','')))\n\n# Write the semantic types as triples\nfor (e,st) in semTypeD.items():\n    if st:\n        stSet = set(st)\n        for elt in stSet:\n            originalTriplesF.write(\"<http://dikb.org/cfc#{}> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://umls.org/st/#{}>.\\n\".format(e,elt.replace('\"','')))\n\noriginalTriplesF.close()", "_____no_output_____" ] ], [ [ "#### This shows how many predications were loaded into working memory from the original set of predications after filtering out predicate types that we don't want in the knowledge graph", "_____no_output_____" ] ], [ [ "len(fctStrD.keys())", "_____no_output_____" ] ], [ [ "#### The next cell shows how the data loaded into working memory is formatted", "_____no_output_____" ] ], [ [ "i = 0\nfor fact in env.facts():\n    print(fact)\n    if i == 20:\n        break\n    i += 1", "(initial-fact)\n(oav (object r0) (attribute part_of) (value r1) (predNS RO) (inferred No) (belief 0.8))\n(oav (object r2) (attribute associated_with) (value r3) (predNS RO) (inferred No) (belief 0.8))\n(oav (object r0) (attribute causes) (value r3) (predNS RO) (inferred No) (belief 0.8))\n(oav (object r2) (attribute associated_with) (value r4) (predNS RO) (inferred No) (belief 0.8))\n(oav (object r2) (attribute associated_with) (value r5) (predNS RO) (inferred No) (belief 0.8))\n(oav (object r6) (attribute coexists_with) (value r7) (predNS RO) (inferred No) (belief 0.8))\n(oav (object r4) (attribute coexists_with) (value r7) (predNS RO) (inferred No) (belief 0.8))\n(oav (object r8) (attribute causes) (value r9) (predNS RO) (inferred No) (belief 0.8))\n(oav (object r10) (attribute causes) (value r7) (predNS RO) (inferred No) (belief 
0.8))\nf-10 (oav (object r11) (attribute affects) (value r12) (predNS RO) (inferred No) (belief 0.8))\nf-11 (oav (object r11) (attribute associated_with) (value r7) (predNS RO) (inferred No) (belief 0.8))\nf-12 (oav (object r11) (attribute part_of) (value r13) (predNS RO) (inferred No) (belief 0.8))\nf-13 (oav (object r4) (attribute part_of) (value r1) (predNS RO) (inferred No) (belief 0.8))\nf-14 (oav (object r14) (attribute part_of) (value r13) (predNS RO) (inferred No) (belief 0.8))\nf-15 (oav (object r4) (attribute part_of) (value r13) (predNS RO) (inferred No) (belief 0.8))\nf-16 (oav (object r15) (attribute coexists_with) (value r16) (predNS RO) (inferred No) (belief 0.8))\nf-17 (oav (object r17) (attribute coexists_with) (value r18) (predNS RO) (inferred No) (belief 0.8))\nf-18 (oav (object r17) (attribute coexists_with) (value r19) (predNS RO) (inferred No) (belief 0.8))\nf-19 (oav (object r17) (attribute coexists_with) (value r20) (predNS RO) (inferred No) (belief 0.8))\nf-20 (oav (object r12) (attribute affects) (value r21) (predNS RO) (inferred No) (belief 0.8))\n" ] ], [ [ "#### The next cell runs the inference algorithm over the predications loaded into working memory. The count that is printed is the number of inferences made by the reasoner", "_____no_output_____" ] ], [ [ "env.run() ", "_____no_output_____" ], [ "env.eval(\"close(writeFile)\")", "_____no_output_____" ] ], [ [ "### The next cells normalize the identifiers and reformat the original and inferred predications while translating the newly extracted knowledge statements to RDF", "_____no_output_____" ], [ "The reason for this step is that we plan to integrate the knowledge extracted from the literature in SemMedDB with a much larger ontology-based knowledge graph. The semantics of that larger system are based on OBO-Foundry ontologies and the data format is RDF. This is not always a requirement. There are other systems for representing knowledge in graph form, including Neo4J and Grakn, which we do not have time to cover in class.", "_____no_output_____" ] ], [ [ 
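"## An illustrative sketch of the target RDF form (the CUIs and the RO term below are example values\n## drawn from the data above, not a definitive mapping): one predication becomes a single\n## subject-predicate-object triple under the cfc and obo namespaces.\nimport rdflib\ncfc = rdflib.Namespace('http://dikb.org/cfc#')\nobo = rdflib.Namespace('http://purl.obolibrary.org/obo/')\nexampleG = rdflib.Graph()\nexampleG.add((cfc['C0812215'], obo['RO_0002213'], cfc['C0600388']))\nfor (s, p, o) in exampleG:\n    print(s, p, o)", "_____no_output_____" ], [ 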
"# predicate to RO mapping\npredMapD = {\n    'affects':'RO_0002211',\n    'associated_with':'RO_0002610',\n    'augments':'augments',\n    'causes':'RO_0002501',\n    'coexists_with':'RO_0002490',\n    'complicates':'complicates',\n    'disrupts':'disrupts',\n    'inhibits':'RO_0002212',\n    'interacts_with':'RO_0002434',\n    'part_of':'BFO_0000050',\n    'precedes':'BFO_0000063',\n    'predisposes':'RO_0003302',\n    'prevents':'RO_0002599',\n    'stimulates':'RO_0002213',\n    'treats':'RO_0002606',\n    'influence':'RO_0002501',\n    'association':'RO_0002610',\n    'modification':'RO_0002436',\n    'regulateactivity':'RO_0002436',\n    'regulateamount':'RO_0002436',\n    'phosphorylation':'RO_0002447',\n    'dephosphorylation':'RO_0002436',\n    'ubiquitination':'RO_0002436',\n    'deubiquitination':'RO_0002436',\n    'sumoylation':'RO_0002436',\n    'desumoylation':'RO_0002436',\n    'hydroxylation':'RO_0002436',\n    'dehydroxylation':'RO_0002436',\n    'acetylation':'RO_0002436',\n    'deacetylation':'RO_0002436',\n    'glycosylation':'RO_0002436',\n    'deglycosylation':'RO_0002436',\n    'farnesylation':'RO_0002436',\n    'defarnesylation':'RO_0002436',\n    'geranylgeranylation':'RO_0002436',\n    'degeranylgeranylation':'RO_0002436',\n    'palmitoylation':'RO_0002436',\n    'depalmitoylation':'RO_0002436',\n    'myristoylation':'RO_0002436',\n    'demyristoylation':'RO_0002436',\n    'ribosylation':'RO_0002436',\n    'deribosylation':'RO_0002436',\n    'methylation':'RO_0002436',\n    'demethylation':'RO_0002436',\n    'activation':'RO_0002429',\n    'inhibition':'RO_0002212',\n    'increaseamount':'RO_0002429',\n    'decreaseamount':'RO_0002212'\n}\n\nimport re\n\nf = open('test-inference.ntriples','r')\nbuf = f.read()\nf.close()\nrsL = buf.split('\\n')\n\n# rgx.findall returns (uri, predicate) pairs, e.g. [('http://purl.obolibrary.org/obo/', 'causes')]\nrgx = re.compile('(http://purl.obolibrary.org/obo/)([a-z_]+)')\nfor i in range(0,len(rsL)):\n    if rsL[i] == \"\":\n        continue\n\n    ml = rgx.findall(rsL[i])\n    if len(ml) != 1:\n        print('ERROR: could not match on predicate regex: {}'.format(rsL[i]))\n        continue\n\n    (uri,predicate) = ml[0]\n    rsL[i] = rsL[i].replace(predicate, predMapD[predicate])\n\n\nf = open('inferred-transitive-and-symmetric.ntriples','w')\nrgx = re.compile('#(r[0-9]+)')\nfor it in rsL:\n    keyL = rgx.findall(it)\n    newTr = it\n    for k in keyL:\n        if resourceDinv.get(k):\n            newTr = newTr.replace('http://dikb.org/cfc#' + k, 'http://dikb.org/cfc#' + resourceDinv[k])\n        else:\n            print('ERROR: key not found in resourceDinv: {}'.format(k))\n    f.write(newTr + '\\n')\nf.close()\n", "_____no_output_____" ], [ "# Do the same RO mapping for the original predications\nf = open('original-triples.ntriples','r')\nbuf = f.read()\nf.close()\nrsL = buf.split('\\n')\n\nrgx = re.compile('(http://purl.obolibrary.org/obo/)([a-z_]+)')\nf = open('original-triples-RO-mapped.ntriples','w')\nfor i in range(0,len(rsL)):\n    if rsL[i] == \"\":\n        continue\n\n    if rsL[i].find('<http://www.w3.org/2000/01/rdf-schema#label>') != -1 or rsL[i].find('<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>') != -1:\n        f.write(rsL[i] + '\\n')\n        continue\n\n    ml = rgx.findall(rsL[i])\n    if len(ml) != 1:\n        print('ERROR: could not match on predicate regex: {}'.format(rsL[i]))\n        continue\n\n    (uri,predicate) = ml[0]\n    f.write(rsL[i].replace(predicate, predMapD[predicate]) + '\\n')\nf.close()", "_____no_output_____" ] ], [ [ "### Running SPARQL over the original and inferred triples", "_____no_output_____" ] ], [ [ 
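"## A quick sanity check (an illustrative sketch; it assumes the two ntriples files were written by the\n## cells above): count the lines in each file and preview a few triples before loading them into rdflib.\nfor fn in ['original-triples-RO-mapped.ntriples', 'inferred-transitive-and-symmetric.ntriples']:\n    with open(fn) as fh:\n        tripleL = [ln for ln in fh.read().split('\\n') if ln != '']\n    print('{}: {} lines'.format(fn, len(tripleL)))\n    for ln in tripleL[:3]:\n        print('    ', ln)", "_____no_output_____" ], [ 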
"## We load the original extracted predicates (filtered as above) into an \n## in-memory triple store. Notice that we load RO as well.\n\nimport rdflib", "_____no_output_____" ], [ "## The SPARQL query just pulls up to 100 predications and \n## the human readable labels. The output \ng = rdflib.Graph()\ng.parse(\"original-triples-RO-mapped.ntriples\", format='nt')\nprint('Triple count after loading original-triples-RO-mapped.ntriples: {}'.format(len(g)))\ng.parse(\"ro_with_imports_AD_mods.owl\", format='xml')\n\nqres = g.query(\"\"\"\nSELECT ?s_lab ?p_lab ?o_lab ?s_type ?o_type ?s ?p ?o\nWHERE {{\n ?s ?p ?o.\n ?s rdfs:label ?s_lab.\n ?s rdf:type ?s_type.\n OPTIONAL{?p rdfs:label ?p_lab.}\n ?o rdfs:label ?o_lab.\n ?o rdf:type ?o_type.\n }}\nLIMIT 100\n\"\"\")\n\nfor row in qres:\n if row[5].find('cfc') == -1:\n continue\n print(\"\\n\\nTRIPLE:\\n{}\".format('\\n'.join([str(x) for x in row])))", "Triple count after loading original-triples-RO-mapped.ntriples: 6484\n\n\nTRIPLE:\nCryptorchidism\nexistence overlaps\nNeck webbing\nhttp://umls.org/st/#cgab\nhttp://umls.org/st/#cgab\nhttp://dikb.org/cfc#C0010417\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0221217\n\n\nTRIPLE:\nCryptorchidism\nexistence overlaps\nBleeding tendency\nhttp://umls.org/st/#cgab\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0010417\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C1458140\n\n\nTRIPLE:\nCryptorchidism\nexistence overlaps\nChest deformity\nhttp://umls.org/st/#cgab\nhttp://umls.org/st/#fndg\nhttp://dikb.org/cfc#C0010417\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0238983\n\n\nTRIPLE:\nHyperextension\ncorrelated with\nCostello syndrome\nhttp://umls.org/st/#sosy\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0231451\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0587248\n\n\nTRIPLE:\nTissue specimen\npart of\nMalignant melanoma of conjunctiva\nhttp://umls.org/st/#tisu\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1292533\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0346360\n\n\nTRIPLE:\nPosterior Fossa\npart of\nBrain\nhttp://umls.org/st/#bpoc\nhttp://umls.org/st/#bpoc\nhttp://dikb.org/cfc#C1116439\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0006104\n\n\nTRIPLE:\nMAP Kinase Kinase Kinase\nexistence overlaps\nJUN gene|JUN\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0211705\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0085942\n\n\nTRIPLE:\nMAP Kinase Kinase Kinase\nexistence overlaps\nReactive Oxygen Species\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#bacs\nhttp://dikb.org/cfc#C0211705\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0162772\n\n\nTRIPLE:\nAging\nregulates\nGrowth\nhttp://umls.org/st/#orgf\nhttp://umls.org/st/#orgf\nhttp://dikb.org/cfc#C0001811\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0018270\n\n\nTRIPLE:\nAging\nregulates\nCell Cycle\nhttp://umls.org/st/#orgf\nhttp://umls.org/st/#celf\nhttp://dikb.org/cfc#C0001811\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0007586\n\n\nTRIPLE:\nVirus Diseases\nprecedes\n[X]Acute myocarditis, unspecified\nhttp://umls.org/st/#dsyn\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0042769\nhttp://purl.obolibrary.org/obo/BFO_0000063\nhttp://dikb.org/cfc#C0155686\n\n\nTRIPLE:\nFLG gene|FLG\ncorrelated with\nDermatitis, 
Atopic\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C1414633\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0011615\n\n\nTRIPLE:\nEpidermis\npart of\nHuman\nhttp://umls.org/st/#tisu\nhttp://umls.org/st/#humn\nhttp://dikb.org/cfc#C0014520\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0020114\n\n\nTRIPLE:\nMultiple Lentigines\nexistence overlaps\nCongenital heart disease\nhttp://umls.org/st/#dsyn\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C1328931\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0152021\n\n\nTRIPLE:\nfraxetin\nNone\nReceptor Binding\nhttp://umls.org/st/#orch\nhttp://umls.org/st/#moft\nhttp://dikb.org/cfc#C0538398\nhttp://purl.obolibrary.org/obo/disrupts\nhttp://dikb.org/cfc#C0597358\n\n\nTRIPLE:\nPROTEIN KINASE\nis substance that treats\nMalignant neoplasm of lung\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0033640\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0242379\n\n\nTRIPLE:\nPROTEIN KINASE\ncausal relation between processes\nCell Death\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#celf\nhttp://dikb.org/cfc#C0033640\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0007587\n\n\nTRIPLE:\nPROTEIN KINASE\ncausal relation between processes\nAutophagy\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#celf\nhttp://dikb.org/cfc#C0033640\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0004391\n\n\nTRIPLE:\nPROTEIN KINASE\ninteracts with\nPharmaceutical Preparations\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#phsu\nhttp://dikb.org/cfc#C0033640\nhttp://purl.obolibrary.org/obo/RO_0002434\nhttp://dikb.org/cfc#C0013227\n\n\nTRIPLE:\nDNA, Mitochondrial\nexistence overlaps\nGenes, Mitochondrial\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0012929\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C1563761\n\n\nTRIPLE:\nDNA, Mitochondrial\ncorrelated with\nCarcinogenesis\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0012929\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0596263\n\n\nTRIPLE:\nDNA, Mitochondrial\ncorrelated with\nCommon Neoplasm\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0012929\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C1333132\n\n\nTRIPLE:\nTLR3 gene|TLR3\npart of\nHuman\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#humn\nhttp://dikb.org/cfc#C1336635\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0020114\n\n\nTRIPLE:\nMPZL1 protein, human|MPZL1\nregulates\nExtracellular Matrix\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#tisu\nhttp://dikb.org/cfc#C1334837\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0015350\n" ], [ "## We load the original AND inferred predications and run the same query. 
The print statement filters \n## out triples we have seen before from the orignal predications.\ng = rdflib.Graph()\ng.parse(\"original-triples-RO-mapped.ntriples\", format='nt')\ng.parse(\"inferred-transitive-and-symmetric.ntriples\", format='nt')\nprint('Triple count after loading original and inferred-transitive-and-symmetric.ntriples: {}'.format(len(g)))\ng.parse(\"ro_with_imports_AD_mods.owl\", format='xml')\n\nqres2 = g.query(\"\"\"\nSELECT ?s_lab ?p_lab ?o_lab ?s_type ?o_type ?s ?p ?o\nWHERE {{\n ?s ?p ?o.\n ?s rdfs:label ?s_lab.\n ?s rdf:type ?s_type.\n OPTIONAL{?p rdfs:label ?p_lab.}\n ?o rdfs:label ?o_lab.\n ?o rdf:type ?o_type.\n }}\nLIMIT 200\n\"\"\")\n\nfor row in qres2:\n if row[5].find('cfc') == -1 or row in qres:\n continue\n print(\"\\n\\nTRIPLE:\\n{}\".format('\\n'.join([str(x) for x in row])))", "Triple count after loading original and inferred-transitive-and-symmetric.ntriples: 7234\n\n\nTRIPLE:\nAcid Phosphatase\nnegatively regulates\nCRK protein, human|CRK\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0001109\nhttp://purl.obolibrary.org/obo/RO_0002212\nhttp://dikb.org/cfc#C1451465\n\n\nTRIPLE:\nAssessment procedure\nis substance that treats\nParticipant\nhttp://umls.org/st/#hlca\nhttp://umls.org/st/#humn\nhttp://dikb.org/cfc#C1261322\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0679646\n\n\nTRIPLE:\nAssessment procedure\nis substance that treats\nCostello syndrome\nhttp://umls.org/st/#hlca\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C1261322\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0587248\n\n\nTRIPLE:\nBreast\npart of\nHuman\nhttp://umls.org/st/#bpoc\nhttp://umls.org/st/#humn\nhttp://dikb.org/cfc#C0006141\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0020114\n\n\nTRIPLE:\nCombined Modality Therapy\nis substance that treats\nChronic Obstructive Airway Disease\nhttp://umls.org/st/#topp\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0009429\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0024117\n\n\nTRIPLE:\nCombined Modality Therapy\nis substance that treats\nMalignant neoplasm of thyroid\nhttp://umls.org/st/#topp\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0009429\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0007115\n\n\nTRIPLE:\nMLLT7 gene|FOXO4\nregulates\nSenility\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#fndg\nhttp://dikb.org/cfc#C1334518\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0231337\n\n\nTRIPLE:\nDown-Regulation\nexistence overlaps\nCell Cycle Arrest\nhttp://umls.org/st/#celf\nhttp://umls.org/st/#celf\nhttp://dikb.org/cfc#C0013081\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C1155873\n\n\nTRIPLE:\nDown-Regulation\nexistence overlaps\nSkin Papilloma\nhttp://umls.org/st/#celf\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0013081\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0347390\n\n\nTRIPLE:\nDown-Regulation\nexistence overlaps\nNeoplasm progression\nhttp://umls.org/st/#celf\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0013081\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0178874\n\n\nTRIPLE:\nB-Cell Lymphomas\nexistence overlaps\nT-cell childhood acute lymphoblastic leukemia\nhttp://umls.org/st/#neop\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0079731\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0279583\n\n\nTRIPLE:\nProto-Oncogenes\ncorrelated with\nSolid 
tumor\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0033713\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0280100\n\n\nTRIPLE:\nProto-Oncogenes\npart of\nVertebrates\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#vtbt\nhttp://dikb.org/cfc#C0033713\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0042567\n\n\nTRIPLE:\nHuman papilloma virus infection\nexistence overlaps\nSquamous cell carcinoma\nhttp://umls.org/st/#dsyn\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0343641\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0007137\n\n\nTRIPLE:\nMAP Kinase Gene\ncausal relation between processes\nCessation of life\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#orgf\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0011065\n\n\nTRIPLE:\nMAP Kinase Gene\ncausal relation between processes\nAutistic Disorder\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#mobd\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0004352\n\n\nTRIPLE:\nMAP Kinase Gene\ncorrelated with\nDysmyelopoietic Syndromes\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0026986\n\n\nTRIPLE:\nMAP Kinase Gene\nNone\nMutation\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#genf\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/augments\nhttp://dikb.org/cfc#C0026882\n\n\nTRIPLE:\nMAP Kinase Gene\nregulates\nHuman Development\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#phsf\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0020119\n\n\nTRIPLE:\nMAP Kinase Gene\ncausal relation between processes\nFacial swelling\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#sosy\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0151602\n\n\nTRIPLE:\nMAP Kinase Gene\ncorrelated with\nCarcinoma of the Large Intestine\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0009402\n\n\nTRIPLE:\nMAP Kinase Gene\nregulates\nGrowth\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#orgf\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0018270\n\n\nTRIPLE:\nMAP Kinase Gene\ninteracts with\nN-ras Genes|NRAS\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002434\nhttp://dikb.org/cfc#C0027260\n\n\nTRIPLE:\nMAP Kinase Gene\nexistence overlaps\nPTPN11 gene|PTPN11\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1370600\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\nPoint Mutation\ncausal relation between processes\nProgeria\nhttp://umls.org/st/#genf\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0162735\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0033300\n\n\nTRIPLE:\nWiskott-Aldrich Syndrome Protein\ninteracts with\nGuanosine Diphosphate\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#bacs\nhttp://dikb.org/cfc#C0258432\nhttp://purl.obolibrary.org/obo/RO_0002434\nhttp://dikb.org/cfc#C0018340\n\n\nTRIPLE:\nFibrosis\nexistence overlaps\nHypertrophic 
Cardiomyopathy\nhttp://umls.org/st/#patf\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0016059\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0007194\n\n\nTRIPLE:\nFibrosis\ncausal relation between processes\nDeath, Sudden\nhttp://umls.org/st/#patf\nhttp://umls.org/st/#patf\nhttp://dikb.org/cfc#C0016059\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0011071\n\n\nTRIPLE:\nFibrosis\nregulates\nHypertrophic Cardiomyopathy\nhttp://umls.org/st/#patf\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0016059\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0007194\n\n\nTRIPLE:\nFibrosis\ncausal relation between processes\nHypertrophic Cardiomyopathy\nhttp://umls.org/st/#patf\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0016059\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0007194\n\n\nTRIPLE:\nFibrosis\nregulates\nHeart Diseases\nhttp://umls.org/st/#patf\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0016059\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0018799\n\n\nTRIPLE:\nKDR gene|KDR\nnegatively regulates\nsorafenib\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#orch\nhttp://dikb.org/cfc#C1334306\nhttp://purl.obolibrary.org/obo/RO_0002212\nhttp://dikb.org/cfc#C1516119\n\n\nTRIPLE:\nPremature aging syndrome\nexistence overlaps\nRare Diseases\nhttp://umls.org/st/#dsyn\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0231341\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0678236\n\n\nTRIPLE:\nsorafenib\nis substance that treats\nmelanoma\nhttp://umls.org/st/#orch\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1516119\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0025202\n\n\nTRIPLE:\nsorafenib\ncausal relation between processes\nSquamous cell carcinoma\nhttp://umls.org/st/#orch\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1516119\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0007137\n\n\nTRIPLE:\nsorafenib\nNone\nGrowth\nhttp://umls.org/st/#orch\nhttp://umls.org/st/#orgf\nhttp://dikb.org/cfc#C1516119\nhttp://purl.obolibrary.org/obo/disrupts\nhttp://dikb.org/cfc#C0018270\n\n\nTRIPLE:\nsorafenib\nis substance that treats\nAdvanced cancer\nhttp://umls.org/st/#orch\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1516119\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0877373\n\n\nTRIPLE:\nFatty-acid synthase\nnegatively regulates\nMembrane Transport Proteins\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0015683\nhttp://purl.obolibrary.org/obo/RO_0002212\nhttp://dikb.org/cfc#C0596902\n\n\nTRIPLE:\nRAF1 gene|RAF1\ninteracts with\nExtracellular Signal Regulated Kinases\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0002434\nhttp://dikb.org/cfc#C0600388\n\n\nTRIPLE:\nRAF1 gene|RAF1\nexistence overlaps\nN-ras Genes|NRAS\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0027260\n\n\nTRIPLE:\nRAF1 gene|RAF1\nregulates\nmetaplastic cell transformation\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#patf\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C1510411\n\n\nTRIPLE:\nRAF1 gene|RAF1\ncausal relation between processes\nGlioma\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0017638\n\n\nTRIPLE:\nRAF1 
gene|RAF1\ncauses or contributes to condition\nleukemia\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0003302\nhttp://dikb.org/cfc#C0023418\n\n\nTRIPLE:\nRAF1 gene|RAF1\npart of\nMus\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#mamm\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0026809\n\n\nTRIPLE:\nRAF1 gene|RAF1\ncorrelated with\nHypertrophic Cardiomyopathy\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0007194\n\n\nTRIPLE:\nRAF1 gene|RAF1\nNone\nGlioblastoma\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/augments\nhttp://dikb.org/cfc#C0017636\n\n\nTRIPLE:\nRAF1 gene|RAF1\ncausal relation between processes\nCardiomyopathy, Dilated\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0007193\n\n\nTRIPLE:\nRAF1 gene|RAF1\ncausal relation between processes\nCardiac Hypertrophy\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#patf\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C1383860\n\n\nTRIPLE:\nRAF1 gene|RAF1\nNone\nMAP kinase kinase activity\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#moft\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/augments\nhttp://dikb.org/cfc#C1150604\n\n\nTRIPLE:\nRAF1 gene|RAF1\ncauses or contributes to condition\nHypertrophic Cardiomyopathy\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0812215\nhttp://purl.obolibrary.org/obo/RO_0003302\nhttp://dikb.org/cfc#C0007194\n\n\nTRIPLE:\nHydroxyproline\npart of\nCartilage\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#tisu\nhttp://dikb.org/cfc#C0020388\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0007301\n\n\nTRIPLE:\nHydroxyproline\npart of\nCollagen\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0020388\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0009325\n\n\nTRIPLE:\nHydroxyproline\npart of\npyridinoline\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0020388\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0072690\n\n\nTRIPLE:\nHIF1alpha protein\nNone\nPlacenta\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#emst\nhttp://dikb.org/cfc#C0965644\nhttp://purl.obolibrary.org/obo/disrupts\nhttp://dikb.org/cfc#C0032043\n\n\nTRIPLE:\nParaneoplastic Syndromes\npart of\nEpidermis\nhttp://umls.org/st/#neop\nhttp://umls.org/st/#tisu\nhttp://dikb.org/cfc#C0030472\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0014520\n\n\nTRIPLE:\nParaneoplastic Syndromes\npart of\nHuman\nhttp://umls.org/st/#neop\nhttp://umls.org/st/#humn\nhttp://dikb.org/cfc#C0030472\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0020114\n\n\nTRIPLE:\nSTX17 gene|STX17\ncorrelated with\n[X]Malignant melanoma of skin, unspecified\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1420497\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0151779\n\n\nTRIPLE:\nEmbryonic Mosaic\ncausal relation between processes\nFacial swelling\nhttp://umls.org/st/#comd\nhttp://umls.org/st/#sosy\nhttp://dikb.org/cfc#C0392053\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0151602\n\n\nTRIPLE:\nEmbryonic 
Mosaic\nexistence overlaps\nCostello syndrome\nhttp://umls.org/st/#comd\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0392053\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0587248\n\n\nTRIPLE:\nOrnithine Decarboxylase Inhibitor\nis substance that treats\nMalignant neoplasm of skin\nhttp://umls.org/st/#phsu\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1518618\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0007114\n\n\nTRIPLE:\nFragile skin\nexistence overlaps\nEhlers-Danlos syndrome, hydroxylysine-deficient\nhttp://umls.org/st/#fndg\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0241181\nhttp://purl.obolibrary.org/obo/RO_0002490\nhttp://dikb.org/cfc#C0268342\n\n\nTRIPLE:\nSirolimus (substance)\nis substance that treats\nHypertrophic Cardiomyopathy\nhttp://umls.org/st/#antb\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0072980\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C0007194\n\n\nTRIPLE:\nSirolimus (substance)\nis substance that treats\nMultiple Lentigines\nhttp://umls.org/st/#antb\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C0072980\nhttp://purl.obolibrary.org/obo/RO_0002606\nhttp://dikb.org/cfc#C1328931\n\n\nTRIPLE:\nSirolimus (substance)\nregulates\nTranslation, Genetic\nhttp://umls.org/st/#antb\nhttp://umls.org/st/#genf\nhttp://dikb.org/cfc#C0072980\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0040711\n\n\nTRIPLE:\n9p21\ncorrelated with\nTumor ulceration\nhttp://umls.org/st/#nusq\nhttp://umls.org/st/#dsyn\nhttp://dikb.org/cfc#C1515498\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0919645\n\n\nTRIPLE:\n3' Untranslated Regions\npart of\nZebrafish\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#fish\nhttp://dikb.org/cfc#C0600600\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0043457\n\n\nTRIPLE:\n3' Untranslated Regions\npart of\nMuscle Cells\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#cell\nhttp://dikb.org/cfc#C0600600\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0596981\n\n\nTRIPLE:\n3' Untranslated Regions\npart of\nIndividual\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#humn\nhttp://dikb.org/cfc#C0600600\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0237401\n\n\nTRIPLE:\n3' Untranslated Regions\npart of\nHRAS gene|HRAS\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0600600\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0079471\n\n\nTRIPLE:\n3' Untranslated Regions\npart of\nLaryngeal Squamous Cell Carcinoma\nhttp://umls.org/st/#bacs\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0600600\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0280324\n\n\nTRIPLE:\ncDNA Library\npart of\nFetal brain\nhttp://umls.org/st/#gngm\nhttp://umls.org/st/#emst\nhttp://dikb.org/cfc#C0751608\nhttp://purl.obolibrary.org/obo/BFO_0000050\nhttp://dikb.org/cfc#C0440731\n\n\nTRIPLE:\nDNA, Complementary\nregulates\nCells\nhttp://umls.org/st/#nnon\nhttp://umls.org/st/#cell\nhttp://dikb.org/cfc#C0006556\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0007634\n\n\nTRIPLE:\nKnock-out\nregulates\nHeart Septal Defects, 
Atrial\nhttp://umls.org/st/#emod\nhttp://umls.org/st/#cgab\nhttp://dikb.org/cfc#C1522225\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0018817\n\n\nTRIPLE:\nKnock-out\nregulates\nCarcinogenesis\nhttp://umls.org/st/#emod\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C1522225\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0596263\n\n\nTRIPLE:\nKnock-out\nregulates\nAutism spectrum disorder\nhttp://umls.org/st/#emod\nhttp://umls.org/st/#mobd\nhttp://dikb.org/cfc#C1522225\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C1510586\n\n\nTRIPLE:\nExtracellular Signal Regulated Kinases\ncorrelated with\nChromosome Deletion\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#comd\nhttp://dikb.org/cfc#C0600388\nhttp://purl.obolibrary.org/obo/RO_0002610\nhttp://dikb.org/cfc#C0008628\n\n\nTRIPLE:\nExtracellular Signal Regulated Kinases\nregulates\nCartilage\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#tisu\nhttp://dikb.org/cfc#C0600388\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0007301\n\n\nTRIPLE:\nExtracellular Signal Regulated Kinases\ncausal relation between processes\nSkin tag\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0600388\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C0037293\n\n\nTRIPLE:\nExtracellular Signal Regulated Kinases\ncausal relation between processes\nMultiple basal cell papillomata\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0600388\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C1275155\n\n\nTRIPLE:\nExtracellular Signal Regulated Kinases\nregulates\nBone Development\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#ortf\nhttp://dikb.org/cfc#C0600388\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0005939\n\n\nTRIPLE:\nExtracellular Signal Regulated Kinases\nregulates\nBreast Carcinoma\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#neop\nhttp://dikb.org/cfc#C0600388\nhttp://purl.obolibrary.org/obo/RO_0002211\nhttp://dikb.org/cfc#C0678222\n\n\nTRIPLE:\nExtracellular Signal Regulated Kinases\ncausal relation between processes\nCardio-facio-cutaneous syndrome\nhttp://umls.org/st/#aapp\nhttp://umls.org/st/#cgab\nhttp://dikb.org/cfc#C0600388\nhttp://purl.obolibrary.org/obo/RO_0002501\nhttp://dikb.org/cfc#C1275081\n" ] ], [ [ "**E6** - Use the next cells to write SPARQL queries that count the number of distinct CUIs used for subject and object roles in the merged graph. Also, show how the counts for subjects and objects break down by semantic type. 
The basic structure of the queries is set up for you.", "_____no_output_____" ] ], [ [ "# Count distinct subject CUIs grouped by semantic type\nqres = g.query(\"\"\"\nSELECT ?s_type (count(distinct ?s) AS ?count)\nWHERE {{\n ?s ?p ?o.\n ?s rdf:type ?s_type.\n }}\n group by ?s_type\n\"\"\")\nfor row in qres:\n print(\"\\n\\nRESULT:\\n{}\".format('\\n'.join([str(x) for x in row])))\n", "\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#SymmetricProperty\n77\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#Class\n118\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#TransitiveProperty\n47\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#ObjectProperty\n589\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#Axiom\n92\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#AnnotationProperty\n102\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#Restriction\n45\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#NamedIndividual\n5\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#DatatypeProperty\n1\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#AsymmetricProperty\n5\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#IrreflexiveProperty\n7\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#FunctionalProperty\n2\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#Ontology\n1\n" ], [ "# Count distinct subject CUIs\nqres = g.query(\"\"\"\nSELECT (count(distinct ?s) AS ?count)\nWHERE {{\n ?s ?p ?o.\n }}\n\"\"\")\nfor row in qres:\n print(\"\\n\\nRESULT:\\n{}\".format('\\n'.join([str(x) for x in row])))\n", "_____no_output_____" ], [ "# Count object CUIs\nqres = g.query(\"\"\"\nSELECT <FILL IN>\nWHERE {{\n <FILL IN>\n }}\n\"\"\")\nfor row in qres:\n print(\"\\n\\nRESULT:\\n{}\".format('\\n'.join([str(x) for x in row])))\n ", "_____no_output_____" ], [ "# Count distinct object CUIs grouped by semantic type\nqres = g.query(\"\"\"\nSELECT ?o_type (count(distinct ?o) AS ?count)\nWHERE {{\n ?s ?p ?o.\n ?o rdf:type ?o_type.\n }}\n group by ?o_type\n\"\"\")\nfor row in qres:\n print(\"\\n\\nRESULT:\\n{}\".format('\\n'.join([str(x) for x in row])))", 
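Editor's note — a hedged sketch related to E6 (illustrative only, not the assignment answer): the subject and object counts can also be computed together in a single aggregate query. It assumes `g` is the merged rdflib graph queried throughout this notebook; the result-variable names are mine.

```python
# Sketch only: count distinct subject and object CUIs in one pass.
# Assumes `g` is the merged rdflib.Graph used in the cells above.
qres = g.query("""
SELECT (COUNT(DISTINCT ?s) AS ?s_count) (COUNT(DISTINCT ?o) AS ?o_count)
WHERE { ?s ?p ?o . }
""")
for row in qres:
    # row is an rdflib ResultRow; index access avoids name-binding surprises
    print("distinct subjects: {}, distinct objects: {}".format(row[0], row[1]))
```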
"\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#SymmetricProperty\n73\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#Class\n107\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#TransitiveProperty\n37\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#ObjectProperty\n384\n\n\nRESULT:\nhttp://www.w3.org/2003/11/swrl#AtomList\n66\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#Restriction\n44\n\n\nRESULT:\nhttp://www.w3.org/2003/11/swrl#IndividualPropertyAtom\n55\n\n\nRESULT:\nhttp://umls.org/st/#mamm\n10\n\n\nRESULT:\nhttp://umls.org/st/#neop\n95\n\n\nRESULT:\nhttp://umls.org/st/#sosy\n7\n\n\nRESULT:\nhttp://umls.org/st/#celf\n28\n\n\nRESULT:\nhttp://umls.org/st/#humn\n25\n\n\nRESULT:\nhttp://www.w3.org/2003/11/swrl#ClassAtom\n11\n\n\nRESULT:\nhttp://umls.org/st/#dsyn\n92\n\n\nRESULT:\nhttp://umls.org/st/#gngm\n34\n\n\nRESULT:\nhttp://umls.org/st/#patf\n27\n\n\nRESULT:\nhttp://umls.org/st/#fndg\n21\n\n\nRESULT:\nhttp://umls.org/st/#orch\n7\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#AnnotationProperty\n30\n\n\nRESULT:\nhttp://umls.org/st/#aapp\n58\n\n\nRESULT:\nhttp://www.w3.org/2003/11/swrl#Variable\n20\n\n\nRESULT:\nhttp://umls.org/st/#bacs\n14\n\n\nRESULT:\nhttp://umls.org/st/#tisu\n12\n\n\nRESULT:\nhttp://umls.org/st/#cgab\n12\n\n\nRESULT:\nhttp://umls.org/st/#orgf\n17\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#NamedIndividual\n4\n\n\nRESULT:\nhttp://umls.org/st/#ortf\n12\n\n\nRESULT:\nhttp://umls.org/st/#vtbt\n1\n\n\nRESULT:\nhttp://umls.org/st/#mobd\n8\n\n\nRESULT:\nhttp://umls.org/st/#virs\n1\n\n\nRESULT:\nhttp://umls.org/st/#bpoc\n51\n\n\nRESULT:\nhttp://umls.org/st/#npop\n5\n\n\nRESULT:\nhttp://umls.org/st/#cell\n25\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#DatatypeProperty\n1\n\n\nRESULT:\nhttp://umls.org/st/#comd\n10\n\n\nRESULT:\nhttp://umls.org/st/#genf\n16\n\n\nRESULT:\nhttp://umls.org/st/#emst\n8\n\n\nRESULT:\nhttp://umls.org/st/#celc\n10\n\n\nRESULT:\nhttp://umls.org/st/#moft\n15\n\n\nRESULT:\nhttp://umls.org/st/#inch\n2\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#AsymmetricProperty\n2\n\n\nRESULT:\nhttp://umls.org/st/#phsf\n3\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#IrreflexiveProperty\n4\n\n\nRESULT:\nhttp://umls.org/st/#irda\n1\n\n\nRESULT:\nhttp://umls.org/st/#socb\n1\n\n\nRESULT:\nhttp://umls.org/st/#orga\n5\n\n\nRESULT:\nhttp://umls.org/st/#anab\n1\n\n\nRESULT:\nhttp://umls.org/st/#hlca\n1\n\n\nRESULT:\nhttp://umls.org/st/#amph\n1\n\n\nRESULT:\nhttp://umls.org/st/#acab\n2\n\n\nRESULT:\nhttp://umls.org/st/#anim\n2\n\n\nRESULT:\nhttp://umls.org/st/#orgm\n1\n\n\nRESULT:\nhttp://umls.org/st/#menp\n1\n\n\nRESULT:\nhttp://www.w3.org/2002/07/owl#FunctionalProperty\n2\n\n\nRESULT:\nhttp://umls.org/st/#podg\n1\n\n\nRESULT:\nhttp://umls.org/st/#inpo\n2\n\n\nRESULT:\nhttp://umls.org/st/#biof\n2\n\n\nRESULT:\nhttp://umls.org/st/#topp\n1\n\n\nRESULT:\nhttp://umls.org/st/#nusq\n2\n\n\nRESULT:\nhttp://umls.org/st/#popg\n1\n\n\nRESULT:\nhttp://umls.org/st/#fish\n2\n\n\nRESULT:\nhttp://umls.org/st/#rcpt\n1\n\n\nRESULT:\nhttp://umls.org/st/#nnon\n1\n\n\nRESULT:\nhttp://umls.org/st/#phsu\n2\n\n\nRESULT:\nhttp://www.w3.org/2000/01/rdf-schema#Datatype\n1\n\n\nRESULT:\nhttp://umls.org/st/#bdsu\n1\n\n\nRESULT:\nhttp://umls.org/st/#horm\n1\n\n\nRESULT:\nhttp://umls.org/st/#enzy\n1\n" ], [ "## count subject and objects by predication type (RO)\nqres = g.query(\"\"\"\nSELECT ?p ?p_label (count(distinct ?s) AS ?s_count) (count(distinct ?o) AS ?o_count)\nWHERE {{\n ?s ?p ?o.\n ?p rdfs:label ?p_label.\n }}\n group by ?p_label ?p \n\"\"\")\nfor row in qres:\n print(\"\\n\\nRESULT:\\n{}\".format('\\n'.join([str(x) 
for x in row])))", "\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002561\nis symmetric relational form of process class\n4\n4\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000424\nexpand expression to\n22\n22\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002211\nregulates\n278\n292\n\n\nRESULT:\nhttp://www.geneontology.org/formats/oboInOwl#hasBroadSynonym\nhas_broad_synonym\n3\n3\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002434\ninteracts with\n96\n95\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002560\nis asymmetric relational form of process class\n1\n1\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002212\nnegatively regulates\n49\n47\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002501\ncausal relation between processes\n145\n143\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000426\nfirst order logic expression\n6\n6\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000111\neditor preferred term\n16\n16\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000116\neditor note\n74\n79\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002575\nis direct form of\n6\n6\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000425\nexpand assertion to\n3\n3\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000117\nterm editor\n327\n22\n\n\nRESULT:\nhttp://www.geneontology.org/formats/oboInOwl#hasExactSynonym\nhas_exact_synonym\n68\n104\n\n\nRESULT:\nhttp://www.geneontology.org/formats/oboInOwl#shorthand\nshorthand\n7\n7\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002581\nis a defining property chain axiom\n3\n1\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000114\nhas curation status\n130\n3\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000115\ndefinition\n530\n530\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000115\ndefinition\n530\n530\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0003302\ncauses or contributes to condition\n59\n39\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002599\ncapable of inhibiting or preventing pathological process\n6\n8\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0040042\nis homeomorphic for\n2\n7\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0004049\nis positive form of\n7\n7\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002579\nis indirect form of\n2\n2\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0001900\ntemporal interpretation\n47\n1\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000232\ncurator note\n65\n61\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002490\nexistence overlaps\n288\n255\n\n\nRESULT:\nhttp://www.geneontology.org/formats/oboInOwl#hasDbXref\ndatabase_cross_reference\n88\n150\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000112\nexample of usage\n105\n150\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000412\nimported from\n19\n3\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/BFO_0000063\nprecedes\n3\n3\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000589\nOBO foundry unique label\n25\n25\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002582\nis a defining property chain axiom where second argument is reflexive\n5\n1\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002606\nis substance that treats\n165\n146\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0004050\nis negative form of\n7\n7\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000600\nelucidation\n3\n3\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000119\ndefinition source\n101\n48\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002610\ncorrelated 
with\n188\n181\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/RO_0002423\nlogical macro assertion on an annotation property\n1\n1\n\n\nRESULT:\nhttp://www.geneontology.org/formats/oboInOwl#hasRelatedSynonym\nhas_related_synonym\n30\n40\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/IAO_0000118\nalternative term\n104\n134\n\n\nRESULT:\nhttp://www.geneontology.org/formats/oboInOwl#inSubset\nin_subset\n121\n2\n\n\nRESULT:\nhttp://purl.obolibrary.org/obo/BFO_0000050\npart of\n336\n260\n" ] ]
, [ [ "#### There are many interesting patterns that we can apply to the extracted knowledge. I will discuss examples in class. This first query is a simple example that looks for a causal chain involving clinically relevant entities ", "_____no_output_____" ] ], [ [ "qres = g.query(\"\"\"\nSELECT ?s_lab ?p_lab ?o_lab ?p_lab ?d_lab ?s ?o ?d \nWHERE {{\n ?s rdf:type ?s_semType.FILTER(?s_semType != <http://umls.org/st/#bacs> && ?s_semType != <http://umls.org/st/#bhvr> && ?s_semType != <http://umls.org/st/#dora> && ?s_semType != <http://umls.org/st/#food> && ?s_semType != <http://umls.org/st/#hops> && ?s_semType != <http://umls.org/st/#horm> && ?s_semType != <http://umls.org/st/#clna> && ?s_semType != <http://umls.org/st/#nnon> && ?s_semType != <http://umls.org/st/#bpoc> && ?s_semType != <http://umls.org/st/#mamm> && ?s_semType != <http://umls.org/st/#bdsu> && ?s_semType != <http://umls.org/st/#qlco> && ?s_semType != <http://umls.org/st/#sosy> && ?s_semType != <http://umls.org/st/#mnob> && ?s_semType != <http://umls.org/st/#popg> && ?s_semType != <http://umls.org/st/#podg> && ?s != <http://dikb.org/cfc#C0338656> && ?s != <http://dikb.org/cfc#C0001811> && ?s != <http://dikb.org/cfc#C0002395> && ?s != <http://dikb.org/cfc#C0494463> && ?s != <http://dikb.org/cfc#C0567416> && ?s != <http://dikb.org/cfc#C0022023> && ?s != <http://dikb.org/cfc#C0025552> && ?s != <http://dikb.org/cfc#C0002520> && ?s != <http://dikb.org/cfc#C0033684> && ?s != <http://dikb.org/cfc#C0012854>) \n\n ?o rdf:type ?o_semType.FILTER(?o != ?s && ?o_semType != <http://umls.org/st/#bacs> && ?o_semType != <http://umls.org/st/#bhvr> && ?o_semType != <http://umls.org/st/#dora> && ?o_semType != <http://umls.org/st/#food> && ?o_semType != <http://umls.org/st/#hops> && ?o_semType != <http://umls.org/st/#horm> && ?o_semType != <http://umls.org/st/#clna> && ?o_semType != <http://umls.org/st/#nnon> && ?o_semType != <http://umls.org/st/#bpoc> && ?o_semType != <http://umls.org/st/#mamm> && ?o_semType != <http://umls.org/st/#bdsu> && ?o_semType != <http://umls.org/st/#qlco> && ?o_semType != <http://umls.org/st/#sosy> && ?o_semType != <http://umls.org/st/#mnob> && ?o_semType != <http://umls.org/st/#popg> && ?o_semType != <http://umls.org/st/#podg> && ?o != 
<http://dikb.org/cfc#C0338656> && ?o != <http://dikb.org/cfc#C0001811> && ?o != <http://dikb.org/cfc#C0012854>) \n\n ?s <http://purl.obolibrary.org/obo/RO_0003302> ?o.\n \n ?o <http://purl.obolibrary.org/obo/RO_0003302> ?d.\n\n ?s rdfs:label ?s_lab.\n \n OPTIONAL{<http://purl.obolibrary.org/obo/RO_0003302> rdfs:label ?p_lab.}\n \n ?o rdfs:label ?o_lab.\n \n ?d rdfs:label ?d_lab.\n }}\nLIMIT 100\n\"\"\")\n\nfor row in qres:\n \n print(\"\\n\\nTRIPLE:\\n{}\".format('\\n'.join([str(x) for x in row])))", "\n\nTRIPLE:\nSTAT3 gene|STAT3\ncauses or contributes to condition\nPathogenesis\ncauses or contributes to condition\nTumor Initiation\nhttp://dikb.org/cfc#C1367307\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0598935\n\n\nTRIPLE:\nSTAT3 gene|STAT3\ncauses or contributes to condition\nPathogenesis\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C1367307\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\nBRAF gene|BRAF|SNRPE\ncauses or contributes to condition\nPathogenesis\ncauses or contributes to condition\nTumor Initiation\nhttp://dikb.org/cfc#C0812241\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0598935\n\n\nTRIPLE:\nBRAF gene|BRAF|SNRPE\ncauses or contributes to condition\nPathogenesis\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C0812241\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\ncoenzyme Q10\ncauses or contributes to condition\nPathogenesis\ncauses or contributes to condition\nTumor Initiation\nhttp://dikb.org/cfc#C0056077\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0598935\n\n\nTRIPLE:\ncoenzyme Q10\ncauses or contributes to condition\nPathogenesis\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C0056077\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\nPTPN11 gene|PTPN11\ncauses or contributes to condition\nPathogenesis\ncauses or contributes to condition\nTumor Initiation\nhttp://dikb.org/cfc#C1335280\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0598935\n\n\nTRIPLE:\nPTPN11 gene|PTPN11\ncauses or contributes to condition\nPathogenesis\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C1335280\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\nCostello syndrome\ncauses or contributes to condition\nCongenital musculoskeletal anomalies\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C0587248\nhttp://dikb.org/cfc#C0151491\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\nCostello syndrome\ncauses or contributes to condition\nCongenital musculoskeletal anomalies\ncauses or contributes to condition\nTumor Initiation\nhttp://dikb.org/cfc#C0587248\nhttp://dikb.org/cfc#C0151491\nhttp://dikb.org/cfc#C0598935\n\n\nTRIPLE:\nCostello syndrome\ncauses or contributes to condition\nCongenital musculoskeletal anomalies\ncauses or contributes to condition\nNeoplasm\nhttp://dikb.org/cfc#C0587248\nhttp://dikb.org/cfc#C0151491\nhttp://dikb.org/cfc#C0027651\n" ] ], [ [ "#### This next query looks for a pattern typical of statistical confounders, where there is a common cause of both a disorder and a disease, and the disorder is also causally related to the disease ", "_____no_output_____" ] ], [ [ "\nqres = g.query(\"\"\"\nSELECT ?s_lab ?p_lab ?o_lab ?s_lab ?p_lab ?d_lab ?o_lab ?p_lab ?d_lab ?s ?o ?d \nWHERE {{\n ?s rdf:type ?s_semType.FILTER(?s_semType != <http://umls.org/st/#bacs> && ?s_semType != <http://umls.org/st/#bhvr> && ?s_semType 
!= <http://umls.org/st/#dora> && ?s_semType != <http://umls.org/st/#food> && ?s_semType != <http://umls.org/st/#hops> && ?s_semType != <http://umls.org/st/#horm> && ?s_semType != <http://umls.org/st/#clna> && ?s_semType != <http://umls.org/st/#nnon> && ?s_semType != <http://umls.org/st/#bpoc> && ?s_semType != <http://umls.org/st/#mamm> && ?s_semType != <http://umls.org/st/#bdsu> && ?s_semType != <http://umls.org/st/#qlco> && ?s_semType != <http://umls.org/st/#sosy> && ?s_semType != <http://umls.org/st/#mnob> && ?s_semType != <http://umls.org/st/#popg> && ?s_semType != <http://umls.org/st/#podg> && ?s != <http://dikb.org/cfc#C0338656> && ?s != <http://dikb.org/cfc#C0001811> && ?s != <http://dikb.org/cfc#C0002395> && ?s != <http://dikb.org/cfc#C0494463> && ?s != <http://dikb.org/cfc#C0567416> && ?s != <http://dikb.org/cfc#C0022023> && ?s != <http://dikb.org/cfc#C0025552> && ?s != <http://dikb.org/cfc#C0002520> && ?s != <http://dikb.org/cfc#C0033684> && ?s != <http://dikb.org/cfc#C0012854>) \n\n ?o rdf:type ?o_semType.FILTER(?o != ?s && ?o_semType != <http://umls.org/st/#bacs> && ?o_semType != <http://umls.org/st/#bhvr> && ?o_semType != <http://umls.org/st/#dora> && ?o_semType != <http://umls.org/st/#food> && ?o_semType != <http://umls.org/st/#hops> && ?o_semType != <http://umls.org/st/#horm> && ?o_semType != <http://umls.org/st/#clna> && ?o_semType != <http://umls.org/st/#nnon> && ?o_semType != <http://umls.org/st/#bpoc> && ?o_semType != <http://umls.org/st/#mamm> && ?o_semType != <http://umls.org/st/#bdsu> && ?o_semType != <http://umls.org/st/#qlco> && ?o_semType != <http://umls.org/st/#sosy> && ?o_semType != <http://umls.org/st/#mnob> && ?o_semType != <http://umls.org/st/#popg> && ?o_semType != <http://umls.org/st/#podg> && ?o != <http://dikb.org/cfc#C0338656> && ?o != <http://dikb.org/cfc#C0001811> && ?o != <http://dikb.org/cfc#C0012854>) \n\n ?s <http://purl.obolibrary.org/obo/RO_0003302> ?o.\n \n ?s <http://purl.obolibrary.org/obo/RO_0003302> ?d.\n \n ?o <http://purl.obolibrary.org/obo/RO_0003302> ?d.\n\n ?s rdfs:label ?s_lab.\n \n OPTIONAL{<http://purl.obolibrary.org/obo/RO_0003302> rdfs:label ?p_lab.}\n \n ?o rdfs:label ?o_lab.\n \n ?d rdfs:label ?d_lab.\n }}\nLIMIT 100\n\"\"\")\n\nfor row in qres:\n \n print(\"\\n\\nTRIPLE:\\n{}\".format('\\n'.join([str(x) for x in row])))", "\n\nTRIPLE:\nSTAT3 gene|STAT3\ncauses or contributes to condition\nPathogenesis\nSTAT3 gene|STAT3\ncauses or contributes to condition\nMalignant Neoplasms\nPathogenesis\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C1367307\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\nBRAF gene|BRAF|SNRPE\ncauses or contributes to condition\nPathogenesis\nBRAF gene|BRAF|SNRPE\ncauses or contributes to condition\nMalignant Neoplasms\nPathogenesis\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C0812241\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\ncoenzyme Q10\ncauses or contributes to condition\nPathogenesis\ncoenzyme Q10\ncauses or contributes to condition\nMalignant Neoplasms\nPathogenesis\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C0056077\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\nPTPN11 gene|PTPN11\ncauses or contributes to condition\nPathogenesis\nPTPN11 gene|PTPN11\ncauses or contributes to condition\nTumor Initiation\nPathogenesis\ncauses or contributes to condition\nTumor 
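Editor's note — an aside on the long, repeated FILTER walls in these pattern queries: they could be generated from a list, which makes swapping in other predicate types (as E7 asks) less error-prone. A sketch under the assumption that `g` is still in scope; `EXCLUDED_TYPES`, `type_filter`, and `confounder_query` are illustrative names of mine, not from the original notebook.

```python
# Sketch only: build the semantic-type exclusion FILTER from a list instead of
# copy-pasting it. EXCLUDED_TYPES mirrors the types excluded in the queries above.
EXCLUDED_TYPES = ['bacs', 'bhvr', 'dora', 'food', 'hops', 'horm', 'clna', 'nnon',
                  'bpoc', 'mamm', 'bdsu', 'qlco', 'sosy', 'mnob', 'popg', 'podg']

def type_filter(var):
    # Returns a SPARQL FILTER excluding the UMLS semantic types listed above.
    clauses = ' && '.join('?{v}_semType != <http://umls.org/st/#{t}>'.format(v=var, t=t)
                          for t in EXCLUDED_TYPES)
    return 'FILTER({})'.format(clauses)

def confounder_query(predicate_iri):
    # Confounder-style pattern (s -> o, s -> d, o -> d) over any predicate IRI.
    return """
    SELECT ?s_lab ?o_lab ?d_lab
    WHERE {{
      ?s rdf:type ?s_semType. {sf}
      ?o rdf:type ?o_semType. {of} FILTER(?o != ?s)
      ?s <{p}> ?o . ?s <{p}> ?d . ?o <{p}> ?d .
      ?s rdfs:label ?s_lab . ?o rdfs:label ?o_lab . ?d rdfs:label ?d_lab .
    }} LIMIT 100
    """.format(sf=type_filter('s'), of=type_filter('o'), p=predicate_iri)

# e.g., the 'causes or contributes to condition' predicate used above:
# qres = g.query(confounder_query('http://purl.obolibrary.org/obo/RO_0003302'))
```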
Initiation\nhttp://dikb.org/cfc#C1335280\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0598935\n\n\nTRIPLE:\nPTPN11 gene|PTPN11\ncauses or contributes to condition\nPathogenesis\nPTPN11 gene|PTPN11\ncauses or contributes to condition\nMalignant Neoplasms\nPathogenesis\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C1335280\nhttp://dikb.org/cfc#C0699748\nhttp://dikb.org/cfc#C0006826\n\n\nTRIPLE:\nCostello syndrome\ncauses or contributes to condition\nCongenital musculoskeletal anomalies\nCostello syndrome\ncauses or contributes to condition\nNeoplasm\nCongenital musculoskeletal anomalies\ncauses or contributes to condition\nNeoplasm\nhttp://dikb.org/cfc#C0587248\nhttp://dikb.org/cfc#C0151491\nhttp://dikb.org/cfc#C0027651\n\n\nTRIPLE:\nCostello syndrome\ncauses or contributes to condition\nCongenital musculoskeletal anomalies\nCostello syndrome\ncauses or contributes to condition\nTumor Initiation\nCongenital musculoskeletal anomalies\ncauses or contributes to condition\nTumor Initiation\nhttp://dikb.org/cfc#C0587248\nhttp://dikb.org/cfc#C0151491\nhttp://dikb.org/cfc#C0598935\n\n\nTRIPLE:\nCostello syndrome\ncauses or contributes to condition\nCongenital musculoskeletal anomalies\nCostello syndrome\ncauses or contributes to condition\nMalignant Neoplasms\nCongenital musculoskeletal anomalies\ncauses or contributes to condition\nMalignant Neoplasms\nhttp://dikb.org/cfc#C0587248\nhttp://dikb.org/cfc#C0151491\nhttp://dikb.org/cfc#C0006826\n" ] ], [ [ "#### The above 2 queries only use the 'causes or contributes to condition' predicate type (RO_0003302) but there are several other predicate types that could be used to create patterns (see the RO predicates in your graph from one of the queries above). \n\n**E7** - Use the next cell block to write a query that uses a different predicate type than RO_0003302. 
What did you find?", "_____no_output_____" ] ], [ [ "qres = g.query(\"\"\"\nSELECT * \nWHERE {{\n ?s rdf:type ?s_semType.FILTER(?s_semType != <http://umls.org/st/#bacs> && ?s_semType != <http://umls.org/st/#bhvr> && ?s_semType != <http://umls.org/st/#dora> && ?s_semType != <http://umls.org/st/#food> && ?s_semType != <http://umls.org/st/#hops> && ?s_semType != <http://umls.org/st/#horm> && ?s_semType != <http://umls.org/st/#clna> && ?s_semType != <http://umls.org/st/#nnon> && ?s_semType != <http://umls.org/st/#bpoc> && ?s_semType != <http://umls.org/st/#mamm> && ?s_semType != <http://umls.org/st/#bdsu> && ?s_semType != <http://umls.org/st/#qlco> && ?s_semType != <http://umls.org/st/#sosy> && ?s_semType != <http://umls.org/st/#mnob> && ?s_semType != <http://umls.org/st/#popg> && ?s_semType != <http://umls.org/st/#podg> && ?s != <http://dikb.org/cfc#C0338656> && ?s != <http://dikb.org/cfc#C0001811> && ?s != <http://dikb.org/cfc#C0002395> && ?s != <http://dikb.org/cfc#C0494463> && ?s != <http://dikb.org/cfc#C0567416> && ?s != <http://dikb.org/cfc#C0022023> && ?s != <http://dikb.org/cfc#C0025552> && ?s != <http://dikb.org/cfc#C0002520> && ?s != <http://dikb.org/cfc#C0033684> && ?s != <http://dikb.org/cfc#C0012854>) \n\n ?o rdf:type ?o_semType.FILTER(?o != ?s && ?o_semType != <http://umls.org/st/#bacs> && ?o_semType != <http://umls.org/st/#bhvr> && ?o_semType != <http://umls.org/st/#dora> && ?o_semType != <http://umls.org/st/#food> && ?o_semType != <http://umls.org/st/#hops> && ?o_semType != <http://umls.org/st/#horm> && ?o_semType != <http://umls.org/st/#clna> && ?o_semType != <http://umls.org/st/#nnon> && ?o_semType != <http://umls.org/st/#bpoc> && ?o_semType != <http://umls.org/st/#mamm> && ?o_semType != <http://umls.org/st/#bdsu> && ?o_semType != <http://umls.org/st/#qlco> && ?o_semType != <http://umls.org/st/#sosy> && ?o_semType != <http://umls.org/st/#mnob> && ?o_semType != <http://umls.org/st/#popg> && ?o_semType != <http://umls.org/st/#podg> && ?o != <http://dikb.org/cfc#C0338656> && ?o != <http://dikb.org/cfc#C0001811> && ?o != <http://dikb.org/cfc#C0012854>) \n\n ?s <http://purl.obolibrary.org/obo/RO_0002434> ?o.\n \n ?s <http://purl.obolibrary.org/obo/RO_0002434> ?d.FILTER(?s != ?d && ?o != ?d)\n \n ?s rdfs:label ?s_lab.\n \n OPTIONAL{<http://purl.obolibrary.org/obo/RO_0002434> rdfs:label ?p_lab.}\n \n ?o rdfs:label ?o_lab.\n \n ?d rdfs:label ?d_lab.\n }}\nLIMIT 100\n\"\"\")\n\nfor row in qres:\n \n print(\"\\n\\nTRIPLE:\\n{}\".format('\\n'.join([str(x) for x in row])))", "\n\nTRIPLE:\nPTPN11 gene|PTPN11\nOrthologous Gene\ninteracts with\nUbiquinone\nhttp://dikb.org/cfc#C0041536\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335144\n\n\nTRIPLE:\nMAP Kinase Gene\nNeurofibromatosis Type 1 Protein\ninteracts with\nMitogen-Activated Protein Kinases\nhttp://dikb.org/cfc#C0752312\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1370600\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0083725\n\n\nTRIPLE:\nMAP Kinase Gene\nNeurofibromatosis Type 1 Protein\ninteracts with\nras Proteins\nhttp://dikb.org/cfc#C0242853\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1370600\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0083725\n\n\nTRIPLE:\nMAP Kinase Gene\nNeurofibromatosis Type 1 Protein\ninteracts with\nProteins\nhttp://dikb.org/cfc#C0033684\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1370600\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0083725\n\n\nTRIPLE:\ninsulin receptor substrate 1 protein|IRS1\nPTPN11 gene|PTPN11\ninteracts 
with\nMitogen-Activated Protein Kinases\nhttp://dikb.org/cfc#C0752312\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0123658\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\ninsulin receptor substrate 1 protein|IRS1\nPTPN11 gene|PTPN11\ninteracts with\nReceptors, Cell Surface\nhttp://dikb.org/cfc#C0034800\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0123658\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\ninsulin receptor substrate 1 protein|IRS1\nPTPN11 gene|PTPN11\ninteracts with\nSTAT3 gene|STAT3\nhttp://dikb.org/cfc#C1367307\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0123658\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\ninsulin receptor substrate 1 protein|IRS1\nPTPN11 gene|PTPN11\ninteracts with\nPhosphopeptides\nhttp://dikb.org/cfc#C0031684\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0123658\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\ninsulin receptor substrate 1 protein|IRS1\nPTPN11 gene|PTPN11\ninteracts with\nEnzymes\nhttp://dikb.org/cfc#C0014442\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0123658\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\nMLH1 gene|MLH1\nBRAF gene|BRAF|SNRPE\ninteracts with\nCarcinogens\nhttp://dikb.org/cfc#C0007090\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMLH1 gene|MLH1\nBRAF gene|BRAF|SNRPE\ninteracts with\nRAF1 gene|RAF1\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMLH1 gene|MLH1\nBRAF gene|BRAF|SNRPE\ninteracts with\nupstream kinase\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMLH1 gene|MLH1\nBRAF gene|BRAF|SNRPE\ninteracts with\nBRAHMA\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMLH1 gene|MLH1\nBRAF gene|BRAF|SNRPE\ninteracts with\nSignaling Molecule\nhttp://dikb.org/cfc#C1519315\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMLH1 gene|MLH1\nBRAF gene|BRAF|SNRPE\ninteracts with\nThreonine\nhttp://dikb.org/cfc#C0040005\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMLH1 gene|MLH1\nBRAF gene|BRAF|SNRPE\ninteracts with\nPTEN gene|PTEN\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMLH1 gene|MLH1\nBRAF gene|BRAF|SNRPE\ninteracts with\nMitogen-Activated Protein Kinase Kinases\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPyrimidine\nUbiquinone\ninteracts with\nFatty Acids\nhttp://dikb.org/cfc#C0015684\nhttp://umls.org/st/#phsu\nhttp://dikb.org/cfc#C0034284\nhttp://umls.org/st/#orch\nhttp://dikb.org/cfc#C0041536\n\n\nTRIPLE:\nupstream kinase\nBRAF gene|BRAF|SNRPE\ninteracts with\nMLH1 gene|MLH1\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nupstream kinase\nBRAF gene|BRAF|SNRPE\ninteracts 
with\nCarcinogens\nhttp://dikb.org/cfc#C0007090\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nupstream kinase\nBRAF gene|BRAF|SNRPE\ninteracts with\nRAF1 gene|RAF1\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nupstream kinase\nBRAF gene|BRAF|SNRPE\ninteracts with\nBRAHMA\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nupstream kinase\nBRAF gene|BRAF|SNRPE\ninteracts with\nSignaling Molecule\nhttp://dikb.org/cfc#C1519315\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nupstream kinase\nBRAF gene|BRAF|SNRPE\ninteracts with\nThreonine\nhttp://dikb.org/cfc#C0040005\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nupstream kinase\nBRAF gene|BRAF|SNRPE\ninteracts with\nPTEN gene|PTEN\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nupstream kinase\nBRAF gene|BRAF|SNRPE\ninteracts with\nMitogen-Activated Protein Kinase Kinases\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nGlycine\nHRAS gene|HRAS\ninteracts with\nTriphosphatase\nhttp://dikb.org/cfc#C0312235\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0017890\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0079471\n\n\nTRIPLE:\nGlycine\nHRAS gene|HRAS\ninteracts with\nProto-Oncogene Proteins c-akt|AKT1\nhttp://dikb.org/cfc#C0164786\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0017890\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0079471\n\n\nTRIPLE:\nGlycine\nHRAS gene|HRAS\ninteracts with\nGuanosine Diphosphate\nhttp://dikb.org/cfc#C0018340\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0017890\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0079471\n\n\nTRIPLE:\nSTAT3 gene|STAT3\nPTPN11 gene|PTPN11\ninteracts with\nMitogen-Activated Protein Kinases\nhttp://dikb.org/cfc#C0752312\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1367307\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\nSTAT3 gene|STAT3\nPTPN11 gene|PTPN11\ninteracts with\nReceptors, Cell Surface\nhttp://dikb.org/cfc#C0034800\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1367307\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\nSTAT3 gene|STAT3\nPTPN11 gene|PTPN11\ninteracts with\nPhosphopeptides\nhttp://dikb.org/cfc#C0031684\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1367307\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\nSTAT3 gene|STAT3\nPTPN11 gene|PTPN11\ninteracts with\nEnzymes\nhttp://dikb.org/cfc#C0014442\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1367307\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\nSTAT3 gene|STAT3\nPTPN11 gene|PTPN11\ninteracts with\ninsulin receptor substrate 1 protein|IRS1\nhttp://dikb.org/cfc#C0123658\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1367307\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C1335280\n\n\nTRIPLE:\nTriphosphatase\nHRAS gene|HRAS\ninteracts with\nProto-Oncogene Proteins 
c-akt|AKT1\nhttp://dikb.org/cfc#C0164786\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0312235\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0079471\n\n\nTRIPLE:\nTriphosphatase\nHRAS gene|HRAS\ninteracts with\nGuanosine Diphosphate\nhttp://dikb.org/cfc#C0018340\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0312235\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0079471\n\n\nTRIPLE:\nTriphosphatase\nHRAS gene|HRAS\ninteracts with\nGlycine\nhttp://dikb.org/cfc#C0017890\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0312235\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0079471\n\n\nTRIPLE:\nPharmaceutical Preparations\nPROTEIN KINASE\ninteracts with\nHSP90 Heat-Shock Proteins|HSP90AA1\nhttp://dikb.org/cfc#C0243044\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0013227\nhttp://umls.org/st/#phsu\nhttp://dikb.org/cfc#C0033640\n\n\nTRIPLE:\nPharmaceutical Preparations\nPROTEIN KINASE\ninteracts with\nCDC37 gene|CDC37\nhttp://dikb.org/cfc#C1332722\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0013227\nhttp://umls.org/st/#phsu\nhttp://dikb.org/cfc#C0033640\n\n\nTRIPLE:\nMitogen-Activated Protein Kinase Kinases\nBRAF gene|BRAF|SNRPE\ninteracts with\nMLH1 gene|MLH1\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMitogen-Activated Protein Kinase Kinases\nBRAF gene|BRAF|SNRPE\ninteracts with\nCarcinogens\nhttp://dikb.org/cfc#C0007090\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMitogen-Activated Protein Kinase Kinases\nBRAF gene|BRAF|SNRPE\ninteracts with\nRAF1 gene|RAF1\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMitogen-Activated Protein Kinase Kinases\nBRAF gene|BRAF|SNRPE\ninteracts with\nupstream kinase\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMitogen-Activated Protein Kinase Kinases\nBRAF gene|BRAF|SNRPE\ninteracts with\nBRAHMA\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMitogen-Activated Protein Kinase Kinases\nBRAF gene|BRAF|SNRPE\ninteracts with\nSignaling Molecule\nhttp://dikb.org/cfc#C1519315\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMitogen-Activated Protein Kinase Kinases\nBRAF gene|BRAF|SNRPE\ninteracts with\nThreonine\nhttp://dikb.org/cfc#C0040005\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nMitogen-Activated Protein Kinase Kinases\nBRAF gene|BRAF|SNRPE\ninteracts with\nPTEN gene|PTEN\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPhosphotransferases\nPharmaceutical Preparations\ninteracts with\nMutant Proteins\nhttp://dikb.org/cfc#C1564139\nhttp://umls.org/st/#phsu\nhttp://dikb.org/cfc#C0031727\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0013227\n\n\nTRIPLE:\nPhosphotransferases\nPharmaceutical Preparations\ninteracts 
with\nCarcinogens\nhttp://dikb.org/cfc#C0007090\nhttp://umls.org/st/#phsu\nhttp://dikb.org/cfc#C0031727\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0013227\n\n\nTRIPLE:\nExtracellular Signal Regulated Kinases\nRAF1 gene|RAF1\ninteracts with\nCDC37 gene|CDC37\nhttp://dikb.org/cfc#C1332722\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0600388\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\n\n\nTRIPLE:\nUbiquinone\nOrthologous Gene\ninteracts with\nPTPN11 gene|PTPN11\nhttp://dikb.org/cfc#C1335280\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0041536\nhttp://umls.org/st/#phsu\nhttp://dikb.org/cfc#C1335144\n\n\nTRIPLE:\nRAF1 gene|RAF1\nFusion Gene\ninteracts with\nMitogen-Activated Protein Kinases\nhttp://dikb.org/cfc#C0752312\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1533585\n\n\nTRIPLE:\nRAF1 gene|RAF1\nBRAF gene|BRAF|SNRPE\ninteracts with\nMLH1 gene|MLH1\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nRAF1 gene|RAF1\nBRAF gene|BRAF|SNRPE\ninteracts with\nCarcinogens\nhttp://dikb.org/cfc#C0007090\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nRAF1 gene|RAF1\nBRAF gene|BRAF|SNRPE\ninteracts with\nupstream kinase\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nRAF1 gene|RAF1\nBRAF gene|BRAF|SNRPE\ninteracts with\nBRAHMA\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nRAF1 gene|RAF1\nBRAF gene|BRAF|SNRPE\ninteracts with\nSignaling Molecule\nhttp://dikb.org/cfc#C1519315\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nRAF1 gene|RAF1\nBRAF gene|BRAF|SNRPE\ninteracts with\nThreonine\nhttp://dikb.org/cfc#C0040005\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nRAF1 gene|RAF1\nBRAF gene|BRAF|SNRPE\ninteracts with\nPTEN gene|PTEN\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nRAF1 gene|RAF1\nBRAF gene|BRAF|SNRPE\ninteracts with\nMitogen-Activated Protein Kinase Kinases\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nBRAHMA\nBRAF gene|BRAF|SNRPE\ninteracts with\nMLH1 gene|MLH1\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nBRAHMA\nBRAF gene|BRAF|SNRPE\ninteracts with\nCarcinogens\nhttp://dikb.org/cfc#C0007090\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nBRAHMA\nBRAF gene|BRAF|SNRPE\ninteracts with\nRAF1 gene|RAF1\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nBRAHMA\nBRAF gene|BRAF|SNRPE\ninteracts with\nupstream 
kinase\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nBRAHMA\nBRAF gene|BRAF|SNRPE\ninteracts with\nSignaling Molecule\nhttp://dikb.org/cfc#C1519315\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nBRAHMA\nBRAF gene|BRAF|SNRPE\ninteracts with\nThreonine\nhttp://dikb.org/cfc#C0040005\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nBRAHMA\nBRAF gene|BRAF|SNRPE\ninteracts with\nPTEN gene|PTEN\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nBRAHMA\nBRAF gene|BRAF|SNRPE\ninteracts with\nMitogen-Activated Protein Kinase Kinases\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPTEN gene|PTEN\nBRAF gene|BRAF|SNRPE\ninteracts with\nMLH1 gene|MLH1\nhttp://dikb.org/cfc#C0879389\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPTEN gene|PTEN\nBRAF gene|BRAF|SNRPE\ninteracts with\nCarcinogens\nhttp://dikb.org/cfc#C0007090\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPTEN gene|PTEN\nBRAF gene|BRAF|SNRPE\ninteracts with\nRAF1 gene|RAF1\nhttp://dikb.org/cfc#C0812215\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPTEN gene|PTEN\nBRAF gene|BRAF|SNRPE\ninteracts with\nupstream kinase\nhttp://dikb.org/cfc#C0541149\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPTEN gene|PTEN\nBRAF gene|BRAF|SNRPE\ninteracts with\nBRAHMA\nhttp://dikb.org/cfc#C1337113\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPTEN gene|PTEN\nBRAF gene|BRAF|SNRPE\ninteracts with\nSignaling Molecule\nhttp://dikb.org/cfc#C1519315\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPTEN gene|PTEN\nBRAF gene|BRAF|SNRPE\ninteracts with\nThreonine\nhttp://dikb.org/cfc#C0040005\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n\n\nTRIPLE:\nPTEN gene|PTEN\nBRAF gene|BRAF|SNRPE\ninteracts with\nMitogen-Activated Protein Kinase Kinases\nhttp://dikb.org/cfc#C0169101\nhttp://umls.org/st/#aapp\nhttp://dikb.org/cfc#C0694888\nhttp://umls.org/st/#gngm\nhttp://dikb.org/cfc#C0812241\n" ] ], [ [ "#### This final cell sets things up for the next part of the assignment (to be assigned next week). 
It creates mapping files that will be needed for the work that cross-references the extracted triples to OBO entity identifiers in PheKnowLator", "_____no_output_____" ] ], [ [ "## create a file with triples to map the UMLS CUIs to the target ontologies used in the PheKnowLator KG\nall_cui_to_codeD = {}\numls_mapped_cnt = 0\nreach_mapped_cnt = 0\nbioportal_mapped_cnt = 0\n\n# start by loading all GO and HPO UMLS CUIs\n# ex: C2247561|GO|GO:0034023|\"5-(carboxyamino)imidazole ribonucleotide mutase activity\"\nf = open('cui_to_ontology_maps/go_hpo_maps.csv','r')\nbuf = f.read()\nf.close()\nlns = buf.split('\\n')\nfor ln in lns[1:]: # skip header\n if ln == \"\":\n continue\n \n (cui, ont, code, string) = ln.replace('\"\"','').split('|')\n if not all_cui_to_codeD.get(cui):\n all_cui_to_codeD[cui] = ['http://purl.obolibrary.org/obo/' + code.replace(':','_')]\n else:\n all_cui_to_codeD[cui].append('http://purl.obolibrary.org/obo/' + code.replace(':','_'))\n\numls_mapped_cnt = len(all_cui_to_codeD.keys())\n \n# add mappings to OBO from REACH \n# ex: quercetin\t\"('CHEBI', 'CHEBI:16243')\"\t\"{'CHEBI': 'CHEBI:16243', 'TEXT': 'Quercetin', 'HMDB': 'HMDB0005794', 'MESH': 'D011794', 'PUBCHEM': '5280343', 'CAS': '117-39-5', 'DRUGBANK': 'DB04216', 'CHEMBL': 'CHEMBL50'}\"\tC0034392\nf = open('cui_to_ontology_maps/reach_bio_mapping.tsv','r')\nbuf = f.read()\nf.close()\nlns = buf.split('\\n')\nfor ln in lns[1:]: # skip header\n if ln == \"\":\n continue\n \n (agent_name,grounding,other_mapping,cui) = ln.replace('\"','').split('\\t')\n \n (ont_str,code) = eval(grounding)\n if ont_str == None or (code != None and code.find(':') == -1):\n continue\n \n if not all_cui_to_codeD.get(cui):\n all_cui_to_codeD[cui] = ['http://purl.obolibrary.org/obo/' + code.replace(':','_')]\n else:\n all_cui_to_codeD[cui].append('http://purl.obolibrary.org/obo/' + code.replace(':','_'))\n\nreach_mapped_cnt = len(all_cui_to_codeD.keys()) - umls_mapped_cnt\n \n## now, add mappings that came from a query of the bioportal annotator using the \n## string labels of the UMLS CUIs from the MRCONSO table (see cui_to_ontology_maps/non_go_hpo_strings_annotator_query.curl)\n\n# Bioportal annotator response\nimport json\nf = open('cui_to_ontology_maps/non_go_hpo_strings_annotator_response.json','r')\nmap_d_l = json.load(f)\nf.close()\n\n# all non-GO or -HPO UMLS CUIs mentioned in the triples, and their string labels\nf = open('cui_to_ontology_maps/non_go_hpo_strings.csv','r')\nbuf = f.read()\nf.close()\nlns = buf.split('\\n')\nstrToCuiD = {}\ncuiToStrD = {}\nfor ln in lns[1:]: # skip header\n if ln == \"\":\n continue\n \n (cui, s) = ln.replace('\"\"','').split('|')\n strToCuiD[s.upper()] = cui\n cuiToStrD[cui] = s.upper()\n\nfor d in map_d_l:\n mapped_to = d['annotatedClass']['@id']\n for mapped_from_d in d['annotations']:\n if not strToCuiD.get(mapped_from_d['text'].upper()):\n continue\n else:\n cui = strToCuiD[mapped_from_d['text'].upper()]\n if not all_cui_to_codeD.get(cui):\n all_cui_to_codeD[cui] = [mapped_to]\n else:\n if mapped_to not in all_cui_to_codeD[cui]:\n all_cui_to_codeD[cui].append(mapped_to)\n\n## bioportal count = total mapped so far minus the UMLS and REACH contributions\nbioportal_mapped_cnt = len(all_cui_to_codeD.keys()) - umls_mapped_cnt - reach_mapped_cnt\n \nprint('CUI mapping total: {} ({} from umls, {} from reach-to-obo, {} from bioportal)'.format(len(all_cui_to_codeD.values()),\n umls_mapped_cnt,\n reach_mapped_cnt,\n bioportal_mapped_cnt))\nontCounts = {}\nfor cui,m in all_cui_to_codeD.items():\n for elt in m:\n ontId = elt.split('/')[-1].split('_')[0]\n if ontCounts.get(ontId):\n ontCounts[ontId] += 1\n else:\n ontCounts[ontId] = 1\n\nfor ontId,count in ontCounts.items():\n print('{}:{}'.format(ontId,count))\n \n\nf = open('cui_mapping_triples.ntriples','w')\nfor cui,m in all_cui_to_codeD.items():\n for elt in m:\n f.write(\"<http://dikb.org/cfc#{}> <http://dikb.org/cfc#obo_mapping> <{}>.\\n\".format(cui,elt))\nf.close() ", "_____no_output_____" ] ], [ [ "#### Notice how many different OBO ontologies the CUIs map to. There is also very likely some loss of information, since CUI-to-OBO mappings are incomplete. We will discuss more about this in class.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7284a1f8fe2ab6374060f703c48e7d1f4aab51f
37,601
ipynb
Jupyter Notebook
tf/.ipynb_checkpoints/Text Classification-checkpoint.ipynb
rishuatgithub/MLPy
603fdc86a1d56c41e8199b94f96a19f35c719586
[ "Apache-2.0" ]
null
null
null
tf/.ipynb_checkpoints/Text Classification-checkpoint.ipynb
rishuatgithub/MLPy
603fdc86a1d56c41e8199b94f96a19f35c719586
[ "Apache-2.0" ]
1
2022-03-12T00:55:20.000Z
2022-03-12T00:55:20.000Z
tf/.ipynb_checkpoints/Text Classification-checkpoint.ipynb
rishuatgithub/MLPy
603fdc86a1d56c41e8199b94f96a19f35c719586
[ "Apache-2.0" ]
3
2021-04-15T08:10:01.000Z
2021-11-04T17:57:51.000Z
66.905694
15,816
0.694529
[ [ [ "import tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\n\nprint(tf.__version__)", "1.13.1\n" ] ], [ [ "### Import wiki dataset", "_____no_output_____" ] ], [ [ "imdb = keras.datasets.imdb\n\n(train_data, train_label),(test_data,test_label) = imdb.load_data(num_words=10000) ", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz\n17465344/17464789 [==============================] - 3s 0us/step\n" ] ], [ [ "The argument num_words=10000 keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable.", "_____no_output_____" ] ], [ [ "print(\"Train data shape:\",train_data.shape)\nprint(\"Test data shape:\",test_data.shape)\nprint(\"Train label :\",len(train_label))\n\nprint(\"First Imdb review: \",train_data[0]) ## review data for the first review\n\n## notice the difference in length of 2 reviews\nprint(\"length of first and second review:\",len(train_data[0]),\" \",len(test_data[1])) ", "Train data shape: (25000,)\nTest data shape: (25000,)\nTrain label : 25000\nFirst Imdb review: [1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]\nlength of first and second review: 218 260\n" ] ], [ [ "#### Convert integers to String from the dictonary of words", "_____no_output_____" ] ], [ [ "## A dictionary mapping of a word to a integer index\nword_index = imdb.get_word_index()\n\n## The first indices are reserved\nword_index[\"<PAD>\"] = 0\nword_index[\"<START>\"] = 1\nword_index[\"<UNK>\"] = 2 ## unknown\nword_index[\"<UNUSED>\"] = 3\nword_index = {k:(v+3) for k,v in word_index.items()}\n\nreverse_word_index = dict([(value, key) for (key,value) in word_index.items()])\n\ndef decode_review(text):\n return ' '.join([reverse_word_index.get(i,'?') for i in text])", "_____no_output_____" ], [ "decode_review(train_data[0])", "_____no_output_____" ] ], [ [ "### Preparing the data ", "_____no_output_____" ], [ " we can pad the arrays so they all have the same length, then create an integer tensor of shape max_length * num_reviews. 
We can use an embedding layer capable of handling this shape as the first layer in our network.\n \n Since the movie reviews must be the same length, we will use the pad_sequences function to standardize the lengths", "_____no_output_____" ] ], [ [ "train_data = keras.preprocessing.sequence.pad_sequences(train_data, \n value = word_index[\"<PAD>\"],\n padding='post',\n maxlen = 256)\n\ntest_data = keras.preprocessing.sequence.pad_sequences(test_data,\n value = word_index[\"<PAD>\"],\n padding = 'post',\n maxlen = 256)", "_____no_output_____" ], [ "print(len(train_data[0]),\" \",len(test_data[1]))", "256 256\n" ], [ "print(train_data[0])", "[ 1 14 22 16 43 530 973 1622 1385 65 458 4468 66 3941\n 4 173 36 256 5 25 100 43 838 112 50 670 2 9\n 35 480 284 5 150 4 172 112 167 2 336 385 39 4\n 172 4536 1111 17 546 38 13 447 4 192 50 16 6 147\n 2025 19 14 22 4 1920 4613 469 4 22 71 87 12 16\n 43 530 38 76 15 13 1247 4 22 17 515 17 12 16\n 626 18 2 5 62 386 12 8 316 8 106 5 4 2223\n 5244 16 480 66 3785 33 4 130 12 16 38 619 5 25\n 124 51 36 135 48 25 1415 33 6 22 12 215 28 77\n 52 5 14 407 16 82 2 8 4 107 117 5952 15 256\n 4 2 7 3766 5 723 36 71 43 530 476 26 400 317\n 46 7 4 2 1029 13 104 88 4 381 15 297 98 32\n 2071 56 26 141 6 194 7486 18 4 226 22 21 134 476\n 26 480 5 144 30 5535 18 51 36 28 224 92 25 104\n 4 226 65 16 38 1334 88 12 16 283 5 16 4472 113\n 103 32 15 16 5345 19 178 32 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0]\n" ] ], [ [ "### Building the model", "_____no_output_____" ] ], [ [ "# input shape is the vocabulary count used in the reviews i.e. word count = 10,000\n\nvocab_size = 10000\n\nmodel = keras.Sequential()\nmodel.add(keras.layers.Embedding(vocab_size, 16))\nmodel.add(keras.layers.GlobalAveragePooling1D())\nmodel.add(keras.layers.Dense(16, activation = tf.nn.relu))\nmodel.add(keras.layers.Dense(1, activation = tf.nn.sigmoid))\n\nmodel.summary()", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding_2 (Embedding) (None, None, 16) 160000 \n_________________________________________________________________\nglobal_average_pooling1d (Gl (None, 16) 0 \n_________________________________________________________________\ndense (Dense) (None, 16) 272 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 17 \n=================================================================\nTotal params: 160,289\nTrainable params: 160,289\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "### adding the loss function and optimizer\n\nmodel.compile(optimizer = 'adam',\n loss = 'binary_crossentropy',\n metrics = ['acc'])", "_____no_output_____" ], [ "### creating a validation data set to test the training accuracy\n\nx_val = train_data[:10000]\npartial_x_train = train_data[10000:]\n\ny_val = train_label[:10000]\npartial_y_train = train_label[10000:]", "_____no_output_____" ] ], [ [ "### Training the model", "_____no_output_____" ] ], [ [ "history = model.fit(partial_x_train,\n partial_y_train,\n epochs=40,\n batch_size=512,\n validation_data=(x_val, y_val),\n verbose=1)", "Train on 15000 samples, validate on 10000 samples\nWARNING:tensorflow:From /anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for 
updating:\nUse tf.cast instead.\nEpoch 1/40\n15000/15000 [==============================] - 4s 295us/sample - loss: 0.6917 - acc: 0.5573 - val_loss: 0.6894 - val_acc: 0.6509\nEpoch 2/40\n15000/15000 [==============================] - 4s 236us/sample - loss: 0.6849 - acc: 0.7289 - val_loss: 0.6798 - val_acc: 0.7407\nEpoch 3/40\n15000/15000 [==============================] - 4s 235us/sample - loss: 0.6700 - acc: 0.7575 - val_loss: 0.6614 - val_acc: 0.7565\nEpoch 4/40\n15000/15000 [==============================] - 4s 240us/sample - loss: 0.6449 - acc: 0.7678 - val_loss: 0.6330 - val_acc: 0.7589\nEpoch 5/40\n15000/15000 [==============================] - 4s 243us/sample - loss: 0.6088 - acc: 0.7948 - val_loss: 0.5958 - val_acc: 0.7875\nEpoch 6/40\n15000/15000 [==============================] - 4s 244us/sample - loss: 0.5642 - acc: 0.8169 - val_loss: 0.5535 - val_acc: 0.8072\nEpoch 7/40\n15000/15000 [==============================] - 4s 246us/sample - loss: 0.5155 - acc: 0.8320 - val_loss: 0.5084 - val_acc: 0.8241\nEpoch 8/40\n15000/15000 [==============================] - 4s 238us/sample - loss: 0.4678 - acc: 0.8499 - val_loss: 0.4676 - val_acc: 0.8366\nEpoch 9/40\n15000/15000 [==============================] - 4s 238us/sample - loss: 0.4243 - acc: 0.8648 - val_loss: 0.4317 - val_acc: 0.8472\nEpoch 10/40\n15000/15000 [==============================] - 4s 244us/sample - loss: 0.3864 - acc: 0.8762 - val_loss: 0.4025 - val_acc: 0.8541\nEpoch 11/40\n15000/15000 [==============================] - 4s 241us/sample - loss: 0.3547 - acc: 0.8846 - val_loss: 0.3787 - val_acc: 0.8608\nEpoch 12/40\n15000/15000 [==============================] - 4s 239us/sample - loss: 0.3281 - acc: 0.8915 - val_loss: 0.3604 - val_acc: 0.8640\nEpoch 13/40\n15000/15000 [==============================] - 4s 244us/sample - loss: 0.3063 - acc: 0.8977 - val_loss: 0.3444 - val_acc: 0.8695\nEpoch 14/40\n15000/15000 [==============================] - 4s 241us/sample - loss: 0.2866 - acc: 0.9029 - val_loss: 0.3326 - val_acc: 0.8730\nEpoch 15/40\n15000/15000 [==============================] - 4s 243us/sample - loss: 0.2701 - acc: 0.9074 - val_loss: 0.3230 - val_acc: 0.8752\nEpoch 16/40\n15000/15000 [==============================] - 4s 245us/sample - loss: 0.2553 - acc: 0.9130 - val_loss: 0.3149 - val_acc: 0.8759\nEpoch 17/40\n15000/15000 [==============================] - 4s 247us/sample - loss: 0.2415 - acc: 0.9169 - val_loss: 0.3082 - val_acc: 0.8785\nEpoch 18/40\n15000/15000 [==============================] - 4s 243us/sample - loss: 0.2294 - acc: 0.9220 - val_loss: 0.3026 - val_acc: 0.8813\nEpoch 19/40\n15000/15000 [==============================] - 4s 247us/sample - loss: 0.2183 - acc: 0.9244 - val_loss: 0.2977 - val_acc: 0.8817\nEpoch 20/40\n15000/15000 [==============================] - 4s 242us/sample - loss: 0.2084 - acc: 0.9283 - val_loss: 0.2945 - val_acc: 0.8819\nEpoch 21/40\n15000/15000 [==============================] - 4s 242us/sample - loss: 0.1983 - acc: 0.9331 - val_loss: 0.2916 - val_acc: 0.8824\nEpoch 22/40\n15000/15000 [==============================] - 4s 255us/sample - loss: 0.1898 - acc: 0.9365 - val_loss: 0.2893 - val_acc: 0.8849\nEpoch 23/40\n15000/15000 [==============================] - 4s 258us/sample - loss: 0.1812 - acc: 0.9408 - val_loss: 0.2885 - val_acc: 0.8843\nEpoch 24/40\n15000/15000 [==============================] - 4s 235us/sample - loss: 0.1738 - acc: 0.9443 - val_loss: 0.2870 - val_acc: 0.8846\nEpoch 25/40\n15000/15000 [==============================] - 4s 245us/sample - loss: 0.1662 - acc: 
0.9469 - val_loss: 0.2857 - val_acc: 0.8861\nEpoch 26/40\n15000/15000 [==============================] - 4s 265us/sample - loss: 0.1594 - acc: 0.9497 - val_loss: 0.2864 - val_acc: 0.8836\nEpoch 27/40\n15000/15000 [==============================] - 4s 245us/sample - loss: 0.1530 - acc: 0.9521 - val_loss: 0.2860 - val_acc: 0.8844\nEpoch 28/40\n15000/15000 [==============================] - 4s 252us/sample - loss: 0.1468 - acc: 0.9550 - val_loss: 0.2866 - val_acc: 0.8852\nEpoch 29/40\n15000/15000 [==============================] - 4s 240us/sample - loss: 0.1414 - acc: 0.9579 - val_loss: 0.2884 - val_acc: 0.8844\nEpoch 30/40\n15000/15000 [==============================] - 4s 240us/sample - loss: 0.1357 - acc: 0.9594 - val_loss: 0.2878 - val_acc: 0.8858\nEpoch 31/40\n15000/15000 [==============================] - 4s 243us/sample - loss: 0.1300 - acc: 0.9619 - val_loss: 0.2890 - val_acc: 0.8870\nEpoch 32/40\n15000/15000 [==============================] - 4s 249us/sample - loss: 0.1249 - acc: 0.9651 - val_loss: 0.2906 - val_acc: 0.8859\nEpoch 33/40\n15000/15000 [==============================] - 4s 245us/sample - loss: 0.1199 - acc: 0.9664 - val_loss: 0.2929 - val_acc: 0.8851\nEpoch 34/40\n15000/15000 [==============================] - 4s 246us/sample - loss: 0.1154 - acc: 0.9675 - val_loss: 0.2952 - val_acc: 0.8844\nEpoch 35/40\n15000/15000 [==============================] - 4s 243us/sample - loss: 0.1113 - acc: 0.9685 - val_loss: 0.2978 - val_acc: 0.8844\nEpoch 36/40\n15000/15000 [==============================] - 4s 246us/sample - loss: 0.1069 - acc: 0.9708 - val_loss: 0.2996 - val_acc: 0.8843\nEpoch 37/40\n15000/15000 [==============================] - 4s 244us/sample - loss: 0.1026 - acc: 0.9717 - val_loss: 0.3024 - val_acc: 0.8831\nEpoch 38/40\n15000/15000 [==============================] - 4s 255us/sample - loss: 0.0987 - acc: 0.9734 - val_loss: 0.3060 - val_acc: 0.8818\nEpoch 39/40\n15000/15000 [==============================] - 4s 247us/sample - loss: 0.0955 - acc: 0.9745 - val_loss: 0.3098 - val_acc: 0.8810\nEpoch 40/40\n15000/15000 [==============================] - 4s 238us/sample - loss: 0.0914 - acc: 0.9766 - val_loss: 0.3126 - val_acc: 0.8822\n" ] ], [ [ "### Evaluate the model", "_____no_output_____" ] ], [ [ "results = model.evaluate(test_data, test_label)\n\nprint(results)", "25000/25000 [==============================] - 1s 29us/sample - loss: 0.3339 - acc: 0.8714\n[0.33389624541282653, 0.8714]\n" ] ], [ [ "### Create a graph of accuracy over time", "_____no_output_____" ] ], [ [ "history_dict = history.history\n\nhistory_dict.keys()", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nacc = history_dict['acc']\nval_acc = history_dict['val_acc']\nloss = history_dict['loss']\nval_loss = history_dict['val_loss']\n\nepochs = range(1, len(acc) + 1)\n\n# \"bo\" is for \"blue dot\"\nplt.plot(epochs, loss, 'bo', label='Training loss')\n# b is for \"solid blue line\"\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\n\nplt.show()", "_____no_output_____" ], [ "plt.clf() # clear figure\n\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend()\n\nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e72850ef0022d87b547308b6a65ce4001d22c935
15,942
ipynb
Jupyter Notebook
training/auto_notebooks/rl/Reaching1DWithSelfDistraction-v0.ipynb
ZhongxuanWu/ngym_usage
b29123888f423c32bdf8627951e173dbebdda216
[ "MIT" ]
5
2021-03-01T23:10:32.000Z
2021-10-04T08:32:03.000Z
training/auto_notebooks/rl/Reaching1DWithSelfDistraction-v0.ipynb
ZhongxuanWu/ngym_usage
b29123888f423c32bdf8627951e173dbebdda216
[ "MIT" ]
1
2021-11-29T15:49:19.000Z
2021-11-29T15:49:19.000Z
training/auto_notebooks/rl/Reaching1DWithSelfDistraction-v0.ipynb
ZhongxuanWu/ngym_usage
b29123888f423c32bdf8627951e173dbebdda216
[ "MIT" ]
6
2020-11-19T02:10:48.000Z
2022-02-23T16:33:45.000Z
33.847134
224
0.512985
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e72857c26b7f1b3ea12ba39046cf7a94092a042f
20,114
ipynb
Jupyter Notebook
Lab_2/Practice_with_Visualizations.ipynb
ds-modules/SOC-88-SP21
e45e22702eb16a613266d00f7cf120d486c0862e
[ "BSD-3-Clause" ]
null
null
null
Lab_2/Practice_with_Visualizations.ipynb
ds-modules/SOC-88-SP21
e45e22702eb16a613266d00f7cf120d486c0862e
[ "BSD-3-Clause" ]
null
null
null
Lab_2/Practice_with_Visualizations.ipynb
ds-modules/SOC-88-SP21
e45e22702eb16a613266d00f7cf120d486c0862e
[ "BSD-3-Clause" ]
null
null
null
32.441935
574
0.622104
[ [ [ "# [SOC-88] Practice with Visualizations\n\n### Professor David Harding\n\n## Table of Contents\n\n[Introduction](#intro)\n\n[Using Tables with Bigger Data](#1)\n- [Question 1: Who is Represented?](#q1)\n- [Question 2: Differences by Race](#q2)\n- [Question 3: Minimum Sentence Lengths](#q3)\n- [Question 4: Sentence Lengths across Different Sentence Types](#q4)\n- [Bonus: Sentence Lengths across different races](#bonus)\n - [Bonus Challenge Question](#bonusq)", "_____no_output_____" ], [ "## Introduction <a id='intro'></a>\n\nWelcome to the notebook! \n\nIn this notebook, we'll be exploring data visualizations with data from one of Professor Harding's papers.\n\nWe will use the `datascience` library which is a library in Python that we can use to manage data, but this could be done with other libraries such as [pandas](https://pandas.pydata.org/). The datascience library holds data in structures called Tables.\n\nTo use the library, we have to import it. We'll also import numpy as np. For data visualization, we will be using the library [matplotlib](https://matplotlib.org/). We'll import this as plt for ease of use.\n", "_____no_output_____" ] ], [ [ "# Data library\nfrom datascience import *\n\nimport numpy as np\n\n# Plotting libraries\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "Great! Let's explore the different ways we can use Tables to create data visualizations.\n", "_____no_output_____" ], [ "## Using Tables with Bigger Data <a id='1'></a>\n\nLet's explore some ways to manage bigger data and break it down into something we can visualize.\n\nFirst, we need some big data. Here, we'll use some data from Professor Harding's own research about the [*Short- and long-term effects of imprisonment on future felony convictions and prison admissions*](https://www.ncbi.nlm.nih.gov/pubmed/28973924).\n\nThis dataset contains information about people who were sentenced in Michigan between 2003 and 2006. It contains information such as the judge id number, sentence date, sentencing type, age of the person sentenced and other important variables. You can see what each column corresponds to in the codebook below.\n\n#### <center>Codebook</center>\n\n|Column Name | Description |\n|:------------:|:---------|\n|SCRSIDs |Individual defendant ID|\n|r_judge_id |Judge ID|\n|r_sendate | Date of sentence|\n|alt_start_date_all | Date of first exposure to community (Sentence data for probationers or release date for those sentenced to prison or jail)|\n|cum_min_length_months | Minimum sentence imposed by the judge (in months) |\n|sent_type | Type of Sentence (Prison, Probation, Jail only, Jail followed by Probation)|\n|pre_employ23 |Proportion of the 23 calendar quarters prior to sentence in which the individual had any employment in the formal labor market |\n|pre_employ12 |Proportion of the 12 calendar quarters prior to sentence in which the individual had any employment in the formal labor market|\n|r_age | Age in years |\n|race_cat | Race in categories |\n\nTo load the data, we will use `Table().read_table(\"name of csv file\")` which reads a .csv file into Python.", "_____no_output_____" ] ], [ [ "sentencing = Table().read_table(\"harding_new.csv\")\nsentencing", "_____no_output_____" ] ], [ [ "### Question 1: Who is represented? <a id='q1'></a>\n\nCool, now we have data. Let's see if we can figure out the following questions:\n- Who is represented in this dataset? Groups or individuals?\n- What does each row of the table represent? 
And, how many people are represented in this dataset?\n\n\nWrite in the cell below any code that you would need to answer the questions above. ", "_____no_output_____" ], [ "**Write your comment here**", "_____no_output_____" ] ], [ [ "# write any code here", "_____no_output_____" ] ], [ [ "When starting to analyze your data, it is good to think about how many unique values exist in your dataset. The following code checks how many unique values there are in each column of your table. A way to understand what these numbers mean is if there are 140,267 unique values and there are 140,267 rows in our table, then that means every value in that column is unique.", "_____no_output_____" ] ], [ [ "for i in sentencing.labels:\n print('Number of unique values in column',i,\":\", len(np.unique(sentencing.column(i))))", "_____no_output_____" ] ], [ [ "For columns with low numbers of unique values, it is useful to try to group them and determine which plot is best to plot the differences between the groups. Let's try with `race_cat` first, and then for other variables.", "_____no_output_____" ], [ "To group by the `race_cat` column, we use the `table.group()` function. We will define this new table as `race_categories`.", "_____no_output_____" ] ], [ [ "race_categories = sentencing.group('race_cat')\nrace_categories", "_____no_output_____" ] ], [ [ "### Question 2: Differences by Race <a id='q2'></a>\n\nLet's represent this data to see the disparities between races. We have one categorical variable and one numerical variable. Here is an example plot:", "_____no_output_____" ] ], [ [ "race_categories.barh('race_cat', width=15)", "_____no_output_____" ] ], [ [ "Now, it is your turn! Improve this bar chart with the methods from the first notebook. Then, describe why you choose to add or not add certain features to the chart. Think about fonts, colors, titles, clear labeling, avoiding too much \"ink\". Be prepared to explain your design choices below!", "_____no_output_____" ] ], [ [ "# your code here", "_____no_output_____" ] ], [ [ "**Desribe the features you choose.** What does your plot argue? What is the main point? What is the general message of the plot?", "_____no_output_____" ], [ "*Type your answer here*", "_____no_output_____" ], [ "One thing to consider when grouping by race is that the data we have is for all individuals sentenced in Michigan between 2003 and 2006. This is the race breakdown for those who were sentenced, but it is important to also look at the race breakdown of the overall Michigan population. In order to do this, we will look at the [Michigan Census Data](https://www.census.gov/prod/2002pubs/c2kprof00-mi.pdf) from 2000. In the following table, we've included both the count and percentage for the race categories White, Black, and Other (includes people of multiple races).", "_____no_output_____" ] ], [ [ "races = ['Black', 'Other', 'White']\ncounts = [1412742, 559649, 7966053]\npercent = [14.2, 5.3, 80.5]\nrace_census = Table().with_columns('Race', races, 'Count', counts, 'Percent', percent)\nrace_census", "_____no_output_____" ] ], [ [ "The table `race_full` below contains both the sentencing counts and the census counts, along with the percentage of the population sentenced for each race. 
We will use this to show the differences in race for the individuals sentenced.", "_____no_output_____" ] ], [ [ "race_sentencing = race_categories.where('race_cat', are.not_equal_to('Race Unknown'))\nrace_full = race_sentencing.with_column('census_count', race_census['Count']).relabeled('count', 'sentencing_count')\nrace_full = race_full.with_column('sentenced_pct', race_full['sentencing_count']/race_full['census_count']*100)\nrace_full", "_____no_output_____" ], [ "race_full_sorted = race_full.sort('sentenced_pct', False)\nplt.barh(race_full_sorted['race_cat'], race_full_sorted['sentenced_pct'])\nplt.title('Distribution of Sentencing by Race')\nplt.xlabel('Percent of Population Sentenced')\nplt.ylabel('Race');", "_____no_output_____" ] ], [ [ "**Question:** What do you notice that is different about this plot than the plot you created with the sentencing counts? What does this tell us about context and population size when creating visualizations?", "_____no_output_____" ], [ "*Type your answer here*", "_____no_output_____" ], [ "### Question 3: Minimum Sentence Lengths <a id='q3'></a>", "_____no_output_____" ], [ "Now, let's investigate the distribution of `cum_min_length_months` across different sentence types. What type of graph should we use? Here is an example of a plot that looks at all `cum_min_length_months`. It is very skewed, because there are some outliers in our data. Recall that the y-axis represents the counts of people, not the percent per unit.", "_____no_output_____" ] ], [ [ "plt.hist(sentencing.column('cum_min_length_months'))\nplt.xlabel(\"Minimum Length (Months)\")\nplt.ylabel(\"Number of occurrences\");", "_____no_output_____" ] ], [ [ "**3a.** How could we find the outliers in our dataset? Let's define outliers as months where the number of occurrences are below 200. What is the largest minimum length that has over 200 occurrences? To start, we've grouped the occurrences of `cum_min_length_months`. Build off of the existing code to find the largest number of months that has a count over 200.", "_____no_output_____" ] ], [ [ "sentencing.group('cum_min_length_months')...", "_____no_output_____" ] ], [ [ "**3b.** We've plotted the original data with new bins to compare this distribution to one without outliers. Plot the same data below with bins widths of 20, but filter out minimum sentence lengths over 180 months. Make sure to contain the bin from 160 to 180. Hint: *Use the `bins` argument of histograms to display only part of the data.* ", "_____no_output_____" ] ], [ [ "# Do not change the code in this cell!\nplt.hist(sentencing.column('cum_min_length_months'), bins=np.arange(0, 1200,20))\nplt.xlabel(\"Minimum Length (Months)\")\nplt.ylabel(\"Number of occurrences\")\nplt.title('Distribution of Minimum Sentence Lengths');", "_____no_output_____" ], [ "# your code here", "_____no_output_____" ] ], [ [ "### Question 4: Sentence Lengths across Different Sentence Types <a id='q4'></a>", "_____no_output_____" ], [ "To compare the different types of sentences and their lengths, we will have to filter tables so that they only contain one sentencing type and then create a histogram of the lengths within that table. You will need to make 4 different tables for the different sentence types: 'Jail Only', 'Jail with Probation', 'Prison', and 'Probation'. We will ignore the other sentence category, since there are very few people represented.\n\nTo create these subtables, you will need to use `tbl.where(...)` from Data 8. 
If you get stuck on the syntax, please reference [here](http://data8.org/sp20/python-reference.html) for more on how to use the function. \n\n\n\n**4a.** We have provided the start to most of these plots, but they are missing the proper bins and labels. Please add to them! For the sake of comparison, ignore outliers (focus on months from 0 to 150), and make all of your plots have the same bins, so we can compare across all sentence types. Choose to represent counts or density in your plot, and explain your decision in the text box after the graphs.", "_____no_output_____" ] ], [ [ "jail_only = sentencing.where('sent_type', 'Jail Only')\nplt.hist(jail_only.column('cum_min_length_months'));\n# your code below", "_____no_output_____" ], [ "jail_prob = sentencing.where('sent_type', 'Jail with Probation')\nplt.hist(jail_prob.column('cum_min_length_months'));\n# your code below", "_____no_output_____" ], [ "prison = sentencing.where('sent_type', 'Prison')\nplt.hist(prison.column('cum_min_length_months'));\n# your code below", "_____no_output_____" ], [ "probation = sentencing.where('sent_type', 'Probation')\nplt.hist(probation.column('cum_min_length_months'));\n# your code below", "_____no_output_____" ] ], [ [ "**Desribe the features you choose below.** Think about the choices you made regarding fonts, colors, titles, clear labeling. Why did you choose what you did? How does this help your plot give a message? What is the type of message this plot gives?", "_____no_output_____" ], [ "*Type your answer here*", "_____no_output_____" ], [ "**4b.** What do we learn from these graphs? Which sentence type has the longest lengths? How do these plots compare with the cumulative plot?", "_____no_output_____" ], [ "*Type your answer here*", "_____no_output_____" ] ], [ [ "# write any code used to help answer the question here ", "_____no_output_____" ] ], [ [ "### [Bonus] Sentence Lengths across different Races <a id='bonus'></a>", "_____no_output_____" ], [ "Another thing we can compare is the length of sentences based on race. Here we've plotted the distribution of sentence lengths for white and black individuals.", "_____no_output_____" ] ], [ [ "white_sentence_lengths = sentencing.where('race_cat', 'White') \nblack_sentence_lengths = sentencing.where('race_cat', 'Black') \n\nfig = plt.figure(figsize=(15,8))\nax = fig.add_subplot(221)\nplt.hist(white_sentence_lengths.column('cum_min_length_months'), bins=np.arange(0,200,12), density=True)\nplt.ylim(0, 0.04)\nax.set(title='Length of Sentence for White Individuals', xlabel = 'Months', ylabel='Percent Per Unit',\n xticks = np.arange(0,200,12))\n\nax1 = fig.add_subplot(222)\nplt.hist(black_sentence_lengths.column('cum_min_length_months'), bins=np.arange(0,200,12), density=True)\nplt.ylim(0, 0.04)\nax1.set(title='Length of Sentence for Black Individuals', xlabel = 'Months', ylabel='Percent Per Unit',\n xticks = np.arange(0,200,12));", "_____no_output_____" ] ], [ [ "#### Bonus Challenge Question <a id='bonusq'></a>\n\nHow would you improve upon this data visualization of two side by side histograms comparing sentence length between races?", "_____no_output_____" ], [ "*Type your answer here*", "_____no_output_____" ], [ "One possible way to visualize the length of sentence for different races is with a superimposed kernel density estimate plot. Below, we've generated a KDE with seaborn. 
What do you notice?", "_____no_output_____" ] ], [ [ "import seaborn as sns\n\nsns.kdeplot(white_sentence_lengths.column('cum_min_length_months'), gridsize=1000, label = 'White')\nsns.kdeplot(black_sentence_lengths.column('cum_min_length_months'), gridsize=1000, label = 'Black')\nplt.xlim(0,150)\nplt.title('Length of Sentence by Race')\nplt.xlabel('Months')\nplt.ylabel('Percent')\nplt.xticks(np.arange(0,150,12));", "_____no_output_____" ] ], [ [ "### You made it to the end!\n\n---\n\nNotebook developed by: Ollie Downs, Ashley Quiterio, Keilyn Yuzuki\n\nData Science Modules: http://data.berkeley.edu/education/modules\n\nData Science Offerings at Berkeley: https://data.berkeley.edu/academics/undergraduate-programs/data-science-offerings\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
e7285bc50a7a454f6922400de811ba817fc9a3bc
8,710
ipynb
Jupyter Notebook
ya.learning/1c2w Nearest sentences.ipynb
aosorgin/ml-notebooks
e0fd7a64049210e2654f65997165d43e39f0bf3f
[ "BSD-2-Clause" ]
null
null
null
ya.learning/1c2w Nearest sentences.ipynb
aosorgin/ml-notebooks
e0fd7a64049210e2654f65997165d43e39f0bf3f
[ "BSD-2-Clause" ]
null
null
null
ya.learning/1c2w Nearest sentences.ipynb
aosorgin/ml-notebooks
e0fd7a64049210e2654f65997165d43e39f0bf3f
[ "BSD-2-Clause" ]
null
null
null
35.696721
520
0.59667
[ [ [ "# Finding the nearest sentences about *cat*", "_____no_output_____" ] ], [ [ "import numpy as np\nimport re\nfrom scipy.spatial import distance", "_____no_output_____" ] ], [ [ "### Reading sentences from file and place them to dict and list\nWe use the following structure to store words:\n\n```struct Word {\n Index int,\n Count int\n}\ndict{\\<word\\>: @Word}```", "_____no_output_____" ] ], [ [ "class Words:\n def __init__(self):\n self._words = dict()\n \n def add(self, word):\n if word in self._words:\n self._words[word]['count'] += 1\n else:\n self._words[word] = {'index': len(self._words), 'count': 1}\n \n def index(self, word):\n if word in self._words:\n return self._words[word]['index']\n return None\n \n def count(self, word):\n if word in self._words:\n return self._words[word]['count']\n return None\n \n def capacity(self):\n return len(self._words)\n \n def print_words(self):\n print self._words", "_____no_output_____" ] ], [ [ "Reading sentances from file", "_____no_output_____" ] ], [ [ "allWords = Words()\nlistOfSentenceFeatures = []\nwith open('data/1c2s_sentences.txt', 'r') as f:\n for sentence in f.readlines():\n sentenceWords = Words()\n words = np.array([x.lower() for x in re.split('[^A-Za-z]', sentence.strip())])\n words = words[words != '']\n for word in words:\n allWords.add(word)\n sentenceWords.add(word)\n sentenceFeatures = np.zeros(allWords.capacity())\n for word in words:\n sentenceFeatures[allWords.index(word)] = sentenceWords.count(word)\n listOfSentenceFeatures.append(sentenceFeatures)", "_____no_output_____" ] ], [ [ "Construct matrix of sentance features", "_____no_output_____" ] ], [ [ "features = np.zeros((len(listOfSentenceFeatures), len(listOfSentenceFeatures[-1])))\nfor enu, row in enumerate(listOfSentenceFeatures):\n features[enu, :len(row)] += row", "_____no_output_____" ] ], [ [ "### Calculating cosine distance for all sentences from the first one", "_____no_output_____" ] ], [ [ "distances = dict()\nfor num, row in enumerate(features[1:]):\n distances[distance.cosine(features[0], row)] = num + 1\nprint [(distances[key], key) for key in sorted(distances.keys())]", "[(6, 0.7327387580875756), (4, 0.7770887149698589), (21, 0.8250364469440588), (10, 0.8328165362273942), (12, 0.8396432548525454), (16, 0.8406361854220809), (20, 0.8427572744917122), (2, 0.8644738145642124), (13, 0.8703592552895671), (14, 0.8740118423302576), (11, 0.8804771390665607), (8, 0.8842724875284311), (19, 0.8885443574849294), (3, 0.8951715163278082), (9, 0.9055088817476932), (7, 0.9258750683338899), (5, 0.9402385695332803), (18, 0.9442721787424647), (1, 0.9527544408738466), (17, 0.956644501523794)]\n" ] ], [ [ "### Writing results to file", "_____no_output_____" ] ], [ [ "with open('data/1c2s_result1.txt', 'w') as f:\n f.write(' '.join((str(distances[key]) for key in sorted(distances.keys()))))", "_____no_output_____" ], [ "!cat data/1c2s_result1.txt", "6 4 21 10 12 16 20 2 13 14 11 8 19 3 9 7 5 18 1 17" ], [ "!cat data/1c2s_sentences.txt", "In comparison to dogs, cats have not undergone major changes during the domestication process.\r\nAs cat simply catenates streams of bytes, it can be also used to concatenate binary files, where it will just concatenate sequence of bytes.\r\nA common interactive use of cat for a single file is to output the content of a file to standard output.\r\nCats can hear sounds too faint or too high in frequency for human ears, such as those made by mice and other small animals.\r\nIn one, people deliberately tamed cats in a process of artificial 
selection, as they were useful predators of vermin.\r\nThe domesticated cat and its closest wild ancestor are both diploid organisms that possess 38 chromosomes and roughly 20,000 genes.\r\nDomestic cats are similar in size to the other members of the genus Felis, typically weighing between 4 and 5 kg (8.8 and 11.0 lb).\r\nHowever, if the output is piped or redirected, cat is unnecessary.\r\ncat with one named file is safer where human error is a concern - one wrong use of the default redirection symbol \">\" instead of \"<\" (often adjacent on keyboards) may permanently delete the file you were just needing to read.\r\nIn terms of legibility, a sequence of commands starting with cat and connected by pipes has a clear left-to-right flow of information.\r\nCat command is one of the basic commands that you learned when you started in the Unix / Linux world.\r\nUsing cat command, the lines received from stdin can be redirected to a new file using redirection symbols.\r\nWhen you type simply cat command without any arguments, it just receives the stdin content and displays it in the stdout.\r\nLeopard was released on October 26, 2007 as the successor of Tiger (version 10.4), and is available in two editions.\r\nAccording to Apple, Leopard contains over 300 changes and enhancements over its predecessor, Mac OS X Tiger.\r\nAs of Mid 2010, some Apple computers have firmware factory installed which will no longer allow installation of Mac OS X Leopard.\r\nSince Apple moved to using Intel processors in their computers, the OSx86 community has developed and now also allows Mac OS X Tiger and later releases to be installed on non-Apple x86-based computers.\r\nOS X Mountain Lion was released on July 25, 2012 for purchase and download through Apple's Mac App Store, as part of a switch to releasing OS X versions online and every year.\r\nApple has released a small patch for the three most recent versions of Safari running on OS X Yosemite, Mavericks, and Mountain Lion.\r\nThe Mountain Lion release marks the second time Apple has offered an incremental upgrade, rather than releasing a new cat entirely.\r\nMac OS X Mountain Lion installs in place, so you won't need to create a separate disk or run the installation off an external drive.\r\nThe fifth major update to Mac OS X, Leopard, contains such a mountain of features - more than 300 by Apple's count.\r\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e72861ff7c09d61ffb2b7e4c421b0aac54ff5f92
817,570
ipynb
Jupyter Notebook
notebook/12.3 Matplotlib02 Subplots.ipynb
sonnymetvn/Basic-Python-for-Meteorology
cf98ffe4f92e76b746c1de253c34ef50835bbf26
[ "MIT" ]
8
2021-09-22T00:39:31.000Z
2022-03-31T22:49:43.000Z
notebook/12.3 Matplotlib02 Subplots.ipynb
sonnymetvn/Basic-Python-for-Meteorology
cf98ffe4f92e76b746c1de253c34ef50835bbf26
[ "MIT" ]
null
null
null
notebook/12.3 Matplotlib02 Subplots.ipynb
sonnymetvn/Basic-Python-for-Meteorology
cf98ffe4f92e76b746c1de253c34ef50835bbf26
[ "MIT" ]
3
2021-12-29T11:13:33.000Z
2022-02-01T11:13:18.000Z
2,329.259259
468,316
0.960627
[ [ [ "from IPython.display import Image\nImage('mat02.png')", "_____no_output_____" ], [ "import matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nmpl.rcParams['font.family'] = 'serif'", "_____no_output_____" ], [ "x = np.linspace(-2*np.pi, 2*np.pi, 128)\ny1 = np.sin(x)\ny2 = np.cos(x)\ny3 = np.cos(2*x) + np.sin(2*x)\n\nncols, nrows = 3,1\n\nfig, ax = plt.subplots(nrows, ncols, figsize=(10, 4))\n\nax[0].plot(x, y1, color='r')\nax[0].axvline(x=0, color = 'k')\nax[0].axhline(y=0, color = 'k') \n\nax[1].plot(x, y2, color='b')\nax[1].axvline(x=0, color = 'k')\nax[1].axhline(y=0, color = 'k') \n\nax[2].plot(x, y3, color='m')\nax[2].axvline(x=0, color = 'k')\nax[2].axhline(y=0, color = 'k') ", "_____no_output_____" ], [ "x = np.linspace(-2*np.pi, 2*np.pi, 128)\ny1 = np.sin(x)\ny2 = np.cos(x)\ny3 = np.cos(2*x) + np.sin(2*x)\n\nncols, nrows = 3,1\n\nfig, ax = plt.subplots(nrows, ncols, figsize=(10, 4))\n\ncolor = ('r', 'b', 'm')\ny_ = (y1, y2, y3)\n\nfor i in range(ncols):\n ax[i].plot(x, y_[i], color=color[i])\n ax[i].axvline(x=0, color = 'k')\n ax[i].axhline(y=0, color = 'k') ", "_____no_output_____" ], [ "ncols, nrows = 3, 3\n\nfig, axes = plt.subplots(nrows, ncols, figsize=(10, 8))\n\nfor i in range(nrows):\n for j in range(ncols):\n axes[i, j].set_xticks([])\n axes[i, j].set_yticks([])\n axes[i, j].text(0.6, 0.5, \"axes[%d, %d]\" % (i, j),\n horizontalalignment='right')", "_____no_output_____" ], [ "fig = plt.figure()\n\ndef remove_ticklabels(ax):\n ax.set_yticklabels([])\n ax.set_xticklabels([])\n\nax0 = plt.subplot2grid((2, 3), (0, 0))\nax1 = plt.subplot2grid((2, 3), (0, 1))\nax2 = plt.subplot2grid((2, 3), (1, 0), colspan=2)\nax3 = plt.subplot2grid((2, 3), (0, 2), rowspan=2)\n\naxes = [ax0, ax1, ax2, ax3]\n[ax.text(0.5, 0.5, \"ax%d\" % i, horizontalalignment='center') for i, ax in enumerate(axes)]\n[remove_ticklabels(ax) for ax in axes]", "_____no_output_____" ], [ "fig = plt.figure()\n\ndef remove_ticklabels(ax):\n ax.set_yticklabels([])\n ax.set_xticklabels([])\n\nax0 = plt.subplot2grid((3, 3), (0, 0))\nax1 = plt.subplot2grid((3, 3), (0, 2))\nax2 = plt.subplot2grid((3, 3), (1, 0), colspan=2)\nax3 = plt.subplot2grid((3, 3), (2, 1))\nax4 = plt.subplot2grid((3, 3), (1, 2), rowspan=2)\n\naxes = [ax0, ax1, ax2, ax3, ax4]\n[ax.text(0.5, 0.5, \"ax%d\" % i, horizontalalignment='center') for i, ax in enumerate(axes)]\n[remove_ticklabels(ax) for ax in axes]", "_____no_output_____" ], [ "fig = plt.figure(figsize=(4, 4))\n\nso = mpl.gridspec.GridSpec(2, 2,\n width_ratios=[4, 1],\n height_ratios=[1, 4],\n wspace=0.1, hspace=0.5\n )\n\nax0 = fig.add_subplot(so[1, 0])\nax1 = fig.add_subplot(so[0, 0])\nax2 = fig.add_subplot(so[1, 1])\nax3 = fig.add_subplot(so[0, 1])\n\ndef remove_ticklabels(ax):\n ax.set_yticklabels([])\n ax.set_xticklabels([])\n\naxes = [ax0, ax1, ax2]\n[ax.text(0.5, 0.5, \"ax%d\" % i, horizontalalignment='center') for i, ax in enumerate(axes)]\n[remove_ticklabels(ax) for ax in axes]", "_____no_output_____" ], [ "from IPython.display import Image\nImage('mat_022.png')", "_____no_output_____" ] ], [ [ "## All done !!!\n- Please feel free to let me know if there is any questions\n- Please subscribe my youtube channel too\n- Thank you very much", "_____no_output_____" ] ] ]
[ "code", "markdown" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7286349921c2bcd768f4f457b06c81f3bf8f467
150,500
ipynb
Jupyter Notebook
Ch12_Optimization_Algorithms/Mini-batch_Stochastic_Gradient_Descent.ipynb
jasson31/d2l-pytorch
8881d23d9a5b2c8da1840bd61bf7103d34900375
[ "Apache-2.0" ]
null
null
null
Ch12_Optimization_Algorithms/Mini-batch_Stochastic_Gradient_Descent.ipynb
jasson31/d2l-pytorch
8881d23d9a5b2c8da1840bd61bf7103d34900375
[ "Apache-2.0" ]
null
null
null
Ch12_Optimization_Algorithms/Mini-batch_Stochastic_Gradient_Descent.ipynb
jasson31/d2l-pytorch
8881d23d9a5b2c8da1840bd61bf7103d34900375
[ "Apache-2.0" ]
null
null
null
47.551343
2,257
0.513608
[ [ [ "# Mini-batch Stochastic Gradient Descent", "_____no_output_____" ], [ "In each iteration, the gradient descent uses the entire training data set to compute the gradient, so it is sometimes referred to as batch gradient descent. Stochastic gradient descent (SGD) only randomly select one example in each iteration to compute the gradient. Just like in the previous chapters, we can perform random uniform sampling for each iteration to form a mini-batch and then use this mini-batch to compute the gradient. Now, we are going to discuss mini-batch stochastic gradient descent.\n\nSet objective function $f(\\boldsymbol{x}): \\mathbb{R}^d \\rightarrow \\mathbb{R}$. The time step before the start of iteration is set to 0. The independent variable of this time step is $\\boldsymbol{x}_0\\in \\mathbb{R}^d$ and is usually obtained by random initialization. In each subsequent time step $t>0$, mini-batch SGD uses random uniform sampling to get a mini-batch $\\mathcal{B}_t$ made of example indices from the training data set. We can use sampling with replacement or sampling without replacement to get a mini-batch example. The former method allows duplicate examples in the same mini-batch, the latter does not and is more commonly used. We can use either of the two methods\n\n$$\\boldsymbol{g}t \\leftarrow \\nabla f{\\mathcal{B}t}(\\boldsymbol{x}{t-1}) = \\frac{1}{|\\mathcal{B}|} \\sum_{i \\in \\mathcal{B}t}\\nabla f_i(\\boldsymbol{x}{t-1})$$\n\nto compute the gradient $\\boldsymbol{g}t$ of the objective function at $\\boldsymbol{x}{t-1}$ with mini-batch $\\mathcal{B}_t$ at time step $t$. Here, $|\\mathcal{B}|$ is the size of the batch, which is the number of examples in the mini-batch. This is a hyper-parameter. Just like the stochastic gradient, the mini-batch SGD $\\boldsymbol{g}t$ obtained by sampling with replacement is also the unbiased estimate of the gradient $\\nabla f(\\boldsymbol{x}{t-1})$. Given the learning rate $\\eta_t$ (positive), the iteration of the mini-batch SGD on the independent variable is as follows:\n\n$$\\boldsymbol{x}t \\leftarrow \\boldsymbol{x}{t-1} - \\eta_t \\boldsymbol{g}_t.$$\n\nThe variance of the gradient based on random sampling cannot be reduced during the iterative process, so in practice, the learning rate of the (mini-batch) SGD can self-decay during the iteration, such as $\\eta_t=\\eta t^\\alpha$ (usually $\\alpha=-1$ or $-0.5$), $\\eta_t = \\eta \\alpha^t$ (e.g $\\alpha=0.95$), or learning rate decay once per iteration or after several iterations. As a result, the variance of the learning rate and the (mini-batch) SGD will decrease. Gradient descent always uses the true gradient of the objective function during the iteration, without the need to self-decay the learning rate.\n\nThe cost for computing each iteration is $\\mathcal{O}(|\\mathcal{B}|)$. When the batch size is 1, the algorithm is an SGD; when the batch size equals the example size of the training data, the algorithm is a gradient descent. When the batch size is small, fewer examples are used in each iteration, which will result in parallel processing and reduce the RAM usage efficiency. This makes it more time consuming to compute examples of the same size than using larger batches. When the batch size increases, each mini-batch gradient may contain more redundant information. 
To get a better solution, we need to compute more examples for a larger batch size, such as increasing the number of epochs.", "_____no_output_____" ], [ "## Reading Data", "_____no_output_____" ], [ "In this chapter, we will use a data set developed by NASA to test the wing noise from different aircraft to compare these optimization algorithms. We will use the first 1500 examples of the data set, 5 features, and a normalization method to preprocess the data.", "_____no_output_____" ] ], [ [ "import sys\nsys.path.insert(0, '..')\n%matplotlib inline\nimport d2l\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport time", "_____no_output_____" ], [ "def get_data():\n    data = np.genfromtxt('../data/airfoil_self_noise.dat', delimiter='\\t')\n    data = (data - data.mean(axis=0)) / data.std(axis=0)\n    return torch.Tensor(data[:1500, :-1]), torch.Tensor(data[:1500, -1])\n\nfeatures, labels = get_data()\nfeatures.shape", "_____no_output_____" ] ], [ [ "## Implementation from Scratch", "_____no_output_____" ], [ "We have already implemented the mini-batch SGD algorithm in the linear regression from scratch chapter. We have made its input parameters more generic here, so that we can conveniently use the same input for the other optimization algorithms introduced later in this chapter. Specifically, we add the status input states and place the hyper-parameters in the dictionary hyperparams. In addition, we will average the loss of each mini-batch example in the training function, so the gradient in the optimization algorithm does not need to be divided by the batch size.", "_____no_output_____" ] ], [ [ "def sgd(params, states, hyperparams, batch_size):\n    for p in params:\n        print('p:', p.grad)\n        p.data.sub_(hyperparams['lr'] * p.grad)\n        p.grad.data.zero_()", "_____no_output_____" ] ], [ [ "Next, we are going to implement a generic training function to facilitate the use of the other optimization algorithms introduced later in this chapter. It initializes a linear regression model and can then be used to train the model with the mini-batch SGD and other algorithms introduced in subsequent sections.", "_____no_output_____" ] ], [ [ "def train(trainer_fn, states, hyperparams, features, labels, batch_size=10, num_epochs=2):\n    # Initialize model parameters\n    net, loss = d2l.linreg, d2l.squared_loss\n    w = torch.ones(()).new_empty((features.shape[1], 1), requires_grad=True)\n    b = torch.zeros((1,), requires_grad=True)\n    l = torch.zeros((1500, 1), requires_grad=True)\n    def eval_loss():\n        return loss(net(features, w, b), labels).mean()\n    ls, ts = [eval_loss()], [0,]\n    data_iter = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(features, labels), batch_size, shuffle=True)\n    start = time.time()\n    for _ in range(num_epochs):\n        for batch_i, (X, y) in enumerate(data_iter):\n            l = loss(torch.mm(X, w) + b, y).mean()\n            l.backward()\n            trainer_fn([w, b], states, hyperparams, batch_size)\n            if (batch_i + 1) * batch_size % 10 == 0:\n                ts.append(time.time() - start + ts[-1])\n                ls.append(eval_loss())\n                start = time.time()\n    print('loss: %f, %f sec per epoch' % (ls[-1], ts[-1]/num_epochs))\n    d2l.set_figsize()\n    d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)\n    d2l.plt.xlabel('epoch')\n    d2l.plt.ylabel('loss')\n    return ts, ls", "_____no_output_____" ] ], [ [ "When the batch size equals 1500 (the total number of examples), we use gradient descent for optimization. The model parameters will be iterated only once for each epoch of the gradient descent. 
As we can see, the downward trend of the value of the objective function (training loss) flattened out after 6 iterations.", "_____no_output_____" ] ], [ [ "def train_sgd(lr, batch_size, num_epochs=2):\n    return train(sgd, None, {'lr': lr}, features, labels, batch_size, num_epochs)\ngd_res = train_sgd(1, 1500, 4)\n", "p: None\n" ] ], [ [ "When the batch size equals 1, we use SGD for optimization. In order to simplify the implementation, we did not self-decay the learning rate. Instead, we simply used a small constant for the learning rate in the (mini-batch) SGD experiment. In SGD, the independent variable (model parameter) is updated whenever an example is processed. Thus it is updated 1500 times in one epoch. As we can see, the decline in the value of the objective function slows down after one epoch.\n\nAlthough both procedures processed 1500 examples within one epoch, SGD consumes more time than gradient descent in our experiment. This is because SGD performed more iterations on the independent variable within one epoch, and it is harder for single-example gradient computation to use parallel computing effectively.", "_____no_output_____" ] ], [ [ "sgd_res = train_sgd(0.005, 1)", "p: None\n" ] ], [ [ "When the batch size equals 100, we use mini-batch SGD for optimization. The time required for one epoch is between the time needed for gradient descent and SGD to complete the same epoch.", "_____no_output_____" ] ], [ [ "mini1_res = train_sgd(.4, 100)", "loss: 0.246469, 0.086600 sec per epoch\n" ] ], [ [ "When we reduce the batch size to 10, the time for each epoch increases because the workload for each batch is less efficient to execute.", "_____no_output_____" ] ], [ [ "mini2_res = train_sgd(.05, 10)", "loss: 0.243249, 0.310116 sec per epoch\n" ] ], [ [ "Finally, we compare the time versus loss for the previous four experiments. As can be seen, although SGD converges faster than GD in terms of the number of examples processed, it uses more time to reach the same loss than GD because computing the gradient example by example is not efficient. Mini-batch SGD is able to trade off convergence speed and computation efficiency. Here, a batch size of 10 improves on SGD, and a batch size of 100 even outperforms GD.", "_____no_output_____" ] ], [ [ "d2l.set_figsize([6, 3])\nfor res in [gd_res, sgd_res, mini1_res, mini2_res]:\n    d2l.plt.plot(res[0], res[1])\n    d2l.plt.xlabel('time (sec)')\n    d2l.plt.ylabel('loss')\n    d2l.plt.xscale('log')\n    d2l.plt.xlim([1e-3, 1])\n    d2l.plt.legend(['gd', 'sgd', 'batch size=100', 'batch size=10']);", "_____no_output_____" ] ], [ [ "## Summary\n* Mini-batch stochastic gradient descent uses random uniform sampling to get a mini-batch of training examples for gradient computation.\n* In practice, learning rates of the (mini-batch) SGD can self-decay during iteration.\n* In general, the time consumption per epoch for mini-batch stochastic gradient descent is between what it takes for gradient descent and SGD to complete the same epoch.\n\n## Exercises\n* Modify the batch size and learning rate and observe the rate of decline for the value of the objective function and the time consumed in each epoch.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7286834fffceb584b027f379e7a6c1b5ef41531
6,152
ipynb
Jupyter Notebook
docs_src/utils.ipython.ipynb
mogwai/fastai
df38e530bd395a53c2c9b89b8950684ba3a2270e
[ "Apache-2.0" ]
11
2019-08-06T11:44:24.000Z
2022-03-12T20:04:56.000Z
docs_src/utils.ipython.ipynb
mogwai/fastai
df38e530bd395a53c2c9b89b8950684ba3a2270e
[ "Apache-2.0" ]
4
2019-11-22T20:48:34.000Z
2022-02-26T06:04:09.000Z
docs_src/utils.ipython.ipynb
mogwai/fastai
df38e530bd395a53c2c9b89b8950684ba3a2270e
[ "Apache-2.0" ]
9
2018-11-03T10:56:17.000Z
2020-10-19T20:44:33.000Z
34.561798
537
0.608095
[ [ [ "## IPython Utilities", "_____no_output_____" ], [ "Utilities to help work with ipython/jupyter environment.", "_____no_output_____" ], [ "To import from [`fastai.utils.ipython`](/utils.ipython.html#utils.ipython) do:", "_____no_output_____" ] ], [ [ "from fastai.gen_doc.nbdoc import *", "_____no_output_____" ], [ "from fastai.utils.ipython import * ", "_____no_output_____" ] ], [ [ "## Workarounds to the leaky ipython traceback on exception\n\nipython has a feature where it stores tb with all the `locals()` tied in, which\nprevents `gc.collect()` from freeing those variables and leading to a leakage.\n\nTherefore we cleanse the tb before handing it over to ipython. The 2 ways of doing it are by either using the [`gpu_mem_restore`](/utils.ipython.html#gpu_mem_restore) decorator or the [`gpu_mem_restore_ctx`](/utils.ipython.html#gpu_mem_restore_ctx) context manager which are described next:", "_____no_output_____" ] ], [ [ "show_doc(gpu_mem_restore)", "_____no_output_____" ] ], [ [ "[`gpu_mem_restore`](/utils.ipython.html#gpu_mem_restore) is a decorator to be used with any functions that interact with CUDA (top-level is fine)\n\n* under non-ipython environment it doesn't do anything.\n* under ipython currently it strips tb by default only for the \"CUDA out of memory\" exception.\n\nThe env var `FASTAI_TB_CLEAR_FRAMES` changes this behavior when run under ipython,\ndepending on its value: \n\n* \"0\": never strip tb (makes it possible to always use `%debug` magic, but with leaks)\n* \"1\": always strip tb (never need to worry about leaks, but `%debug` won't work)\n\ne.g. `os.environ['FASTAI_TB_CLEAR_FRAMES']=\"0\"` will set it to 0.\n", "_____no_output_____" ] ], [ [ "show_doc(gpu_mem_restore_ctx, title_level=4)", "_____no_output_____" ] ], [ [ "if function decorator is not a good option, you can use a context manager instead. For example:\n```\nwith gpu_mem_restore_ctx():\n learn.fit_one_cycle(1,1e-2)\n```\nThis particular one will clear tb on any exception.", "_____no_output_____" ] ], [ [ "from fastai.gen_doc.nbdoc import *\nfrom fastai.utils.ipython import * ", "_____no_output_____" ] ], [ [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7286d08987d450d17bd1a47667dd3510c84a177
103,003
ipynb
Jupyter Notebook
22/22.ipynb
iimog/aoc2021
17274e74b74e2765142a65d55f6b3212a49318ac
[ "Unlicense" ]
null
null
null
22/22.ipynb
iimog/aoc2021
17274e74b74e2765142a65d55f6b3212a49318ac
[ "Unlicense" ]
null
null
null
22/22.ipynb
iimog/aoc2021
17274e74b74e2765142a65d55f6b3212a49318ac
[ "Unlicense" ]
1
2021-12-11T16:50:26.000Z
2021-12-11T16:50:26.000Z
76.242043
15,973
0.722367
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7287532c3cec7019b0a2fd5a42ce2b3a0cde4fc
6,124
ipynb
Jupyter Notebook
team7_proposal (1).ipynb
CSCI4850/s21-team7-project
c3652a117efacd166f2074b228f2f4c303131e4e
[ "MIT" ]
null
null
null
team7_proposal (1).ipynb
CSCI4850/s21-team7-project
c3652a117efacd166f2074b228f2f4c303131e4e
[ "MIT" ]
null
null
null
team7_proposal (1).ipynb
CSCI4850/s21-team7-project
c3652a117efacd166f2074b228f2f4c303131e4e
[ "MIT" ]
2
2021-04-12T20:48:06.000Z
2021-05-05T01:46:22.000Z
40.289474
145
0.674722
[ [ [ "--Team 7: LSTM--\nGrayson Cordell\nJacob Swindell\nJesse Gailbreath\nNoah Norrod", "_____no_output_____" ], [ "Neural networks generally rely on extensive datasets to be most effective.\nOne hurdle that many data scientists face is a lack of data to work with.\nWhat if you want to classify something that has a limited dataset available?\nOver the past 2 decades there have been numerous studies done related to this topic,\nespecially those involving automatic speech recognition (ASR).\nVarying methods have been applied in previous research to improve performance of networks working with limited datasets.\n", "_____no_output_____" ], [ "Python has numerous libraries available now to aid in making all kinds of alterations to audio datasets just for this purpose. \nBy making small changes to the audio itself (augmenting the data), you can create a larger dataset.\nThis will hopefully lead to training a neural network that has an increased capacity to generalize the data.\nIn doing so, you should get a network that performs better.", "_____no_output_____" ], [ "The basic concept of this project is to determine the viability of certain data augmentation techniques to bolster the performance of a\nneural network used to analyze small datasets.\nWe intend to use datasets comprised of audio files.\nThe data will be augmented over two primary categories, pitch and time.\nNew datasets will be derived from an existing set.\nEach new dataset will have specific measurable changes to the pitch and/or the time length of each sample.\nAn increase in the size of the datasets will allow us to create larger training models.\nThis is turn should yield a more robust and accurate neural network.", "_____no_output_____" ], [ "Testing will be performed with one or more datasets readily available for analysis.\nThe spoken_digit (availble within TensorFlow) catalog contains a total 2,500 audio samples.\nThis will be the primary dataset used throughout the project.\nCompared to many other freely available datasets, this would be considered small.\nThis project will track and record the results from the network, and be able to compare\nthe results against the base case using the original dataset through the same network.\nIf we are able to identify measurable improvement, it would be interesting to see if this model\ncould effectively be applied to other audio datasets that have a different makeup.", "_____no_output_____" ], [ "We will be implementing a neural network using a CNN LSTM. 
\nThe input will be fed to a CNN model, then an LSTM model.\nFrom there it will go through dense layers before output.\nAs of now, the plan is to use the librosa library in Python to work with the audio samples.\nThis library offers a wide variety of manipulation capabilities that align well with the goals\nof this project.", "_____no_output_____" ], [ "The base case will be established by using the original dataset at a 70/30 split for training/validation.\nWe will also use a 90/10 split for comparison against potential overfitting.\nWe will work to create at least 5 new datasets for overall testing in the network.\n1) Augmented time, set size increased by 25%\n2) Augmented pitch, set size increased by 25%\n3) Augmented time, set size increased by 50%\n4) Augmented pitch, set size increased by 50%\n5) Combination set using augmented time and pitch, set size increased by 50%", "_____no_output_____" ], [ "Some of the key aspects that will be tracked and reported on:\n1) The number of augmentations performed on individual samples used to build new datasets.\n2) Variances in results between newly derived datasets versus the base case.\n3) Comparison of results of the newly derived datasets of unique sizes versus one another.\n4) Whether one type of augmentation drives greater improvement over the other.\n5) Whether augmented, original, or mixed datasets produce a higher quality network.", "_____no_output_____" ], [ "This project may draw further inspiration from previous research done on the subject (if so, it will\nappropriately document the research).", "_____no_output_____" ], [ "The expected outcome is that by expanding the dataset, we will of course be able to improve the quality of the network and its accuracy.\nWe will be looking to see if there are any evident differences in network performance based on the various alterations to the data.\nAt this time, we are unsure which augmentation type, or combination of types, will be most effective.", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e72875e8d3163ce0ba32e1d6ab4ef2b16ea0053d
3,144
ipynb
Jupyter Notebook
0.15/_downloads/plot_brainstorm_data.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
0.15/_downloads/plot_brainstorm_data.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
0.15/_downloads/plot_brainstorm_data.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
58.222222
1,640
0.628181
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\n# Brainstorm tutorial datasets\n\n\nHere we compute the evoked from raw for the Brainstorm\ntutorial dataset. For comparison, see [1]_ and:\n\n http://neuroimage.usc.edu/brainstorm/Tutorials/MedianNerveCtf\n\nReferences\n----------\n.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.\n Brainstorm: A User-Friendly Application for MEG/EEG Analysis.\n Computational Intelligence and Neuroscience, vol. 2011, Article ID\n 879716, 13 pages, 2011. doi:10.1155/2011/879716\n\n", "_____no_output_____" ] ], [ [ "# Authors: Mainak Jas <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne.datasets.brainstorm import bst_raw\n\nprint(__doc__)\n\ntmin, tmax, event_id = -0.1, 0.3, 2 # take right-hand somato\nreject = dict(mag=4e-12, eog=250e-6)\n\ndata_path = bst_raw.data_path()\n\nraw_fname = data_path + '/MEG/bst_raw/' + \\\n 'subj001_somatosensory_20111109_01_AUX-f_raw.fif'\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.plot()\n\n# set EOG channel\nraw.set_channel_types({'EEG058': 'eog'})\nraw.set_eeg_reference('average', projection=True)\n\n# show power line interference and remove it\nraw.plot_psd(tmax=60., average=False)\nraw.notch_filter(np.arange(60, 181, 60), fir_design='firwin')\n\nevents = mne.find_events(raw, stim_channel='UPPT001')\n\n# pick MEG channels\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,\n exclude='bads')\n\n# Compute epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject, preload=False)\n\n# compute evoked\nevoked = epochs.average()\n\n# remove physiological artifacts (eyeblinks, heartbeats) using SSP on baseline\nevoked.add_proj(mne.compute_proj_evoked(evoked.copy().crop(tmax=0)))\nevoked.apply_proj()\n\n# fix stim artifact\nmne.preprocessing.fix_stim_artifact(evoked)\n\n# correct delays due to hardware (stim artifact is at 4 ms)\nevoked.shift_time(-0.004)\n\n# plot the result\nevoked.plot()\n\n# show topomaps\nevoked.plot_topomap(times=np.array([0.016, 0.030, 0.060, 0.070]))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ] ]
e7287ba378df376d92eeb3afaedb54e58790d1fa
37,994
ipynb
Jupyter Notebook
notebooks/ScalarVector.ipynb
gbiomech/BMC
fec9413b17a54f00ba6818438f7a50b132353e42
[ "CC-BY-4.0" ]
1
2022-01-07T22:30:39.000Z
2022-01-07T22:30:39.000Z
notebooks/ScalarVector.ipynb
gbiomech/BMC
fec9413b17a54f00ba6818438f7a50b132353e42
[ "CC-BY-4.0" ]
null
null
null
notebooks/ScalarVector.ipynb
gbiomech/BMC
fec9413b17a54f00ba6818438f7a50b132353e42
[ "CC-BY-4.0" ]
null
null
null
33.892953
502
0.514687
[ [ [ "# Scalar and vector\n\n> Marcos Duarte \n> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) \n> Federal University of ABC, Brazil", "_____no_output_____" ], [ "Python handles very well all mathematical operations with numeric scalars and vectors and you can use [Sympy](http://sympy.org) for similar stuff but with abstract symbols. Let's briefly review scalars and vectors and show how to use Python for numerical calculation. \n\nFor a review about scalars and vectors, see chapter 2 of [Ruina and Rudra's book](http://ruina.tam.cornell.edu/Book/index.html).", "_____no_output_____" ], [ "## Scalar\n\n>A **scalar** is a one-dimensional physical quantity, which can be described by a single real number. \nFor example, time, mass, and energy are examples of scalars.\n\n### Scalar operations in Python\n\nSimple arithmetic operations with scalars are indeed simple:", "_____no_output_____" ] ], [ [ "import math\n\na = 2\nb = 3\nprint('a =', a, ', b =', b)\nprint('a + b =', a + b)\nprint('a - b =', a - b)\nprint('a * b =', a * b)\nprint('a / b =', a / b)\nprint('a ** b =', a ** b)\nprint('sqrt(b) =', math.sqrt(b))", "a = 2 , b = 3\na + b = 5\na - b = -1\na * b = 6\na / b = 0.6666666666666666\na ** b = 8\nsqrt(b) = 1.7320508075688772\n" ] ], [ [ "If you have a set of numbers, or an array, it is probably better to use Numpy; it will be faster for large data sets, and combined with Scipy, has many more mathematical funcions.", "_____no_output_____" ] ], [ [ "import numpy as np\n\na = 2\nb = [3, 4, 5, 6, 7, 8]\nb = np.array(b)\nprint('a =', a, ', b =', b)\nprint('a + b =', a + b)\nprint('a - b =', a - b)\nprint('a * b =', a * b)\nprint('a / b =', a / b)\nprint('a ** b =', a ** b)\nprint('np.sqrt(b) =', np.sqrt(b)) # use numpy functions for numpy arrays", "a = 2 , b = [3 4 5 6 7 8]\na + b = [ 5 6 7 8 9 10]\na - b = [-1 -2 -3 -4 -5 -6]\na * b = [ 6 8 10 12 14 16]\na / b = [ 0.66666667 0.5 0.4 0.33333333 0.28571429 0.25 ]\na ** b = [ 8 16 32 64 128 256]\nnp.sqrt(b) = [ 1.73205081 2. 2.23606798 2.44948974 2.64575131 2.82842712]\n" ] ], [ [ "Numpy performs the arithmetic operations of the single number in `a` with all the numbers of the array `b`. This is called broadcasting in computer science. \nEven if you have two arrays (but they must have the same size), Numpy handles for you:", "_____no_output_____" ] ], [ [ "a = np.array([1, 2, 3])\nb = np.array([4, 5, 6])\nprint('a =', a, ', b =', b)\nprint('a + b =', a + b)\nprint('a - b =', a - b)\nprint('a * b =', a * b)\nprint('a / b =', a / b)\nprint('a ** b =', a ** b)", "a = [1 2 3] , b = [4 5 6]\na + b = [5 7 9]\na - b = [-3 -3 -3]\na * b = [ 4 10 18]\na / b = [ 0.25 0.4 0.5 ]\na ** b = [ 1 32 729]\n" ] ], [ [ "## Vector\n\n>A **vector** is a quantity with magnitude (or length) and direction expressed numerically as an ordered list of values according to a coordinate reference system. \nFor example, position, force, and torque are physical quantities defined by vectors.\n\nFor instance, consider the position of a point in space represented by a vector: \n<br>\n<figure><img src=\"./../images/vector3D.png\" width=300/><figcaption><center><i>Figure. 
Position of a point represented by a vector in a Cartesian coordinate system.</i></center></figcaption></figure> \n\nThe position of the point (the vector) above can be represented as a tuple of values:\n\n$$ (x,\\: y,\\: z) \\; \\Rightarrow \\; (1, 3, 2) $$ \n\nor in matrix form:\n\n$$ \\begin{bmatrix} x \\\\y \\\\z \\end{bmatrix} \\;\\; \\Rightarrow \\;\\; \\begin{bmatrix} 1 \\\\3 \\\\2 \\end{bmatrix}$$\n\nWe can use the Numpy array to represent the components of vectors. \nFor instance, for the vector above is expressed in Python as:", "_____no_output_____" ] ], [ [ "a = np.array([1, 3, 2])\nprint('a =', a)", "a = [1 3 2]\n" ] ], [ [ "Exactly like the arrays in the last example for scalars, so all operations we performed will result in the same values, of course. \nHowever, as we are now dealing with vectors, now some of the operations don't make sense. For example, for vectors there are no multiplication, division, power, and square root in the way we calculated.\n\nA vector can also be represented as:\n\n$$ \\overrightarrow{\\mathbf{a}} = a_x\\hat{\\mathbf{i}} + a_y\\hat{\\mathbf{j}} + a_z\\hat{\\mathbf{k}} $$ \n<br>\n<figure><img src=\"./../images/vector3Dijk.png\" width=300/><figcaption><center><i>Figure. A vector representation in a Cartesian coordinate system. The versors $\\hat{\\mathbf{i}},\\, \\hat{\\mathbf{j}},\\, \\hat{\\mathbf{k}}\\,$ are usually represented in the color sequence <b>rgb</b> (red, green, blue) for easier visualization.</i></center></figcaption></figure>\n\nWhere $\\hat{\\mathbf{i}},\\, \\hat{\\mathbf{j}},\\, \\hat{\\mathbf{k}}\\,$ are unit vectors, each representing a direction and $ a_x\\hat{\\mathbf{i}},\\: a_y\\hat{\\mathbf{j}},\\: a_z\\hat{\\mathbf{k}} $ are the vector components of the vector $\\overrightarrow{\\mathbf{a}}$. \nA unit vector (or versor) is a vector whose length (or norm) is 1. \nThe unit vector of a non-zero vector $\\overrightarrow{\\mathbf{a}}$ is the unit vector codirectional with $\\overrightarrow{\\mathbf{a}}$:\n\n$$ \\mathbf{\\hat{u}} = \\frac{\\overrightarrow{\\mathbf{a}}}{||\\overrightarrow{\\mathbf{a}}||} = \\frac{a_x\\,\\hat{\\mathbf{i}} + a_y\\,\\hat{\\mathbf{j}} + a_z\\, \\hat{\\mathbf{k}}}{\\sqrt{a_x^2+a_y^2+a_z^2}} $$", "_____no_output_____" ], [ "### Magnitude (length or norm) of a vector\n\nThe magnitude (length) of a vector is often represented by the symbol $||\\;||$, also known as the norm (or Euclidean norm) of a vector and it is defined as:\n\n$$ ||\\overrightarrow{\\mathbf{a}}|| = \\sqrt{a_x^2+a_y^2+a_z^2} $$\n\nThe function `numpy.linalg.norm` calculates the norm:", "_____no_output_____" ] ], [ [ "a = np.array([1, 2, 3])\nnp.linalg.norm(a)", "_____no_output_____" ] ], [ [ "Or we can use the definition and compute directly:", "_____no_output_____" ] ], [ [ "np.sqrt(np.sum(a*a))", "_____no_output_____" ] ], [ [ "Then, the versor for the vector $ \\overrightarrow{\\mathbf{a}} = (1, 2, 3) $ is:", "_____no_output_____" ] ], [ [ "a = np.array([1, 2, 3])\nu = a/np.linalg.norm(a)\nprint('u =', u)", "u = [ 0.26726124 0.53452248 0.80178373]\n" ] ], [ [ "And we can verify its magnitude is indeed 1:", "_____no_output_____" ] ], [ [ "np.linalg.norm(u)", "_____no_output_____" ] ], [ [ "But the representation of a vector as a tuple of values is only valid for a vector with its origin coinciding with the origin $ (0, 0, 0) $ of the coordinate system we adopted.\nFor instance, consider the following vector: \n<br>\n<figure><img src=\"./../images/vector2.png\" width=260/><figcaption><center><i>Figure. 
A vector in space.</i></center></figcaption></figure>\n\nSuch a vector cannot be represented by $ (b_x, b_y, b_z) $ because this would be for the vector from the origin to the point B. To represent exactly this vector we need the two vectors $ \mathbf{a} $ and $ \mathbf{b} $. This fact is important when we perform some calculations in Mechanics.", "_____no_output_____" ], [ "### Vector addition and subtraction", "_____no_output_____" ], [ "The addition of two vectors is another vector:\n\n$$ \overrightarrow{\mathbf{a}} + \overrightarrow{\mathbf{b}} = (a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}) + (b_x\hat{\mathbf{i}} + b_y\hat{\mathbf{j}} + b_z\hat{\mathbf{k}}) = \n(a_x+b_x)\hat{\mathbf{i}} + (a_y+b_y)\hat{\mathbf{j}} + (a_z+b_z)\hat{\mathbf{k}} $$", "_____no_output_____" ], [ "<figure><img src=\"http://upload.wikimedia.org/wikipedia/commons/2/28/Vector_addition.svg\" width=300 alt=\"Vector addition\"/><figcaption><center><i>Figure. Vector addition (image from Wikipedia).</i></center></figcaption></figure> ", "_____no_output_____" ], [ "The subtraction of two vectors is also another vector:\n\n$$ \overrightarrow{\mathbf{a}} - \overrightarrow{\mathbf{b}} = (a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}) - (b_x\hat{\mathbf{i}} + b_y\hat{\mathbf{j}} + b_z\hat{\mathbf{k}}) = \n(a_x-b_x)\hat{\mathbf{i}} + (a_y-b_y)\hat{\mathbf{j}} + (a_z-b_z)\hat{\mathbf{k}} $$", "_____no_output_____" ], [ "<figure><img src=\"http://upload.wikimedia.org/wikipedia/commons/2/24/Vector_subtraction.svg\" width=160 alt=\"Vector subtraction\"/><figcaption><center><i>Figure. Vector subtraction (image from Wikipedia).</i></center></figcaption></figure> ", "_____no_output_____" ], [ "Consider two 2D arrays (rows and columns) representing the position of two objects moving in space. The columns represent the vector components and the rows the values of the position vector at different instants.  \nOnce again, it's easy to perform addition and subtraction with these vectors:", "_____no_output_____" ] ], [ [ "a = np.array([[1, 2, 3], [1, 1, 1]])\nb = np.array([[4, 5, 6], [7, 8, 9]])\nprint('a =', a, '\\nb =', b)\nprint('a + b =', a + b)\nprint('a - b =', a - b)", "a = [[1 2 3]\n [1 1 1]] \nb = [[4 5 6]\n [7 8 9]]\na + b = [[ 5  7  9]\n [ 8  9 10]]\na - b = [[-3 -3 -3]\n [-6 -7 -8]]\n" ] ], [ [ "Numpy can handle an N-dimensional array with the size limited by the available memory in your computer.\n\nAnd we can perform operations on each vector, for example, calculate the norm of each one.  \nFirst let's check the shape of the variable `a` using the method `shape` or the function `numpy.shape`:", "_____no_output_____" ] ], [ [ "print(a.shape)\nprint(np.shape(a))", "(2, 3)\n(2, 3)\n" ] ], [ [ "This means the variable `a` has 2 rows and 3 columns.  \nWe have to tell the function `numpy.norm` to calculate the norm for each vector, i.e., to operate through the columns of the variable `a` using the parameter `axis`:", "_____no_output_____" ] ], [ [ "np.linalg.norm(a, axis=1)", "_____no_output_____" ] ], [ [ "## Dot product\n\nThe dot product (or scalar product or inner product) between two vectors is a mathematical operation algebraically defined as the sum of the products of the corresponding components (magnitudes in each direction) of the two vectors. The result of the dot product is a single number (a scalar). 
\nThe dot product between vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is:\n\n$$ \overrightarrow{\mathbf{a}} \cdot \overrightarrow{\mathbf{b}} = (a_x\,\hat{\mathbf{i}}+a_y\,\hat{\mathbf{j}}+a_z\,\hat{\mathbf{k}}) \cdot (b_x\,\hat{\mathbf{i}}+b_y\,\hat{\mathbf{j}}+b_z\,\hat{\mathbf{k}}) = a_x b_x + a_y b_y + a_z b_z $$\n\nBecause by definition:\n\n$$ \hat{\mathbf{i}} \cdot \hat{\mathbf{i}} = \hat{\mathbf{j}} \cdot \hat{\mathbf{j}} = \hat{\mathbf{k}} \cdot \hat{\mathbf{k}}= 1 \quad \text{and} \quad \hat{\mathbf{i}} \cdot \hat{\mathbf{j}} = \hat{\mathbf{i}} \cdot \hat{\mathbf{k}} = \hat{\mathbf{j}} \cdot \hat{\mathbf{k}} = 0 $$\n\nThe geometric equivalent of the dot product is the product of the magnitudes of the two vectors and the cosine of the angle between them:\n\n$$ \overrightarrow{\mathbf{a}} \cdot \overrightarrow{\mathbf{b}} = ||\overrightarrow{\mathbf{a}}||\:||\overrightarrow{\mathbf{b}}||\:cos(\theta) $$\n\nWhich is also equivalent to stating that the dot product between two vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is the magnitude of $\overrightarrow{\mathbf{a}}$ times the magnitude of the component of $\overrightarrow{\mathbf{b}}$ parallel to $\overrightarrow{\mathbf{a}}$ (or the magnitude of $\overrightarrow{\mathbf{b}}$ times the magnitude of the component of $\overrightarrow{\mathbf{a}}$ parallel to $\overrightarrow{\mathbf{b}}$).\n\nThe dot product between two vectors can be visualized in this interactive animation:", "_____no_output_____" ] ], [ [ "from IPython.display import IFrame\nIFrame('https://faraday.physics.utoronto.ca/PVB/Harrison/Flash/Vectors/DotProduct/DotProduct.html',\n       width='100%', height=400)", "_____no_output_____" ] ], [ [ "The Numpy function for the dot product is `numpy.dot`:", "_____no_output_____" ] ], [ [ "a = np.array([1, 2, 3])\nb = np.array([4, 5, 6])\nprint('a =', a, '\\nb =', b)\nprint('np.dot(a, b) =', np.dot(a, b))", "a = [1 2 3] \nb = [4 5 6]\nnp.dot(a, b) = 32\n" ] ], [ [ "Or we can use the definition and compute directly:", "_____no_output_____" ] ], [ [ "np.sum(a*b)", "_____no_output_____" ] ], [ [ "For 2D arrays, the `numpy.dot` function performs matrix multiplication rather than the dot product; so let's use the `numpy.sum` function:", "_____no_output_____" ] ], [ [ "a = np.array([[1, 2, 3], [1, 1, 1]])\nb = np.array([[4, 5, 6], [7, 8, 9]])\nnp.sum(a*b, axis=1)", "_____no_output_____" ] ], [ [ "## Vector product\n\nThe cross product (or vector product) between two vectors is a mathematical operation in three-dimensional space which results in a vector perpendicular to both of the vectors being multiplied, with a length (norm) equal to the product of the magnitude of one vector and the perpendicular component of the other (which is equal to the area of the parallelogram that the vectors span). 
\nThe cross product between vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is:\n\n$$ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} = (a_x\,\hat{\mathbf{i}} + a_y\,\hat{\mathbf{j}} + a_z\,\hat{\mathbf{k}}) \times (b_x\,\hat{\mathbf{i}}+b_y\,\hat{\mathbf{j}}+b_z\,\hat{\mathbf{k}}) = (a_yb_z-a_zb_y)\hat{\mathbf{i}} + (a_zb_x-a_xb_z)\hat{\mathbf{j}}+(a_xb_y-a_yb_x)\hat{\mathbf{k}} $$\n\nBecause by definition:\n\n$$ \begin{array}{l l}\n\hat{\mathbf{i}} \times \hat{\mathbf{i}} = \hat{\mathbf{j}} \times \hat{\mathbf{j}} = \hat{\mathbf{k}} \times \hat{\mathbf{k}} = 0 \\\n\hat{\mathbf{i}} \times \hat{\mathbf{j}} = \hat{\mathbf{k}}, \quad \hat{\mathbf{j}} \times \hat{\mathbf{k}} = \hat{\mathbf{i}}, \quad \hat{\mathbf{k}} \times \hat{\mathbf{i}} = \hat{\mathbf{j}} \\\n\hat{\mathbf{j}} \times \hat{\mathbf{i}} = -\hat{\mathbf{k}}, \quad \hat{\mathbf{k}} \times \hat{\mathbf{j}}= -\hat{\mathbf{i}}, \quad \hat{\mathbf{i}} \times \hat{\mathbf{k}} = -\hat{\mathbf{j}}\n\end{array} $$\n\nThe direction of the vector resulting from the cross product between the vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is given by the right-hand rule.\n\nThe magnitude of the cross product is the product of the magnitudes of the two vectors and the sine of the angle between them:\n\n$$ ||\overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}}|| = ||\overrightarrow{\mathbf{a}}||\:||\overrightarrow{\mathbf{b}}||\:sin(\theta) $$\n\nWhich is also equivalent to stating that the magnitude of the cross product between two vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is the magnitude of $\overrightarrow{\mathbf{a}}$ times the magnitude of the component of $\overrightarrow{\mathbf{b}}$ perpendicular to $\overrightarrow{\mathbf{a}}$ (or the magnitude of $\overrightarrow{\mathbf{b}}$ times the magnitude of the component of $\overrightarrow{\mathbf{a}}$ perpendicular to $\overrightarrow{\mathbf{b}}$).\n\nThe definition above also implies that the magnitude of the cross product is the area of the parallelogram spanned by the two vectors: \n<br>\n<figure><img src=\"http://upload.wikimedia.org/wikipedia/commons/4/4e/Cross_product_parallelogram.svg\" width=160 alt=\"Cross product parallelogram\"/><figcaption><center><i>Figure. 
Area of a parallelogram as the magnitude of the cross product (image from Wikipedia).</i></center></figcaption></figure> \n\nThe cross product can also be calculated as the determinant of a matrix:\n\n$$ \\overrightarrow{\\mathbf{a}} \\times \\overrightarrow{\\mathbf{b}} = \\left| \\begin{array}{ccc}\n\\hat{\\mathbf{i}} & \\hat{\\mathbf{j}} & \\hat{\\mathbf{k}} \\\\\na_x & a_y & a_z \\\\\nb_x & b_y & b_z \n\\end{array} \\right|\n= a_y b_z \\hat{\\mathbf{i}} + a_z b_x \\hat{\\mathbf{j}} + a_x b_y \\hat{\\mathbf{k}} - a_y b_x \\hat{\\mathbf{k}}-a_z b_y \\hat{\\mathbf{i}} - a_x b_z \\hat{\\mathbf{j}} \\\\\n\\overrightarrow{\\mathbf{a}} \\times \\overrightarrow{\\mathbf{b}} = (a_yb_z-a_zb_y)\\hat{\\mathbf{i}} + (a_zb_x-a_xb_z)\\hat{\\mathbf{j}} + (a_xb_y-a_yb_x)\\hat{\\mathbf{k}} $$\n\nThe same result as before.\n\nThe cross product between two vectors can be visualized in this interactive animation:", "_____no_output_____" ] ], [ [ "IFrame('https://faraday.physics.utoronto.ca/PVB/Harrison/Flash/Vectors/CrossProduct/CrossProduct.html',\n width='100%', height=400)", "_____no_output_____" ] ], [ [ "The Numpy function for the cross product is `numpy.cross`:", "_____no_output_____" ] ], [ [ "print('a =', a, '\\nb =', b)\nprint('np.cross(a, b) =', np.cross(a, b))", "a = [[1 2 3]\n [1 1 1]] \nb = [[4 5 6]\n [7 8 9]]\nnp.cross(a, b) = [[-3 6 -3]\n [ 1 -2 1]]\n" ] ], [ [ "For 2D arrays with vectors in different rows:", "_____no_output_____" ] ], [ [ "a = np.array([[1, 2, 3], [1, 1, 1]])\nb = np.array([[4, 5, 6], [7, 8, 9]])\nnp.cross(a, b, axis=1)", "_____no_output_____" ] ], [ [ "### Gram–Schmidt process\n\nThe [Gram–Schmidt process](http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) is a method for orthonormalizing (orthogonal unit versors) a set of vectors using the scalar product. The Gram–Schmidt process works for any number of vectors. 
\nFor example, given three vectors, $\overrightarrow{\mathbf{a}}, \overrightarrow{\mathbf{b}}, \overrightarrow{\mathbf{c}}$, in the 3D space, a basis $\{\hat{e}_a, \hat{e}_b, \hat{e}_c\}$ can be found using the Gram–Schmidt process by: \n\nThe first versor is in the $\overrightarrow{\mathbf{a}}$ direction (or in the direction of any of the other vectors):  \n\n$$ \hat{e}_a = \frac{\overrightarrow{\mathbf{a}}}{||\overrightarrow{\mathbf{a}}||} $$\n\nThe second versor, orthogonal to $\hat{e}_a$, can be found considering we can express vector $\overrightarrow{\mathbf{b}}$ in terms of the $\hat{e}_a$ direction as:  \n\n$$ \overrightarrow{\mathbf{b}} = \overrightarrow{\mathbf{b}}^\| + \overrightarrow{\mathbf{b}}^\bot $$\n\nThen:\n\n$$ \overrightarrow{\mathbf{b}}^\bot = \overrightarrow{\mathbf{b}} - \overrightarrow{\mathbf{b}}^\| = \overrightarrow{\mathbf{b}} - (\overrightarrow{\mathbf{b}} \cdot \hat{e}_a ) \hat{e}_a $$\n\nFinally:\n\n$$ \hat{e}_b = \frac{\overrightarrow{\mathbf{b}}^\bot}{||\overrightarrow{\mathbf{b}}^\bot||} $$\n\nThe third versor, orthogonal to $\{\hat{e}_a, \hat{e}_b\}$, can be found expressing the vector $\overrightarrow{\mathbf{c}}$ in terms of $\hat{e}_a$ and $\hat{e}_b$ directions as:\n\n$$ \overrightarrow{\mathbf{c}} = \overrightarrow{\mathbf{c}}^\| + \overrightarrow{\mathbf{c}}^\bot $$\n\nThen:\n\n$$ \overrightarrow{\mathbf{c}}^\bot = \overrightarrow{\mathbf{c}} - \overrightarrow{\mathbf{c}}^\| $$\n\nWhere:\n\n$$ \overrightarrow{\mathbf{c}}^\| = (\overrightarrow{\mathbf{c}} \cdot \hat{e}_a ) \hat{e}_a + (\overrightarrow{\mathbf{c}} \cdot \hat{e}_b ) \hat{e}_b $$\n\nFinally:\n\n$$ \hat{e}_c = \frac{\overrightarrow{\mathbf{c}}^\bot}{||\overrightarrow{\mathbf{c}}^\bot||} $$", "_____no_output_____" ], [ "Let's implement the Gram–Schmidt process in Python.\n\nFor example, consider the positions (vectors) $\overrightarrow{\mathbf{a}} = [1,2,0], \overrightarrow{\mathbf{b}} = [0,1,3], \overrightarrow{\mathbf{c}} = [1,0,1]$:", "_____no_output_____" ] ], [ [ "import numpy as np\n\na = np.array([1, 2, 0])\nb = np.array([0, 1, 3])\nc = np.array([1, 0, 1])", "_____no_output_____" ] ], [ [ "The first versor is:", "_____no_output_____" ] ], [ [ "ea = a/np.linalg.norm(a)\nprint(ea)", "[ 0.4472136   0.89442719  0.        ]\n" ] ], [ [ "The second versor is:", "_____no_output_____" ] ], [ [ "eb = b - np.dot(b, ea)*ea\neb = eb/np.linalg.norm(eb)\nprint(eb)", "[-0.13187609  0.06593805  0.98907071]\n" ] ], [ [ "And the third versor is:", "_____no_output_____" ] ], [ [ "ec = c - np.dot(c, ea)*ea - np.dot(c, eb)*eb\nec = ec/np.linalg.norm(ec)\nprint(ec)", "[ 0.88465174 -0.44232587  0.14744196]\n" ] ], [ [ "Let's check the orthonormality between these versors:", "_____no_output_____" ] ], [ [ "print('Versors:', '\\nea =', ea, '\\neb =', eb, '\\nec =', ec)\nprint('\\nTest of orthogonality (scalar product between versors):',\n      '\\nea x eb:', np.dot(ea, eb),\n      '\\neb x ec:', np.dot(eb, ec),\n      '\\nec x ea:', np.dot(ec, ea))\nprint('\\nNorm of each versor:',\n      '\\n||ea|| =', np.linalg.norm(ea),\n      '\\n||eb|| =', np.linalg.norm(eb),\n      '\\n||ec|| =', np.linalg.norm(ec))", "Versors: \nea = [ 0.4472136   0.89442719  0. 
] \neb = [-0.13187609  0.06593805  0.98907071] \nec = [ 0.88465174 -0.44232587  0.14744196]\n\nTest of orthogonality (scalar product between versors): \nea x eb: 2.08166817117e-17 \neb x ec: -2.77555756156e-17 \nec x ea: 5.55111512313e-17\n\nNorm of each versor: \n||ea|| = 1.0 \n||eb|| = 1.0 \n||ec|| = 1.0\n" ] ], [ [ "Or, we can simply use the built-in QR factorization function from NumPy:", "_____no_output_____" ] ], [ [ "vectors = np.vstack((a,b,c)).T\nQ, R = np.linalg.qr(vectors)\nprint(Q)", "[[-0.4472136   0.13187609  0.88465174]\n [-0.89442719 -0.06593805 -0.44232587]\n [-0.         -0.98907071  0.14744196]]\n" ], [ "ea, eb, ec = Q[:, 0], Q[:, 1], Q[:, 2]\nprint('Versors:', '\\nea =', ea, '\\neb =', eb, '\\nec =', ec)\n\nprint('\\nTest of orthogonality (scalar product between versors):')\nprint(np.dot(Q.T, Q))\n    \nprint('\\nTest of orthogonality (scalar product between versors):',\n      '\\nea x eb:', np.dot(ea, eb),\n      '\\neb x ec:', np.dot(eb, ec),\n      '\\nec x ea:', np.dot(ec, ea))\n\nprint('\\nNorm of each versor:',\n      '\\n||ea|| =', np.linalg.norm(ea),\n      '\\n||eb|| =', np.linalg.norm(eb),\n      '\\n||ec|| =', np.linalg.norm(ec))", "Versors: \nea = [-0.4472136  -0.89442719 -0.        ] \neb = [ 0.13187609 -0.06593805 -0.98907071] \nec = [ 0.88465174 -0.44232587  0.14744196]\n\nTest of orthogonality (scalar product between versors):\n[[  1.00000000e+00   4.05428641e-17   1.77198775e-16]\n [  4.05428641e-17   1.00000000e+00  -3.05471126e-16]\n [  1.77198775e-16  -3.05471126e-16   1.00000000e+00]]\n\nTest of orthogonality (scalar product between versors): \nea x eb: 4.16333634234e-17 \neb x ec: -3.1918911958e-16 \nec x ea: 1.66533453694e-16\n\nNorm of each versor: \n||ea|| = 1.0 \n||eb|| = 1.0 \n||ec|| = 1.0\n" ] ], [ [ "Which results in the same basis, with the exception of the changed signs.", "_____no_output_____" ], [ "## Problems\n\n1. Given the vectors, $\overrightarrow{\mathbf{a}}=[1, 0, 0]$ and $\overrightarrow{\mathbf{b}}=[1, 1, 1]$, calculate the dot and cross products between them.  \n\n2. Calculate the unit vectors for $[2, −2, 3]$ and $[3, −3, 2]$ and determine an orthogonal vector to these two vectors.  \n\n3. Given the vectors $\overrightarrow{\mathbf{a}}$=[1, 0, 0] and $\overrightarrow{\mathbf{b}}$=[1, 1, 1], calculate $ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} $ and verify that this vector is orthogonal to vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$. Also, calculate $\overrightarrow{\mathbf{b}} \times \overrightarrow{\mathbf{a}}$ and compare it with $\overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}}$.  \n\n4. Given the vectors $[1, 1, 0]; [1, 0, 1]; [0, 1, 1]$, calculate a basis using the Gram–Schmidt process.\n\n5. Write a Python function to calculate a basis using the Gram–Schmidt process (implement the algorithm!) considering that the inputs are three variables where each one contains the coordinates of the vectors as columns and different positions of these vectors as rows. For example, sample variables can be generated with the command `np.random.randn(5, 3)`.  \n\n6. Study the sample problems **2.1** to **2.9**, **2.11** (using Python), **2.12**, **2.14**, **2.17**, **2.18** to **2.24** of Ruina and Rudra's book.\n\n7. From Ruina and Rudra's book, solve the problems **2.1.1** to **2.3.16**. 
\n\nIf you are new to scalars and vectors, you should solve these problems first by hand and then use Python to check the answers.", "_____no_output_____" ], [ "## References\n\n- Ruina A, Rudra P (2015) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
e72883a0159c8f12f9d702b9eddae19f8a841fc4
193,380
ipynb
Jupyter Notebook
1-2.ipynb
minijr/eat_tensorflow2_in_30_days
46a6c3bd7d2d70b5d9b61a1dd035100010ac0fe3
[ "Apache-2.0" ]
null
null
null
1-2.ipynb
minijr/eat_tensorflow2_in_30_days
46a6c3bd7d2d70b5d9b61a1dd035100010ac0fe3
[ "Apache-2.0" ]
null
null
null
1-2.ipynb
minijr/eat_tensorflow2_in_30_days
46a6c3bd7d2d70b5d9b61a1dd035100010ac0fe3
[ "Apache-2.0" ]
null
null
null
56.560398
5,716
0.587569
[ [ [ "import tensorflow as tf \nfrom tensorflow.keras import datasets,layers,models", "_____no_output_____" ], [ "print(tf.__version__)", "2.1.0\n" ], [ "BATCH_SIZE = 100\n\ndef load_image(img_path,size = (32,32)):\n # 由于Windows路径格式问题,需要修改代码才能正确分配标签:\n # https://github.com/lyhue1991/eat_tensorflow2_in_30_days/issues/19\n label = tf.constant(1,tf.int8) if tf.strings.regex_full_match(img_path,\".*automobile.*\") \\\n else tf.constant(0,tf.int8)\n img = tf.io.read_file(img_path)\n img = tf.image.decode_jpeg(img) #注意此处为jpeg格式\n img = tf.image.resize(img,size)/255.0\n return(img,label)", "_____no_output_____" ], [ "#使用并行化预处理num_parallel_calls 和预存数据prefetch来提升性能\nds_train = tf.data.Dataset.list_files(\"./data/cifar2/train/*/*.jpg\") \\\n .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE) \\\n .shuffle(buffer_size = 1000).batch(BATCH_SIZE) \\\n .prefetch(tf.data.experimental.AUTOTUNE)\n\nds_test = tf.data.Dataset.list_files(\"./data/cifar2/test/*/*.jpg\") \\\n .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE) \\\n .batch(BATCH_SIZE) \\\n .prefetch(tf.data.experimental.AUTOTUNE)", "_____no_output_____" ], [ "%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\n\n#查看部分样本\nfrom matplotlib import pyplot as plt \n\nplt.figure(figsize=(8,8)) \nfor i,(img,label) in enumerate(ds_train.unbatch().take(9)):\n ax=plt.subplot(3,3,i+1)\n ax.imshow(img.numpy())\n ax.set_title(\"label = %d\"%label)\n ax.set_xticks([])\n ax.set_yticks([]) \nplt.show()", "_____no_output_____" ], [ "for x,y in ds_train.take(1):\n print(x.shape,y.shape)", "(100, 32, 32, 3) (100,)\n" ], [ "tf.keras.backend.clear_session() #清空会话\n\ninputs = layers.Input(shape=(32,32,3))\nx = layers.Conv2D(32,kernel_size=(3,3))(inputs)\nx = layers.MaxPool2D()(x)\nx = layers.Conv2D(64,kernel_size=(5,5))(x)\nx = layers.MaxPool2D()(x)\nx = layers.Dropout(rate=0.1)(x)\nx = layers.Flatten()(x)\nx = layers.Dense(32,activation='relu')(x)\noutputs = layers.Dense(1,activation = 'sigmoid')(x)\n\nmodel = models.Model(inputs = inputs,outputs = outputs)\n\nmodel.summary()", "Model: \"model\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, 32, 32, 3)] 0 \n_________________________________________________________________\nconv2d (Conv2D) (None, 30, 30, 32) 896 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 11, 11, 64) 51264 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 5, 5, 64) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 1600) 0 \n_________________________________________________________________\ndense (Dense) (None, 32) 51232 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 33 \n=================================================================\nTotal params: 103,425\nTrainable params: 103,425\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "import datetime\nimport os\n\n# logdir = \"./data/keras_model/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\nlogdir = os.path.join('data', 
'keras_model', datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\n# tf2在windows中存在bug,解决方法是将路径使用os.path.join包装\n# 详见:https://www.zhihu.com/question/324633187\n\ntensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)\n\nmodel.compile(\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),\n loss=tf.keras.losses.binary_crossentropy,\n metrics=[\"accuracy\"]\n )\n\nhistory = model.fit(ds_train,epochs= 10,validation_data=ds_test,\n callbacks = [tensorboard_callback],workers = 4)", "Train for 100 steps, validate for 20 steps\nEpoch 1/10\n100/100 [==============================] - 27s 266ms/step - loss: 0.4261 - accuracy: 0.7975 - val_loss: 0.3433 - val_accuracy: 0.8440\nEpoch 2/10\n100/100 [==============================] - 21s 208ms/step - loss: 0.3310 - accuracy: 0.8573 - val_loss: 0.2607 - val_accuracy: 0.8895\nEpoch 3/10\n100/100 [==============================] - 21s 211ms/step - loss: 0.2613 - accuracy: 0.8889 - val_loss: 0.2172 - val_accuracy: 0.9120\nEpoch 4/10\n100/100 [==============================] - 23s 232ms/step - loss: 0.2202 - accuracy: 0.9128 - val_loss: 0.1933 - val_accuracy: 0.9255\nEpoch 5/10\n100/100 [==============================] - 22s 223ms/step - loss: 0.1957 - accuracy: 0.9217 - val_loss: 0.1921 - val_accuracy: 0.9280\nEpoch 6/10\n100/100 [==============================] - 21s 214ms/step - loss: 0.1664 - accuracy: 0.9330 - val_loss: 0.1663 - val_accuracy: 0.9350\nEpoch 7/10\n100/100 [==============================] - 21s 210ms/step - loss: 0.1530 - accuracy: 0.9399 - val_loss: 0.1657 - val_accuracy: 0.9285\nEpoch 8/10\n100/100 [==============================] - 21s 205ms/step - loss: 0.1277 - accuracy: 0.9513 - val_loss: 0.1610 - val_accuracy: 0.9335\nEpoch 9/10\n100/100 [==============================] - 23s 225ms/step - loss: 0.1161 - accuracy: 0.9556 - val_loss: 0.1593 - val_accuracy: 0.9395\nEpoch 10/10\n100/100 [==============================] - 24s 244ms/step - loss: 0.1001 - accuracy: 0.9616 - val_loss: 0.1600 - val_accuracy: 0.9375\n" ], [ "logdir", "_____no_output_____" ], [ "from tensorboard import notebook\nnotebook.list()", "Known TensorBoard instances:\n - port 6006: logdir ./data/keras_model (started 2:42:26 ago; pid 19728)\n" ], [ "#在tensorboard中查看模型\n\n# 需要 pip install jupyter-tensorboard\nnotebook.start(\"--logdir ./data/keras_model\")", "_____no_output_____" ], [ "import pandas as pd \ndfhistory = pd.DataFrame(history.history)\ndfhistory.index = range(1,len(dfhistory) + 1)\ndfhistory.index.name = 'epoch'\n\ndfhistory", "_____no_output_____" ], [ "%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\n\nimport matplotlib.pyplot as plt\n\ndef plot_metric(history, metric):\n train_metrics = history.history[metric]\n val_metrics = history.history['val_'+metric]\n epochs = range(1, len(train_metrics) + 1)\n plt.plot(epochs, train_metrics, 'bo--')\n plt.plot(epochs, val_metrics, 'ro-')\n plt.title('Training and validation '+ metric)\n plt.xlabel(\"Epochs\")\n plt.ylabel(metric)\n plt.legend([\"train_\"+metric, 'val_'+metric])\n plt.show()", "_____no_output_____" ], [ "plot_metric(history,\"loss\")", "_____no_output_____" ], [ "plot_metric(history,\"accuracy\")", "_____no_output_____" ], [ "#可以使用evaluate对数据进行评估\nval_loss,val_accuracy = model.evaluate(ds_test,workers=4)\nprint(val_loss,val_accuracy)", "20/20 [==============================] - 2s 118ms/step - loss: 0.1600 - accuracy: 0.9375\n0.15999254249036313 0.9375\n" ], [ "model.predict(ds_test)", "_____no_output_____" ], [ "for x,y in ds_test.take(1):\n 
print(model.predict_on_batch(x[0:20]))", "tf.Tensor(\n[[5.7426644e-03]\n [5.6986439e-05]\n [9.9991930e-01]\n [5.8649434e-03]\n [9.9627388e-01]\n [1.6427400e-02]\n [4.3829138e-04]\n [9.9988854e-01]\n [3.0845884e-02]\n [9.9963236e-01]\n [9.5056266e-01]\n [6.5943390e-01]\n [3.0106928e-03]\n [9.9834156e-01]\n [7.6121232e-04]\n [9.9973959e-01]\n [9.9301428e-01]\n [8.0127064e-03]\n [3.9466918e-02]\n [9.9979430e-01]], shape=(20, 1), dtype=float32)\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e728adf33c62449a4332eedc9343fce5a1019e76
71,617
ipynb
Jupyter Notebook
examples/similarity.ipynb
AlgoLab/mp3treesim
7d08ec2b8d4915989399a13cf44cd6f767cde8ac
[ "MIT" ]
4
2020-06-04T15:55:28.000Z
2021-03-10T22:07:58.000Z
examples/similarity.ipynb
AlgoLab/mp3treesim
7d08ec2b8d4915989399a13cf44cd6f767cde8ac
[ "MIT" ]
1
2020-06-09T09:53:49.000Z
2020-06-09T09:53:49.000Z
examples/similarity.ipynb
AlgoLab/mp3treesim
7d08ec2b8d4915989399a13cf44cd6f767cde8ac
[ "MIT" ]
1
2020-06-04T18:08:19.000Z
2020-06-04T18:08:19.000Z
227.355556
17,004
0.925814
[ [ [ "Import the library and create the trees", "_____no_output_____" ] ], [ [ "import mp3treesim as mp3\n\ngv = '''\nstrict digraph G {\n\tgraph [name=G];\n\t0\t [label=root];\n\t1\t [label=\"C,M\"];\n\t0 -> 1;\n\t2\t [label=\"I,Q\"];\n\t0 -> 2;\n\t3\t [label=A];\n\t0 -> 3;\n\t1 -> 4;\n\t2 -> 5;\n\t6\t [label=G];\n\t3 -> 6;\n\t7\t [label=\"E,R\"];\n\t3 -> 7;\n\t8\t [label=F];\n\t3 -> 8;\n\t9\t [label=\"L,N\"];\n\t4 -> 9;\n\t10\t [label=\"H,K\"];\n\t4 -> 10;\n\t11\t [label=K];\n\t5 -> 11;\n}\n\n'''\n\ntree1 = mp3.read_dotfile('trees/tree1.gv')\ntree2 = mp3.read_dotstring(gv)", "_____no_output_____" ], [ "mp3.draw_tree(tree1)", "_____no_output_____" ], [ "mp3.draw_tree(tree2)", "_____no_output_____" ] ], [ [ "Compute the similarity score", "_____no_output_____" ] ], [ [ "print(mp3.similarity(tree1, tree2))", "0.05009996058703979\n" ] ], [ [ "Let's consider the tree described in `gv` string as a partially-labeled tree", "_____no_output_____" ] ], [ [ "tree2_p = mp3.read_dotstring(gv, labeled_only=True)\nmp3.draw_tree(tree2_p)", "_____no_output_____" ] ], [ [ "Let's compare the trees", "_____no_output_____" ] ], [ [ "print('Tree 1 vs Tree 2', mp3.similarity(tree1, tree2))\nprint('Tree 1 vs Tree 2p', mp3.similarity(tree1, tree2_p))\nprint('Tree 2 vs Tree 2p', mp3.similarity(tree2, tree2_p))", "Tree 1 vs Tree 2 0.05009996058703979\nTree 1 vs Tree 2p 0.061922490968955\nTree 2 vs Tree 2p 1.0\n" ] ], [ [ "Let's exlude now some mutation from the computation, without having to modify the input. In particular let's remove mutations `A,G,F,H` from tree1", "_____no_output_____" ] ], [ [ "tree1_r = mp3.read_dotfile('trees/tree1.gv', exclude='A,G,F,H')\nmp3.draw_tree(tree1_r)", "_____no_output_____" ], [ "print('Tree 1r vs Tree 2', mp3.similarity(tree1_r, tree2))\nprint('Tree 1r vs Tree 2p', mp3.similarity(tree1_r, tree2_p))\nprint('Tree 1 vs Tree 1r', mp3.similarity(tree1, tree1_r))", "Tree 1r vs Tree 2 0.018792140154100388\nTree 1r vs Tree 2p 0.02475323155433422\nTree 1 vs Tree 1r 0.88\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e728aeeef36ea10e4afaf175a294a9c234c5a822
29,920
ipynb
Jupyter Notebook
assignments/assignment1/Linear classifier.ipynb
venera111/dlcourse
24a98bf274c4f310c1b847209bdb625df6145e5d
[ "MIT" ]
null
null
null
assignments/assignment1/Linear classifier.ipynb
venera111/dlcourse
24a98bf274c4f310c1b847209bdb625df6145e5d
[ "MIT" ]
null
null
null
assignments/assignment1/Linear classifier.ipynb
venera111/dlcourse
24a98bf274c4f310c1b847209bdb625df6145e5d
[ "MIT" ]
null
null
null
32.416035
334
0.524064
[ [ [ "# Задание 1.2 - Линейный классификатор (Linear classifier)\n\nВ этом задании мы реализуем другую модель машинного обучения - линейный классификатор. Линейный классификатор подбирает для каждого класса веса, на которые нужно умножить значение каждого признака и потом сложить вместе.\nТот класс, у которого эта сумма больше, и является предсказанием модели.\n\nВ этом задании вы:\n- потренируетесь считать градиенты различных многомерных функций\n- реализуете подсчет градиентов через линейную модель и функцию потерь softmax\n- реализуете процесс тренировки линейного классификатора\n- подберете параметры тренировки на практике\n\nНа всякий случай, еще раз ссылка на туториал по numpy: \nhttp://cs231n.github.io/python-numpy-tutorial/", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "from dataset import load_svhn, random_split_train_val\nfrom gradient_check import check_gradient\nfrom metrics import multiclass_accuracy\nimport linear_classifer", "_____no_output_____" ] ], [ [ "# Как всегда, первым делом загружаем данные\n\nМы будем использовать все тот же SVHN.", "_____no_output_____" ] ], [ [ "def prepare_for_linear_classifier(train_X, test_X):\n train_flat = train_X.reshape(train_X.shape[0], -1).astype(float) / 255.0 # pixel normalize to the range 0-1\n test_flat = test_X.reshape(test_X.shape[0], -1).astype(float) / 255.0\n mean_image = np.mean(train_flat, axis=0)\n \n # Subtract mean https://stats.stackexchange.com/questions/211436/why-normalize-images-by-subtracting-datasets-image-mean-instead-of-the-current\n # https://machinelearningmastery.com/how-to-manually-scale-image-pixel-data-for-deep-learning/\n train_flat -= mean_image # centering\n test_flat -= mean_image\n\n # Add another channel with ones as a bias term\n train_flat_with_ones = np.hstack([train_flat, np.ones((train_X.shape[0], 1))])\n test_flat_with_ones = np.hstack([test_flat, np.ones((test_X.shape[0], 1))]) \n return train_flat_with_ones, test_flat_with_ones\n\ntrain_X, train_y, test_X, test_y = load_svhn(\"data\", max_train=10000, max_test=1000)\nprint(\"type test_X:\", test_X.dtype, '\\n')\nprint(test_X)\ntrain_X, test_X = prepare_for_linear_classifier(train_X, test_X)\nprint(\"type test_X after astype(float):\", test_X.dtype, '\\n')\nprint(test_X)\n# Split train into train and val\ntrain_X, train_y, val_X, val_y = random_split_train_val(train_X, train_y, num_val = 1000)\n", "type test_X: float32 \n\n[[[[ 97. 103. 93.]\n [ 96. 102. 93.]\n [ 96. 101. 91.]\n ...\n [ 83. 91. 77.]\n [ 83. 91. 78.]\n [ 82. 90. 77.]]\n\n [[100. 106. 95.]\n [101. 106. 95.]\n [101. 104. 95.]\n ...\n [ 87. 93. 80.]\n [ 85. 91. 80.]\n [ 85. 91. 79.]]\n\n [[102. 107. 98.]\n [103. 107. 97.]\n [104. 106. 97.]\n ...\n [ 91. 96. 84.]\n [ 87. 92. 80.]\n [ 87. 92. 80.]]\n\n ...\n\n [[ 99. 110. 111.]\n [100. 111. 112.]\n [101. 111. 113.]\n ...\n [ 72. 76. 70.]\n [ 65. 68. 63.]\n [ 64. 66. 61.]]\n\n [[ 99. 110. 112.]\n [100. 111. 113.]\n [101. 111. 112.]\n ...\n [ 79. 85. 79.]\n [ 71. 76. 70.]\n [ 70. 74. 68.]]\n\n [[100. 111. 113.]\n [100. 111. 113.]\n [ 99. 111. 112.]\n ...\n [ 87. 94. 87.]\n [ 78. 85. 78.]\n [ 76. 82. 76.]]]\n\n\n [[[ 90. 84. 84.]\n [ 90. 85. 82.]\n [ 90. 85. 81.]\n ...\n [103. 96. 88.]\n [113. 106. 98.]\n [121. 114. 104.]]\n\n [[ 90. 85. 82.]\n [ 90. 85. 81.]\n [ 90. 85. 81.]\n ...\n [106. 99. 93.]\n [115. 108. 100.]\n [121. 114. 104.]]\n\n [[ 92. 84. 82.]\n [ 92. 84. 81.]\n [ 90. 85. 81.]\n ...\n [107. 102. 
96.]\n [116. 112. 103.]\n [122. 115. 105.]]\n\n ...\n\n [[ 83. 79. 76.]\n [ 83. 79. 76.]\n [ 83. 79. 76.]\n ...\n [130. 128. 115.]\n [130. 128. 115.]\n [130. 126. 115.]]\n\n [[ 83. 78. 75.]\n [ 83. 79. 76.]\n [ 83. 79. 76.]\n ...\n [132. 128. 117.]\n [132. 128. 117.]\n [133. 126. 116.]]\n\n [[ 84. 79. 76.]\n [ 83. 79. 76.]\n [ 83. 80. 75.]\n ...\n [131. 127. 116.]\n [132. 128. 117.]\n [133. 126. 116.]]]\n\n\n [[[145. 138. 128.]\n [140. 134. 123.]\n [129. 126. 115.]\n ...\n [137. 130. 112.]\n [137. 130. 113.]\n [137. 130. 114.]]\n\n [[146. 139. 128.]\n [142. 135. 124.]\n [133. 128. 117.]\n ...\n [137. 130. 112.]\n [138. 131. 113.]\n [138. 131. 114.]]\n\n [[147. 141. 129.]\n [144. 138. 126.]\n [138. 132. 120.]\n ...\n [138. 131. 112.]\n [139. 133. 114.]\n [139. 134. 115.]]\n\n ...\n\n [[140. 137. 121.]\n [142. 139. 125.]\n [147. 143. 130.]\n ...\n [154. 152. 133.]\n [156. 154. 136.]\n [156. 154. 137.]]\n\n [[133. 129. 113.]\n [134. 131. 116.]\n [138. 134. 121.]\n ...\n [149. 147. 131.]\n [151. 149. 134.]\n [151. 149. 135.]]\n\n [[128. 125. 110.]\n [130. 126. 112.]\n [133. 129. 117.]\n ...\n [146. 145. 130.]\n [147. 147. 133.]\n [148. 147. 134.]]]\n\n\n ...\n\n\n [[[102. 89. 73.]\n [ 99. 85. 71.]\n [ 96. 82. 70.]\n ...\n [116. 104. 91.]\n [116. 104. 90.]\n [116. 104. 90.]]\n\n [[103. 90. 73.]\n [100. 86. 72.]\n [ 97. 83. 71.]\n ...\n [122. 110. 98.]\n [123. 111. 98.]\n [123. 111. 97.]]\n\n [[105. 91. 77.]\n [103. 89. 76.]\n [100. 86. 75.]\n ...\n [132. 120. 108.]\n [134. 122. 109.]\n [135. 123. 109.]]\n\n ...\n\n [[103. 81. 71.]\n [102. 81. 72.]\n [ 99. 81. 73.]\n ...\n [133. 124. 111.]\n [133. 122. 109.]\n [133. 121. 107.]]\n\n [[ 99. 79. 68.]\n [ 97. 77. 67.]\n [ 95. 76. 68.]\n ...\n [111. 101. 91.]\n [109. 98. 87.]\n [108. 96. 85.]]\n\n [[ 94. 74. 63.]\n [ 90. 70. 61.]\n [ 88. 69. 61.]\n ...\n [ 94. 85. 77.]\n [ 93. 82. 73.]\n [ 92. 79. 71.]]]\n\n\n [[[193. 197. 208.]\n [193. 198. 210.]\n [192. 198. 212.]\n ...\n [174. 179. 183.]\n [172. 177. 181.]\n [172. 177. 180.]]\n\n [[192. 196. 207.]\n [188. 193. 206.]\n [184. 191. 206.]\n ...\n [163. 166. 171.]\n [158. 163. 168.]\n [156. 162. 167.]]\n\n [[186. 190. 202.]\n [178. 183. 199.]\n [173. 179. 196.]\n ...\n [148. 150. 155.]\n [142. 145. 151.]\n [139. 143. 150.]]\n\n ...\n\n [[176. 183. 186.]\n [177. 186. 189.]\n [178. 188. 191.]\n ...\n [191. 202. 198.]\n [189. 199. 196.]\n [188. 197. 196.]]\n\n [[186. 195. 194.]\n [185. 195. 195.]\n [183. 193. 194.]\n ...\n [191. 201. 198.]\n [190. 200. 198.]\n [190. 200. 199.]]\n\n [[190. 199. 195.]\n [188. 197. 195.]\n [184. 194. 193.]\n ...\n [192. 201. 200.]\n [191. 201. 200.]\n [191. 201. 200.]]]\n\n\n [[[215. 231. 231.]\n [212. 226. 227.]\n [210. 222. 225.]\n ...\n [122. 140. 146.]\n [157. 174. 177.]\n [186. 202. 202.]]\n\n [[210. 227. 229.]\n [210. 225. 228.]\n [208. 220. 225.]\n ...\n [140. 157. 165.]\n [171. 188. 194.]\n [193. 208. 211.]]\n\n [[206. 225. 228.]\n [202. 218. 222.]\n [198. 210. 215.]\n ...\n [147. 162. 171.]\n [167. 182. 189.]\n [181. 197. 200.]]\n\n ...\n\n [[169. 184. 182.]\n [179. 194. 192.]\n [188. 201. 200.]\n ...\n [162. 180. 184.]\n [151. 169. 173.]\n [131. 150. 155.]]\n\n [[153. 167. 167.]\n [159. 173. 174.]\n [166. 178. 180.]\n ...\n [150. 169. 173.]\n [150. 169. 173.]\n [140. 159. 163.]]\n\n [[145. 159. 160.]\n [139. 153. 154.]\n [139. 151. 154.]\n ...\n [150. 171. 171.]\n [154. 174. 175.]\n [147. 167. 168.]]]]\ntype test_X after astype(float): float64 \n\n[[-0.06622431 -0.03960588 -0.09044392 ... -0.12233843 -0.16171961\n 1. ]\n [-0.09367529 -0.11411569 -0.12573804 ... 
0.05021059 -0.00485686\n    1.        ]\n [ 0.12201098  0.09764902  0.04681098 ...  0.13256353  0.06573137\n    1.        ]\n ...\n [-0.04661647 -0.09450784 -0.16887529 ... -0.13410314 -0.18132745\n    1.        ]\n [ 0.31024627  0.32902157  0.36053647 ...  0.34432824  0.3245549\n    1.        ]\n [ 0.39652078  0.4623549   0.45073255 ...  0.2109949   0.19906471\n    1.        ]]\n" ], [ "print(\"min:\", test_X.min())\nprint(\"mean:\", test_X.mean())\nprint(\"max:\", test_X.max())", "min: -0.48431568627450733\nmean: 0.01875625032701183\nmax: 1.0\n" ] ], [ [ "# Playing with gradients!\n\nIn this course we will write many functions that compute gradients analytically.\n\nAll the functions in which we compute gradients will follow the same pattern. \nThey take as input the point at which the value and the gradient of the function must be computed, and they return a tuple of two values - the value of the function itself at that point (always a single number) and the analytic value of the gradient at the same point (of the same dimensionality as the input).\n```\ndef f(x):\n    \"\"\"\n    Computes function and analytic gradient at x\n    \n    x: np array of float, input to the function\n    \n    Returns:\n    value: float, value of the function \n    grad: np array of float, same shape as x\n    \"\"\"\n    ...\n    \n    return value, grad\n```\n\nAn essential tool while implementing code that computes gradients is a function that checks it. This function computes the gradient numerically and compares the result with the gradient computed analytically.\n\nWe start by implementing the computation of the numeric gradient in the `check_gradient` function in `gradient_check.py`. This function takes as input functions of the format given above, uses the `value` to compute the numeric gradient, and compares it with the analytic one - they should agree.\n\nWrite the part of the function that computes the gradient via the numerical derivative for each coordinate. 
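A minimal sketch of what such a loop could look like (our own illustration, not the reference solution; it applies the two-point formula described right below, with a hypothetical step `delta`):
```
def numeric_gradient(f, x, delta=1e-5):
    # f returns (value, analytic_grad); only the value is used here
    numeric_grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        orig = x[ix]
        x[ix] = orig + delta
        f_plus, _ = f(x)
        x[ix] = orig - delta
        f_minus, _ = f(x)
        x[ix] = orig  # restore the original value
        numeric_grad[ix] = (f_plus - f_minus) / (2 * delta)
        it.iternext()
    return numeric_grad
```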
To compute the derivative, use the so-called two-point formula (https://en.wikipedia.org/wiki/Numerical_differentiation):\n\n![image](https://wikimedia.org/api/rest_v1/media/math/render/svg/22fc2c0a66c63560a349604f8b6b39221566236d)\n\nAll the functions given in the next cell should pass the gradient check.", "_____no_output_____" ] ], [ [ "# TODO: Implement check_gradient function in gradient_check.py\n# All the functions below should pass the gradient check\n\ndef square(x):\n    return float(x*x), 2*x\n\ncheck_gradient(square, np.array([3.0]))\n\ndef array_sum(x):\n    assert x.shape == (2,), x.shape\n    return np.sum(x), np.ones_like(x)\n\ncheck_gradient(array_sum, np.array([3.0, 2.0]))\n\ndef array_2d_sum(x):\n    assert x.shape == (2,2)\n    return np.sum(x), np.ones_like(x)\n\ncheck_gradient(array_2d_sum, np.array([[3.0, 2.0], [1.0, 0.0]]))", "6.0 6.000000000039306\nGradient check passed!\n1.0 0.9999999999621422\n1.0 0.9999999999621422\nGradient check passed!\n1.0 0.9999999999621422\n1.0 0.9999999999621422\n1.0 0.9999999999621422\n1.0 0.9999999999621422\nGradient check passed!\n" ] ], [ [ "## Let's start writing our own functions that compute the analytic gradient\n\nNow we implement the softmax function, which takes as input the scores for each class and converts them into probabilities from 0 to 1:\n![image](https://wikimedia.org/api/rest_v1/media/math/render/svg/e348290cf48ddbb6e9a6ef4e39363568b67c09d3)\n\n**Important:** A practical aspect of computing this function is that it involves exponentiation of potentially very large numbers - this can produce very large values in the numerator and denominator, outside the range of float.\n\nFortunately, this problem has a simple solution -- before computing softmax, subtract the maximum score from all the scores:\n```\npredictions -= np.max(predictions)\n```\n(more details here - http://cs231n.github.io/linear-classify/#softmax, section `Practical issues: Numeric stability`)", "_____no_output_____" ] ], [ [ "# TODO Implement softmax and cross-entropy for single sample\nprobs = linear_classifer.softmax(np.array([-10, 0, 10]))\n\n# Make sure it works for big numbers too!\nprobs = linear_classifer.softmax(np.array([1000, 0, 0])) # [1. 0. 0.]\nassert np.isclose(probs[0], 1.0)", "_____no_output_____" ] ], [ [ "In addition, we implement the cross-entropy loss, which we will use as the error function.\nIn its general form, cross-entropy is defined as follows:\n![image](https://wikimedia.org/api/rest_v1/media/math/render/svg/0cb6da032ab424eefdca0884cd4113fe578f4293)\n\nwhere x ranges over all the classes, p(x) is the true probability that the sample belongs to class x, and q(x) is the probability of belonging to class x predicted by the model. \nIn our case the sample belongs to only one class, whose index is passed to the function. For that class p(x) equals 1, and for the other classes - 0. \n\nThis makes the function simpler to implement!", "_____no_output_____" ] ], [ [ "probs = linear_classifer.softmax(np.array([-5, 0, 5]))\nlinear_classifer.cross_entropy_loss(probs, 1)", "_____no_output_____" ] ], [ [ "Once we have implemented the functions themselves, we can implement the gradient.\n\nIt turns out that computing the gradient becomes much simpler if we combine these functions into one, which first computes the probabilities through softmax and then uses them to compute the error function through the cross-entropy loss.\n\nThe function `softmax_with_cross_entropy` will return both the value of the loss and the gradient with respect to the input parameters. 
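A well-known identity makes this gradient simple (our note, not part of the original assignment text): the gradient of cross-entropy over softmax with respect to the raw scores is just the softmax probabilities with 1 subtracted at the target index. A sketch for the single-sample case:
```
def softmax_with_cross_entropy_sketch(predictions, target_index):
    probs = linear_classifer.softmax(predictions)
    loss = linear_classifer.cross_entropy_loss(probs, target_index)
    d_predictions = probs.copy()
    d_predictions[target_index] -= 1  # dL/dscores = softmax(scores) - one_hot(target)
    return loss, d_predictions
```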
", "_____no_output_____" ] ], [ [ "# TODO Implement softmax and cross-entropy for single sample\nprobs = linear_classifer.softmax(np.array([-10, 0, 10]))\n\n# Make sure it works for big numbers too!\nprobs = linear_classifer.softmax(np.array([1000, 0, 0])) # [1. 0. 0.]\nassert np.isclose(probs[0], 1.0)", "_____no_output_____" ] ], [ [ "In addition, we will implement the cross-entropy loss, which we will use as the error function.\nIn its general form, cross-entropy is defined as follows:\n![image](https://wikimedia.org/api/rest_v1/media/math/render/svg/0cb6da032ab424eefdca0884cd4113fe578f4293)\n\nwhere x ranges over all classes, p(x) is the true probability that the sample belongs to class x, and q(x) is the probability of class x predicted by the model. \nIn our case a sample belongs to exactly one class, whose index is passed to the function. For that class p(x) equals 1, and for all the other classes it is 0. \n\nThis makes the function simpler to implement!", "_____no_output_____" ] ], [ [ "probs = linear_classifer.softmax(np.array([-5, 0, 5]))\nlinear_classifer.cross_entropy_loss(probs, 1)", "_____no_output_____" ] ], [ [ "Now that we have implemented the functions themselves, we can implement the gradient.\n\nIt turns out that computing the gradient becomes much simpler if we merge these functions into a single one that first computes the probabilities via softmax and then uses them to compute the error via the cross-entropy loss.\n\nThis `softmax_with_cross_entropy` function will return both the loss value and the gradient with respect to its inputs. We will verify the correctness of the implementation with `check_gradient`.\n\n
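The merged function is convenient because the combined gradient has a simple closed form: if `p` are the softmax probabilities and `t` is the target class index, the derivative of the loss with respect to score `i` is `p[i] - 1` when `i == t` and `p[i]` otherwise. A single-sample sketch of this idea, reusing the illustrative `softmax` helper from above (the batch version additionally averages over the batch):\n```\ndef softmax_with_cross_entropy(predictions, target_index):\n    # Forward pass: probabilities, then cross-entropy against the true class\n    probs = softmax(predictions)\n    loss = -np.log(probs[target_index])\n    # Backward pass: the gradient is probs minus the one-hot target vector\n    dprediction = probs.copy()\n    dprediction[target_index] -= 1\n    return loss, dprediction\n```\nAs with the other functions here, any implementation should be validated with `check_gradient`, which is exactly what the next cell does.\n\n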
", "_____no_output_____" ] ], [ [ "# TODO Implement a combined function for softmax and cross-entropy that produces the gradient\nloss, grad = linear_classifer.softmax_with_cross_entropy(np.array([1, 0, 0]), 1)\ncheck_gradient(lambda x: linear_classifer.softmax_with_cross_entropy(x, 1), np.array([1, 0, 0], float))", "0.5761168847658291 0.5761168847651099\n-0.7880584423829146 -0.7880584423691771\n0.21194155761708544 0.2119415576151695\nGradient check passed!\n" ] ], [ [ "As the training method we will use stochastic gradient descent (SGD), which works with batches of samples. \n\nTherefore all of our functions will receive not a single sample but a batch; that is, the input will not be a vector of `num_classes` scores but a matrix of shape `batch_size, num_classes`. The sample index within the batch is always the first dimension.\n\nThe next step is to rewrite our functions so that they support batches.\n\nThe final loss value must remain a single number, equal to the mean loss over all samples in the batch; accordingly, the gradient should also be averaged over the batch.", "_____no_output_____" ] ], [ [ "# TODO Extend combined function so it can receive a 2d array with batch of samples\nnp.random.seed(42)\n# Test batch_size = 1\nnum_classes = 4\nbatch_size = 1\npredictions = np.random.randint(-1, 3, size=(batch_size, num_classes)).astype(float)\ntarget_index = np.random.randint(0, num_classes, size=(batch_size, 1)).astype(int)\ncheck_gradient(lambda x: linear_classifer.softmax_with_cross_entropy(x, target_index), predictions)\n\n# Test batch_size = 3\nnum_classes = 4\nbatch_size = 3\npredictions = np.random.randint(-1, 3, size=(batch_size, num_classes)).astype(float)\ntarget_index = np.random.randint(0, num_classes, size=(batch_size, 1)).astype(int)\ncheck_gradient(lambda x: linear_classifer.softmax_with_cross_entropy(x, target_index), predictions)\n\n# Make sure maximum subtraction for numeric stability is done separately for every sample in the batch\nprobs = linear_classifer.softmax(np.array([[20,0,0], [1000, 0, 0]]))\nassert np.all(np.isclose(probs[:, 0], 1.0))", "0.20603190919001857 0.20603190920009948\n0.5600527948339517 0.560052794829069\n-0.9721166132139888 -0.9721166132292679\n0.20603190919001857 0.20603190920009948\nGradient check passed!\n0.2271508539361916 0.2271508539486433\n0.011309175094739847 0.011309175063090036\n0.011309175094739847 0.011309175063090036\n-0.24976920412567125 -0.24976920411923229\n0.03641059085767864 0.03641059085346399\n0.0989742474918849 0.0989742474866162\n0.0989742474918849 0.0989742474866162\n-0.23435908584144846 -0.23435908584890083\n0.05072100718053443 0.050721007172072057\n0.1378739921399875 0.13787399215647866\n-0.32646899146050945 -0.32646899146282493\n0.1378739921399875 0.13787399215647866\nGradient check passed!\n" ] ], [ [ "### Finally, let's implement the linear classifier itself!\n\nsoftmax and cross-entropy take as input the scores produced by the linear classifier.\n\nIt produces them very simply: for every class there is a set of weights by which the image pixels are multiplied and then summed up. The resulting number is the score of that class, which goes into softmax.\n\nThus the linear classifier can be represented as the multiplication of a pixel vector by a matrix W of shape `num_features, num_classes`. This approach extends naturally to a batch of pixel vectors X of shape `batch_size, num_features`:\n\n`predictions = X * W`, where `*` is matrix multiplication.\n\nImplement the `linear_softmax` function that computes the linear classifier and the gradients with respect to its weights in `linear_classifer.py`\n\n
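By the chain rule, the only new step compared to `softmax_with_cross_entropy` is propagating the gradient through the matrix multiplication: if `dpredictions` is the gradient of the loss with respect to `predictions = X * W`, then the gradient with respect to `W` is `X.T` times `dpredictions`. A sketch of that shape of solution, assuming the batch-aware `softmax_with_cross_entropy` described above:\n```\ndef linear_softmax(X, W, target_index):\n    # Scores for every sample in the batch, shape (batch_size, num_classes)\n    predictions = np.dot(X, W)\n    # Mean loss over the batch and its gradient w.r.t. the scores\n    loss, dpredictions = softmax_with_cross_entropy(predictions, target_index)\n    # Chain rule through predictions = X * W\n    dW = np.dot(X.T, dpredictions)\n    return loss, dW\n```\n\n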
", "_____no_output_____" ] ], [ [ "# TODO Implement linear_softmax function that uses softmax with cross-entropy for a linear classifier\nbatch_size = 2\nnum_classes = 2\nnum_features = 3\nnp.random.seed(42)\nW = np.random.randint(-1, 3, size=(num_features, num_classes)).astype(float)\nX = np.random.randint(-1, 3, size=(batch_size, num_features)).astype(float)\ntarget_index = np.ones(batch_size, dtype=int)\nloss, dW = linear_classifer.linear_softmax(X, W, target_index)\ncheck_gradient(lambda w: linear_classifer.linear_softmax(X, w, target_index), W)\nprint(\"loss:\", loss)", "_____no_output_____" ] ], [ [ "### And now, regularization\n\nWe will use L2 regularization on the weights as part of the overall loss function.\n\nAs a reminder, L2 regularization is defined as\n\nl2_reg_loss = regularization_strength * sum<sub>ij</sub> W[i, j]<sup>2</sup>\n\nImplement a function that computes it together with the corresponding gradients.\n\n
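The gradient follows directly from the formula above: the derivative of regularization_strength * W[i, j]<sup>2</sup> with respect to W[i, j] is 2 * regularization_strength * W[i, j]. A minimal sketch (the parameter names here are illustrative):\n```\ndef l2_regularization(W, reg_strength):\n    # L2 penalty on the weights and its gradient\n    loss = reg_strength * np.sum(W * W)\n    grad = 2 * reg_strength * W\n    return loss, grad\n```\n\n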
", "_____no_output_____" ] ], [ [ "# TODO Implement l2_regularization function that computes the loss for L2 regularization\nlinear_classifer.l2_regularization(W, 0.01)\ncheck_gradient(lambda w: linear_classifer.l2_regularization(w, 0.01), W)", "_____no_output_____" ] ], [ [ "# Training!", "_____no_output_____" ], [ "The gradients check out, so let's implement the training process!", "_____no_output_____" ] ], [ [ "# TODO: Implement LinearSoftmaxClassifier.fit function\nclassifier = linear_classifer.LinearSoftmaxClassifier()\nloss_history = classifier.fit(train_X, train_y, epochs=10, learning_rate=1e-3, batch_size=300, reg=1e1)", "_____no_output_____" ], [ "# let's look at the loss history!\nplt.plot(loss_history);", "_____no_output_____" ], [ "# Let's check how it performs on validation set\npred = classifier.predict(val_X)\naccuracy = multiclass_accuracy(pred, val_y)\nprint(\"Accuracy: \", accuracy)\n\n# Now, let's train more and see if it performs better\nclassifier.fit(train_X, train_y, epochs=100, learning_rate=1e-3, batch_size=300, reg=1e1)\npred = classifier.predict(val_X)\naccuracy = multiclass_accuracy(pred, val_y)\nprint(\"Accuracy after training for 100 epochs: \", accuracy)", "_____no_output_____" ] ], [ [ "### As before, we use cross-validation to tune the hyperparameters.\n\nThis time, so that training takes a reasonable amount of time, we will use only a single split into training and validation data.\n\nNow we need to tune not one but two hyperparameters! Do not limit yourself to the initial values in the code. \nAchieve an accuracy of more than **20%** on the validation data.", "_____no_output_____" ] ], [ [ "num_epochs = 200\nbatch_size = 300\n\nlearning_rates = [1e-3, 1e-4, 1e-5]\nreg_strengths = [1e-4, 1e-5, 1e-6]\n\nbest_classifier = None\nbest_val_accuracy = None\n\n# TODO use validation set to find the best hyperparameters\n# hint: for best results, you might need to try more values for learning rate and regularization strength \n# than provided initially\n\nfor lr in learning_rates:\n    for rs in reg_strengths:\n        # Train a fresh classifier for every combination; otherwise best_classifier\n        # would keep pointing at the same object mutated by later fit() calls\n        classifier = linear_classifer.LinearSoftmaxClassifier()\n        loss_history = classifier.fit(train_X, train_y, epochs=num_epochs, learning_rate=lr, batch_size=batch_size, reg=rs)\n        pred = classifier.predict(val_X)\n        accuracy = multiclass_accuracy(pred, val_y)\n        if best_val_accuracy is None or best_val_accuracy < accuracy:\n            best_classifier = classifier\n            best_val_accuracy = accuracy\n\nprint('best validation accuracy achieved: %f' % best_val_accuracy)", "_____no_output_____" ] ], [ [ "# So what accuracy did we achieve on the test data?", "_____no_output_____" ] ], [ [ "test_pred = best_classifier.predict(test_X)\ntest_accuracy = multiclass_accuracy(test_pred, test_y)\nprint('Linear softmax classifier test set accuracy: %f' % (test_accuracy, ))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e728b3bfad66297e485d44296d5dc482e56e68a9
20,439
ipynb
Jupyter Notebook
Pandas/Pandas - Delete Columns from DataFrame.ipynb
AmanKumar05032005/Python-1
3677438558ebf2af5cef2fc7fb20f2dbef2b481f
[ "MIT" ]
1
2020-08-18T09:14:22.000Z
2020-08-18T09:14:22.000Z
Pandas/Pandas - Delete Columns from DataFrame.ipynb
AmanKumar05032005/Python-1
3677438558ebf2af5cef2fc7fb20f2dbef2b481f
[ "MIT" ]
null
null
null
Pandas/Pandas - Delete Columns from DataFrame.ipynb
AmanKumar05032005/Python-1
3677438558ebf2af5cef2fc7fb20f2dbef2b481f
[ "MIT" ]
1
2020-10-06T17:16:32.000Z
2020-10-06T17:16:32.000Z
26.96438
78
0.299134
[ [ [ "# Pandas - Delete Columns from a DataFrame", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "df = pd.read_csv('iris.data', names=['A','B','C','D','Label'])\ndf", "_____no_output_____" ] ], [ [ "### 1) To drop a single column by name\ndf.drop('col_name', axis=1) \nNote that drop returns a new DataFrame, so to keep the change you must either assign the result back (df = df.drop(...)) or pass inplace=True.", "_____no_output_____" ] ], [ [ "df.drop('B', axis=1)", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "### 2) To drop multiple columns by name\ndf.drop(['col_name1', 'col_name2'], axis=1, inplace=True)", "_____no_output_____" ] ], [ [ "df.drop(['A','Label'], axis=1, inplace=True)\ndf", "_____no_output_____" ] ], [ [ "### 3) To drop multiple columns by numerical index\ndf.drop(df.columns[[idx1, idx2]], axis=1, inplace=True)", "_____no_output_____" ] ], [ [ "df.drop(df.columns[[0, 2]], axis=1, inplace=True)\ndf", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]