hexsha stringlengths 40-40 | size int64 6-14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6-260 | max_stars_repo_name stringlengths 6-119 | max_stars_repo_head_hexsha stringlengths 40-41 | max_stars_repo_licenses list | max_stars_count int64 1-191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24-24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24-24 ⌀ | max_issues_repo_path stringlengths 6-260 | max_issues_repo_name stringlengths 6-119 | max_issues_repo_head_hexsha stringlengths 40-41 | max_issues_repo_licenses list | max_issues_count int64 1-67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24-24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24-24 ⌀ | max_forks_repo_path stringlengths 6-260 | max_forks_repo_name stringlengths 6-119 | max_forks_repo_head_hexsha stringlengths 40-41 | max_forks_repo_licenses list | max_forks_count int64 1-105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24-24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24-24 ⌀ | avg_line_length float64 2-1.04M | max_line_length int64 2-11.2M | alphanum_fraction float64 0-1 | cells list | cell_types list | cell_type_groups list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ecbbb6ce94790b0247f7e16642a9ae5c7d5d1581 | 6,464 | ipynb | Jupyter Notebook | notebooks/3d_smFISH.ipynb | ttung/starfish | 1bd8abf55a335620e4b20abb041f478334714081 | [
"MIT"
] | 3 | 2020-09-01T12:18:20.000Z | 2021-05-18T03:50:31.000Z | notebooks/3d_smFISH.ipynb | ttung/starfish | 1bd8abf55a335620e4b20abb041f478334714081 | [
"MIT"
] | null | null | null | notebooks/3d_smFISH.ipynb | ttung/starfish | 1bd8abf55a335620e4b20abb041f478334714081 | [
"MIT"
] | null | null | null | 30.205607 | 284 | 0.583075 | [
[
[
"# Reproduce 3d smFISH results with Starfish\n\nThis notebook walks through a work flow that analyzes one field of view of a mouse gene panel from the Allen Institute for Cell Science, using the starfish package.",
"_____no_output_____"
],
[
"The 3d smFISH workflow run by the Allen runs a bandpass filter to remove high and low frequency signal and blurs over z with a 1-pixel gaussian to smooth the signal over the z-axis. low-intensity signal is (stringently) clipped from the images before and after these filters.\n\nSpots are then detected using a spot finder based on trackpy's locate method, which identifies local intensity maxima, and spots are matched to the gene they represent by looking them up in a codebook that records which (round, channel) matches which gene target.",
"_____no_output_____"
],
[
"## Load imports",
"_____no_output_____"
]
],
[
[
"%gui qt5\n\nimport os\nfrom typing import Optional, Tuple\n\n# import napari_gui\nimport numpy as np\n\nimport starfish\nfrom starfish import data, FieldOfView, IntensityTable\n",
"_____no_output_____"
]
],
[
[
"## Initialize Pipeline Components with pre-selected parameters",
"_____no_output_____"
]
],
[
[
"# bandpass filter to remove cellular background and camera noise\nbandpass = starfish.image.Filter.Bandpass(lshort=.5, llong=7, threshold=0.0)\n\n# gaussian blur to smooth z-axis\nglp = starfish.image.Filter.GaussianLowPass(\n sigma=(1, 0, 0),\n is_volume=True\n)\n\n# pre-filter clip to remove low-intensity background signal\nclip1 = starfish.image.Filter.Clip(p_min=50, p_max=100)\n\n# post-filter clip to eliminate all but the highest-intensity peaks\nclip2 = starfish.image.Filter.Clip(p_min=99, p_max=100, is_volume=True)\n\n# peak caller\ntlmpf = starfish.spots.SpotFinder.TrackpyLocalMaxPeakFinder(\n spot_diameter=5, # must be odd integer\n min_mass=0.02,\n max_size=2, # this is max radius\n separation=7,\n noise_size=0.65, # this is not used because preprocess is False\n preprocess=False,\n percentile=10, # this is irrelevant when min_mass, spot_diameter, and max_size are set properly\n verbose=True,\n is_volume=True,\n)",
"_____no_output_____"
]
],
[
[
"## Combine pipeline components into a pipeline",
"_____no_output_____"
],
[
"Define a function that identifies spots of a field of view.",
"_____no_output_____"
]
],
[
[
"def processing_pipeline(\n experiment: starfish.Experiment,\n fov_name: str,\n n_processes: Optional[int]=None\n) -> Tuple[starfish.ImageStack, starfish.IntensityTable]:\n \"\"\"Process a single field of view of an experiment\n\n Parameters\n ----------\n experiment : starfish.Experiment\n starfish experiment containing fields of view to analyze\n fov_name : str\n name of the field of view to process\n n_processes : int\n\n Returns\n -------\n starfish.IntensityTable :\n decoded IntensityTable containing spots matched to the genes they are hybridized against\n \"\"\"\n\n print(\"Loading images...\")\n primary_image = experiment[fov_name].get_image(FieldOfView.PRIMARY_IMAGES)\n all_intensities = list()\n codebook = experiment.codebook\n for primary_image in experiment[fov_name].iterate_image_type(FieldOfView.PRIMARY_IMAGES):\n print(\"Filtering images...\")\n filter_kwargs = dict(\n in_place=True,\n verbose=True,\n n_processes=n_processes\n )\n clip1.run(primary_image, **filter_kwargs)\n bandpass.run(primary_image, **filter_kwargs)\n glp.run(primary_image, **filter_kwargs)\n clip2.run(primary_image, **filter_kwargs)\n\n print(\"Calling spots...\")\n spot_attributes = tlmpf.run(primary_image)\n all_intensities.append(spot_attributes)\n\n spot_attributes = IntensityTable.concatanate_intensity_tables(all_intensities)\n print(\"Decoding spots...\")\n decoded = codebook.decode_per_round_max(spot_attributes)\n decoded = decoded[decoded[\"total_intensity\"]>.025]\n\n return primary_image, decoded",
"_____no_output_____"
]
],
[
[
"## Run the pipeline on a field of view",
"_____no_output_____"
]
],
[
[
"experiment = starfish.data.allen_smFISH(use_test_data=True)\n\nimage, intensities = processing_pipeline(experiment, fov_name='fov_001')",
"_____no_output_____"
]
],
[
[
"## Display the results",
"_____no_output_____"
]
],
[
[
"viewer = starfish.display(image, intensities)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbbbc8a933e45a302a0e9f4b42c01c602ffd8b6 | 331,196 | ipynb | Jupyter Notebook | new/ASartan_LS_DS_112_Loading_Data_Assignment.ipynb | sartansartan/DS-Unit-1-Sprint-1-Dealing-With-Data | e58074bce1b527cc6608b0979ff872410078a5b7 | [
"MIT"
] | null | null | null | new/ASartan_LS_DS_112_Loading_Data_Assignment.ipynb | sartansartan/DS-Unit-1-Sprint-1-Dealing-With-Data | e58074bce1b527cc6608b0979ff872410078a5b7 | [
"MIT"
] | null | null | null | new/ASartan_LS_DS_112_Loading_Data_Assignment.ipynb | sartansartan/DS-Unit-1-Sprint-1-Dealing-With-Data | e58074bce1b527cc6608b0979ff872410078a5b7 | [
"MIT"
] | null | null | null | 91.718637 | 172,056 | 0.687535 | [
[
[
"<a href=\"https://colab.research.google.com/github/sartansartan/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/new/ASartan_LS_DS_112_Loading_Data_Assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Practice Loading Datasets\n\nThis assignment is purposely semi-open-ended you will be asked to load datasets both from github and also from CSV files from the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). \n\nRemember that the UCI datasets may not have a file type of `.csv` so it's important that you learn as much as you can about the dataset before you try and load it. See if you can look at the raw text of the file either locally, on github, using the `!curl` shell command, or in some other way before you try and read it in as a dataframe, this will help you catch what would otherwise be unforseen problems.\n",
"_____no_output_____"
],
[
"## 1) Load a dataset from Github (via its *RAW* URL)\n\nPick a dataset from the following repository and load it into Google Colab. Make sure that the headers are what you would expect and check to see if missing values have been encoded as NaN values:\n\n<https://github.com/ryanleeallred/datasets>",
"_____no_output_____"
]
],
[
[
"#Loading Churn Dataset via its RAW Url\n\nimport pandas as pd\n\nchurn_data_url = 'https://raw.githubusercontent.com/ryanleeallred/datasets/master/churn.csv'\n\n# loading dataset to Google Colab \n\nchurn_data = pd.read_csv(churn_data_url)\n\n#checking data with special attention to headers and missing values\nchurn_data.head()",
"_____no_output_____"
],
[
"#checking data with special attention to headers and missing values in a different way to make sure I didn't miss anything\n\nchurn_data.sample(10)",
"_____no_output_____"
],
[
"#checking data if there any conventional NaN values\n\nchurn_data.isna().sum()",
"_____no_output_____"
],
[
"#Filling missing values\n\nimport numpy as np\n\nchurn_data = churn_data.replace('?', np.NaN)\n\n#Checking if missing values are filled with conventional NaN values\n\nchurn_data.head(20)",
"_____no_output_____"
]
],
[
[
"## 2) Load a dataset from your local machine\nDownload a dataset from the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php) and then upload the file to Google Colab either using the files tab in the left-hand sidebar or by importing `files` from `google.colab` The following link will be a useful resource if you can't remember the syntax: <https://towardsdatascience.com/3-ways-to-load-csv-files-into-colab-7c14fcbdcb92>\n\nWhile you are free to try and load any dataset from the UCI repository, I strongly suggest starting with one of the most popular datasets like those that are featured on the right-hand side of the home page. \n\nSome datasets on UCI will have challenges associated with importing them far beyond what we have exposed you to in class today, so if you run into a dataset that you don't know how to deal with, struggle with it for a little bit, but ultimately feel free to simply choose a different one. \n\n- Make sure that your file has correct headers, and the same number of rows and columns as is specified on the UCI page. If your dataset doesn't have headers use the parameters of the `read_csv` function to add them. Likewise make sure that missing values are encoded as `NaN`.",
"_____no_output_____"
]
],
[
[
"#Uploading Dresses Attributes Dataset from my computer\n\nfrom google.colab import files\nuploaded = files.upload()",
"_____no_output_____"
],
[
"#simplyfying column headers just for practice\n\ncolumn_headers = ['Dress_ID', 'Style', 'Price', 'Rating', 'Size', 'Season', 'Neck', 'Sleeve', 'Waise', 'Material', 'Fabric', 'Decor', 'Psttern', 'Recom']",
"_____no_output_____"
],
[
"#Using read_excel instead of read_csv because dataset is in .xslx format\n\ndf = pd.read_excel('Attribute DataSet.xlsx', names = column_headers)\ndf.head()",
"_____no_output_____"
],
[
"#Checking if the dataset has the same number of rows and columns as is specified on the UCI page\n\ndf.shape",
"_____no_output_____"
],
[
"#Diagnoding missing values\n\ndf.sample(10)",
"_____no_output_____"
],
[
"#Making sure that missing values are encoded as NaN\n\ndf.isna().sum()",
"_____no_output_____"
]
],
[
[
"## 3) Load a dataset from UCI using `!wget`\n\n\"Shell Out\" and try loading a file directly into your google colab's memory using the `!wget` command and then read it in with `read_csv`.\n\nWith this file we'll do a bit more to it.\n\n- Read it in, fix any problems with the header as make sure missing values are encoded as `NaN`.\n- Use the `.fillna()` method to fill any missing values. \n - [.fillna() documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html)\n- Create one of each of the following plots using the Pandas plotting functionality:\n - Scatterplot\n - Histogram\n - Density Plot\n",
"_____no_output_____"
]
],
[
[
"#Loading Heart Dataset from UCI using !wget\n\n!wget https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.switzerland.data",
"--2019-09-06 19:19:23-- https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.switzerland.data\nResolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.252\nConnecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.252|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 4109 (4.0K) [application/x-httpd-php]\nSaving to: ‘processed.switzerland.data.3’\n\n\r processed 0%[ ] 0 --.-KB/s \rprocessed.switzerla 100%[===================>] 4.01K --.-KB/s in 0s \n\n2019-09-06 19:19:24 (93.4 MB/s) - ‘processed.switzerland.data.3’ saved [4109/4109]\n\n"
],
[
"!curl https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.switzerland.data -o heart.2",
" % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 4109 100 4109 0 0 13428 0 --:--:-- --:--:-- --:--:-- 13428\n"
],
[
"#Checking the dataset\n\ndf = pd.read_csv('heart.2')\ndf.head()",
"_____no_output_____"
],
[
"#Fixing column heads and missing values\n\ncolumn_headers = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal', 'num']",
"_____no_output_____"
],
[
"df = pd.read_csv('heart.2', names = column_headers, na_values='?')\ndf.head()",
"_____no_output_____"
],
[
"#Making sure heads and missing values are fixed. \n\ndf.sample(10)",
"_____no_output_____"
],
[
"#Discoverung NaN values.\n\ndf.isna().sum()",
"_____no_output_____"
],
[
"#Dropping columns with high amount of missing values\n\ndf = df.drop(['fbs', 'ca'], axis=1)\ndf.head()",
"_____no_output_____"
],
[
"#Filling missing values with mode \n\ndf['restecg'] = df['restecg'].fillna(df['restecg'].mode()[0])\ndf['exang'] = df['exang'].fillna(df['exang'].mode()[0])\ndf['slope'] = df['slope'].fillna(df['slope'].mode()[0])\ndf['thal'] = df['thal'].fillna(df['thal'].mode()[0])\n\n\ndf.sample(20)",
"_____no_output_____"
],
[
"#Dropping rows with missing values for columns with a small amount of missing values\n\ndf = df.dropna(subset=['trestbps', 'thalach', 'oldpeak'])\n",
"_____no_output_____"
],
[
"#Checking if we don't have missing values anymore\ndf.isna().sum()",
"_____no_output_____"
],
[
"#Creating a scaterplot\nimport matplotlib.pyplot as plt\n\ndf.plot.scatter('age', 'thalach')",
"_____no_output_____"
],
[
"#Creating a histogram\n\ndf.hist('trestbps')",
"_____no_output_____"
],
[
"#Creating a density plot with seaborn colors\n\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn')\n\ndf.plot(figsize=(15, 10))",
"_____no_output_____"
]
],
[
[
"## Stretch Goals - Other types and sources of data\n\nNot all data comes in a nice single file - for example, image classification involves handling lots of image files. You still will probably want labels for them, so you may have tabular data in addition to the image blobs - and the images may be reduced in resolution and even fit in a regular csv as a bunch of numbers.\n\nIf you're interested in natural language processing and analyzing text, that is another example where, while it can be put in a csv, you may end up loading much larger raw data and generating features that can then be thought of in a more standard tabular fashion.\n\nOverall you will in the course of learning data science deal with loading data in a variety of ways. Another common way to get data is from a database - most modern applications are backed by one or more databases, which you can query to get data to analyze. We'll cover this more in our data engineering unit.\n\nHow does data get in the database? Most applications generate logs - text files with lots and lots of records of each use of the application. Databases are often populated based on these files, but in some situations you may directly analyze log files. The usual way to do this is with command line (Unix) tools - command lines are intimidating, so don't expect to learn them all at once, but depending on your interests it can be useful to practice.\n\nOne last major source of data is APIs: https://github.com/toddmotto/public-apis\n\nAPI stands for Application Programming Interface, and while originally meant e.g. the way an application interfaced with the GUI or other aspects of an operating system, now it largely refers to online services that let you query and retrieve data. You can essentially think of most of them as \"somebody else's database\" - you have (usually limited) access.\n\n*Stretch goal* - research one of the above extended forms of data/data loading. See if you can get a basic example working in a notebook. Image, text, or (public) APIs are probably more tractable - databases are interesting, but there aren't many publicly accessible and they require a great deal of setup.",
"_____no_output_____"
]
],
[
[
"import requests",
"_____no_output_____"
],
[
"response = requests.get('https://date.nager.at/api/v2/PublicHolidays/2017/AT')\n\nresponse.status_code",
"_____no_output_____"
],
[
"response.json()",
"_____no_output_____"
],
[
"json_data = response.json()\n\nlen(json_data)\n\nresponse = json_data\nresponse[:3]",
"_____no_output_____"
],
[
"for i in response[:3]:\n print(f\"{i['launchYear']}: {i['localName']}, {i['name']}, {i['type']}\")",
"1967: Neujahr, New Year's Day, Public\nNone: Heilige Drei Könige, Epiphany, Public\n1642: Ostermontag, Easter Monday, Public\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecbbce54c1b823b381041558b1bcf21d1c431533 | 41,816 | ipynb | Jupyter Notebook | reconstruct_global.ipynb | gree7/privacy-analysis | 5353bc35ea2925934d5942fc932c173e3e82870d | [
"MIT"
] | null | null | null | reconstruct_global.ipynb | gree7/privacy-analysis | 5353bc35ea2925934d5942fc932c173e3e82870d | [
"MIT"
] | null | null | null | reconstruct_global.ipynb | gree7/privacy-analysis | 5353bc35ea2925934d5942fc932c173e3e82870d | [
"MIT"
] | 1 | 2020-09-28T11:34:48.000Z | 2020-09-28T11:34:48.000Z | 51.880893 | 307 | 0.486249 | [
[
[
"import sys\nimport getopt\nimport os\n#from sets import Set\nimport json\nimport pprint\nfrom graphviz import Digraph\nfrom math import log\nimport src.searchtree as st\nimport src.utils as utils",
"_____no_output_____"
],
[
"#agent is the agent we are trying to compromise. We are the adversary. All other agents are the adversary.\nagent=0\ndomain=\"uav-factored\" #\"logistics-small-factored\"#\"logistics-small-factored\" #\"uav-factored\"#\nproblem=\"example\" #\"\"test2\"#\"probLOGISTICS-4-0\"#\"example\" #\"test\"#\"probLOGISTICS-4-0\"\n\nst.configuration[\"privateActions\"]=False \nst.configuration[\"useFullStates\"]=1\nst.configuration[\"nTo1Mapping\"]=1\nst.configuration[\"SecureMAFS\"]=False\nst.configuration[\"debug\"]=True\nst.configuration[\"console\"]=False\n\nif st.configuration[\"console\"]:\n #read options\n params = [\"domain=\",\"problem=\",\"agent=\",\"use-full-states=\",\"n-to-1-mapping=\",\"secure-mafs\",\"debug\"]\n #print(str(sys.argv))\n try:\n opts, args = getopt.getopt(sys.argv[1:],'',params)\n #print(\"opts:\"+str(opts))\n #print(\"args:\"+str(args))\n except getopt.GetoptError:\n print ('bad leakage.py params: ' + str(params))\n sys.exit(2)\n\n for opt, arg in opts:\n print(\"opt=\"+str(opt)+\",arg=\"+str(arg))\n if opt == \"--domain\":\n domain = arg\n elif opt == \"--problem\":\n problem = arg\n elif opt == \"--agent\":\n agent = int(arg)\n elif opt == \"--use-full-states\":\n st.configuration[\"useFullStates\"] = int(arg)\n elif opt == \"--n-to-1-mapping\":\n st.configuration[\"nTo1Mapping\"] = int(arg)\n elif opt == \"--secure-mafs\":\n st.configuration[\"SecureMAFS\"]=True\n elif opt == \"--debug\":\n st.configuration[\"debug\"]=True\n\nroot=\"traces/\"+domain+\"/\"+problem+\"/\"+str(st.configuration[\"nTo1Mapping\"])\nagentFile=root+\"/agent\"+str(agent)+\".json\"\n#adversaryFile=root+\"/agent\"+str(adversary)+\".json\"\n\noutputFile=root+\"/global-searchtree\"\n",
"_____no_output_____"
],
[
"#load data\nvarMap = {}\nstateMap = {}\n#states = {}\nopMap = {}\noperators = set()\n\nadvers = set()\nstates = []\n\nplan = []\n\nagents = 0\n\nfor fileName in os.listdir(root):\n agentID = -1\n if fileName.find(\"agent\")!= -1 and fileName.find(\".json\")!= -1:\n #print(\"next: \"+fileName[fileName.find(\"agent\")+5:fileName.find(\".json\")])\n agentID=int(fileName[fileName.find(\"agent\")+5:fileName.find(\".json\")])\n #print(agentID)\n if agentID != -1:\n print(\"processing \" + fileName)\n agents += 1\n\n f = open(root+\"/\"+fileName)\n data = json.load(f)\n f.close()\n\n #load variables\n for v in data[\"variables\"]:\n #print(v)\n var = st.Variable(v)\n varMap[var.hash] = var\n \n if st.configuration[\"debug\"]:\n print(\"variables:\")\n pprint.pprint(varMap)\n \n #load states\n order = 0\n secureMAFSStates = set()\n for s in data[\"states\"]:\n state = st.State(s,varMap,order)\n \n stateMap[state.hash] = state\n #states[agentID].append(state)\n order += 1\n states.append(state)\n \n #states = [s for s in stateMap.values()]\n\n #load operators (and convert to label non-preserving projection)\n allOps = data[\"operators\"]\n \n for op in allOps:\n #print(op)\n operator = st.Operator(op)\n if operator.hash in opMap:\n opMap[operator.hash].process(operator)\n else:\n opMap[operator.hash] = operator\n\n operators = operators | set(opMap.values())\n \n plan = data[\"plan\"]\n \nreceived = list(filter(lambda x: x.isReceived(), states))\nsent = list(filter(lambda x: x.isSent() or x.isInit(), states))\n\n \nprint(\"done!\") \nprint(\"variables:\" + str(len(varMap)))\nprint(\"operators:\" + str(len(operators)))\nprint(\"states:\" + str(len(states)))\nprint(\"received:\" + str(len(received)))\nprint(\"sent:\" + str(len(sent)))\n\nif st.configuration[\"debug\"]:\n print(\"varMap:\")\n pprint.pprint(varMap)\n \n if len(states) < 25:\n print(\"stateMap:\")\n pprint.pprint(stateMap)\n print(\"states:\")\n pprint.pprint(states)\n \n if len(received) < 25:\n print(\"received:\")\n pprint.pprint(received)\n \n if len(sent) < 25:\n print(\"sent:\")\n pprint.pprint(sent)\n \n print(\"opMap:\")\n pprint.pprint(opMap)\n print(\"operators:\")\n pprint.pprint(operators)\n print(\"plan:\")\n pprint.pprint(plan)\n \n\n",
"processing agent0.json\nvariables:\n{'0:2': 0:2:True:{'1': '(N)base-has-supplies()', '0': '(P)base-has-supplies()'},\n 'P:0': P:0:False:{'1': '(N)mission-complete()', '0': 'mission-complete()'},\n 'P:1': P:1:False:{'1': '(N)uav-has-fuel()', '0': 'uav-has-fuel()'}}\nprocessing agent1.json\nvariables:\n{'0:2': 0:2:True:{'1': '(N)base-has-supplies()', '0': '(P)base-has-supplies()'},\n '1:2': 1:2:True:{'1': '(N)location-complete(l1)', '0': '(P)location-complete(l1)'},\n '1:3': 1:3:True:{'1': '(N)location-complete(l2)', '0': '(P)location-complete(l2)'},\n 'P:0': P:0:False:{'1': '(N)mission-complete()', '0': 'mission-complete()'},\n 'P:1': P:1:False:{'1': '(N)uav-has-fuel()', '0': 'uav-has-fuel()'}}\ndone!\nvariables:5\noperators:8\nstates:16\nreceived:6\nsent:9\nvarMap:\n{'0:2': 0:2:True:{'1': '(N)base-has-supplies()', '0': '(P)base-has-supplies()'},\n '1:2': 1:2:True:{'1': '(N)location-complete(l1)', '0': '(P)location-complete(l1)'},\n '1:3': 1:3:True:{'1': '(N)location-complete(l2)', '0': '(P)location-complete(l2)'},\n 'P:0': P:0:False:{'1': '(N)mission-complete()', '0': 'mission-complete()'},\n 'P:1': P:1:False:{'1': '(N)uav-has-fuel()', '0': 'uav-has-fuel()'}}\nstateMap:\n{'0:0': 0:0:[1, 1],[1],[0, 0],\n '0:1': 0:1:[1, 0],[0],[1, 0],\n '0:2': 0:2:[1, 1],[0],[1, 1],\n '0:3': 0:3:[1, 0],[1],[0, 1],\n '0:4': 0:4:[1, 1],[0],[1, 2],\n '0:5': 0:5:[1, 0],[1],[0, 2],\n '0:6': 0:6:[1, 1],[1],[0, 3],\n '0:7': 0:7:[1, 0],[0],[1, 3],\n '1:0': 1:0:[1, 1],[1, 1],[0, 0],\n '1:1': 1:1:[1, 0],[1, 1],[1, 0],\n '1:2': 1:2:[1, 1],[0, 1],[1, 1],\n '1:3': 1:3:[1, 1],[1, 0],[1, 2],\n '1:4': 1:4:[1, 0],[0, 1],[0, 1],\n '1:5': 1:5:[1, 0],[1, 0],[0, 2],\n '1:6': 1:6:[1, 1],[0, 0],[0, 3],\n '1:7': 1:7:[0, 1],[0, 0],[0, 3]}\nstates:\n[0:0:[1, 1],[1],[0, 0],\n 0:1:[1, 0],[0],[1, 0],\n 0:2:[1, 1],[0],[1, 1],\n 0:4:[1, 1],[0],[1, 2],\n 0:3:[1, 0],[1],[0, 1],\n 0:5:[1, 0],[1],[0, 2],\n 0:6:[1, 1],[1],[0, 3],\n 0:7:[1, 0],[0],[1, 3],\n 1:0:[1, 1],[1, 1],[0, 0],\n 1:1:[1, 0],[1, 1],[1, 0],\n 1:2:[1, 1],[0, 1],[1, 1],\n 1:3:[1, 1],[1, 0],[1, 2],\n 1:4:[1, 0],[0, 1],[0, 1],\n 1:5:[1, 0],[1, 0],[0, 2],\n 1:6:[1, 1],[0, 0],[0, 3],\n 1:7:[0, 1],[0, 0],[0, 3]]\nreceived:\n[0:2:[1, 1],[0],[1, 1],\n 0:4:[1, 1],[0],[1, 2],\n 0:6:[1, 1],[1],[0, 3],\n 1:1:[1, 0],[1, 1],[1, 0],\n 1:4:[1, 0],[0, 1],[0, 1],\n 1:5:[1, 0],[1, 0],[0, 2]]\nsent:\n[0:0:[1, 1],[1],[0, 0],\n 0:1:[1, 0],[0],[1, 0],\n 0:3:[1, 0],[1],[0, 1],\n 0:5:[1, 0],[1],[0, 2],\n 0:7:[1, 0],[0],[1, 3],\n 1:0:[1, 1],[1, 1],[0, 0],\n 1:2:[1, 1],[0, 1],[1, 1],\n 1:3:[1, 1],[1, 0],[1, 2],\n 1:6:[1, 1],[0, 0],[0, 3]]\nopMap:\n{'{0: 1, 2: 0, 3: 0}->{0: 0}': complete-mission-{0: 1, 2: 0, 3: 0}->{0: 0},\n '{0: 1}->{0: 0}': complete-mission-{0: 1}->{0: 0},\n '{1: 0, 2: 1}->{1: 1, 2: 0}': survey-location-{1: 0, 2: 1}->{1: 1, 2: 0},\n '{1: 0, 3: 1}->{1: 1, 3: 0}': survey-location-{1: 0, 3: 1}->{1: 1, 3: 0},\n '{1: 0}->{1: 1}': survey-location-{1: 0}->{1: 1},\n '{1: 1, 2: 0}->{1: 0, 2: 1}': refuel-{1: 1, 2: 0}->{1: 0, 2: 1},\n '{1: 1, 2: 1}->{1: 0, 2: 0}': refuel-and-resuply-{1: 1, 2: 1}->{1: 0, 2: 0},\n '{1: 1}->{1: 0}': refuel-{1: 1}->{1: 0}}\noperators:\n{complete-mission-{0: 1, 2: 0, 3: 0}->{0: 0},\n survey-location-{1: 0, 3: 1}->{1: 1, 3: 0},\n complete-mission-{0: 1}->{0: 0},\n survey-location-{1: 0, 2: 1}->{1: 1, 2: 0},\n survey-location-{1: 0}->{1: 1},\n refuel-and-resuply-{1: 1, 2: 1}->{1: 0, 2: 0},\n refuel-{1: 1, 2: 0}->{1: 0, 2: 1},\n refuel-{1: 1}->{1: 0}}\nplan:\n['refuel-and-resuply ',\n 'survey-location l1',\n 'refuel ',\n 'survey-location l2',\n 'complete-mission ']\n"
],
[
"#show search tree in graphviz\nif st.configuration[\"debug\"]:\n from graphviz import Digraph\n\n\n def getNodeID(state):\n return str(state.agentID)+str(state.stateID)\n\n dot = Digraph(comment=root,engine=\"dot\")\n\n with dot.subgraph(name='states') as dotS:\n #dotS.attr(rankdir='LR')\n #dotS.attr(rank='same')\n dotS.attr(ordering='out')\n\n x = 10\n y = 10\n for state in states:\n label = state.printStateDotLabel()\n\n position = str(x)+\",\"+str(y)+\"!\"\n\n #add state, special case for initial state\n id = getNodeID(state)\n if state.heuristic == -1:\n dotS.node(id, label, shape='invhouse',pos=position)\n\n elif state.heuristic == 0:\n dotS.node(id, label, shape='house', style='bold',pos=position)\n else:\n if state.isReceived():\n dotS.node(id, label, shape='box',color='red',pos=position)\n else:\n dotS.node(id, label, shape='box',pos=position)\n\n y += 1\n x = 8+(x+1)%4\n\n prev = -1\n done = set()\n for state in states:\n if state.isReceived():\n sentStateID = state.privateIDs[state.senderID]\n sentStateHash = str(state.senderID)+\":\"+str(sentStateID)\n label = str(state.senderID)\n sentState = stateMap[sentStateHash]\n dotS.edge(str(sentState.agentID)+str(sentState.stateID), getNodeID(state),color='red',constraint='false',label=label)\n \n iparentID = state.privateIDs[state.agentID]\n iparentHash = str(state.agentID)+\":\"+str(iparentID)\n label = \"\"#',\\n'.join(str(s) for s in ip[\"applicable\"])\n iparent = stateMap[iparentHash]\n dotS.edge(str(iparent.agentID)+str(iparent.stateID), getNodeID(state),style='dashed',label=label,constraint='false')\n\n \n else:\n if state.parentID != -1:\n dotS.edge(str(state.agentID)+str(state.parentID),getNodeID(state),style='bold',color='grey',constraint='false')\n \n #invisible link from previous state\n if prev != -1 and prev.agentID == state.agentID:\n dotS.edge(getNodeID(prev), getNodeID(state),style='invis')\n\n prev=state\n\n\n #print(done)\n\n\n\n\n #print variables\n for var in varMap:\n print(str(var)+\" (private=\" + str(varMap[var].isPrivate) + \"):\")\n for val in varMap[var].vals:\n print( \" \" + val + \" = \"+ varMap[var].vals[val])\n\n #print(dot.source)\n\n dot.render(outputFile)\n \ndot\n\n",
"0:2 (private=True):\n 1 = (N)base-has-supplies()\n 0 = (P)base-has-supplies()\n1:3 (private=True):\n 1 = (N)location-complete(l2)\n 0 = (P)location-complete(l2)\nP:1 (private=False):\n 1 = (N)uav-has-fuel()\n 0 = uav-has-fuel()\nP:0 (private=False):\n 1 = (N)mission-complete()\n 0 = mission-complete()\n1:2 (private=True):\n 1 = (N)location-complete(l1)\n 0 = (P)location-complete(l1)\n"
],
[
"#check global correctness\n\nsentStateIDCorrect = True\nsentStateAgentIDCorrect = True\nsentStatePublicValuesCorrect = True\n\niparentOtherIDsMatch = True\niparentIDCorrect = True\niparentPrivateValuesCorrect = True\n\nfor state in states:\n if state.isReceived():\n sentStateID = state.privateIDs[state.senderID]\n sentStateHash = str(state.senderID)+\":\"+str(sentStateID)\n sentState = stateMap[sentStateHash]\n \n if state.privateIDs[state.senderID] != sentState.stateID:\n sentStateIDCorrect = False\n \n if state.privateIDs[state.agentID] != sentState.privateIDs[state.agentID]:\n sentStateAgentIDCorrect = False\n \n if state.publicValues != sentState.publicValues:\n sentStatePublicValuesCorrect = False\n \n \n iparentID = state.privateIDs[state.agentID]\n iparentHash = str(state.agentID)+\":\"+str(iparentID)\n iparent = stateMap[iparentHash]\n \n #this does not work as the values might have been changed by multiple agents before receiving the state\n# for a in range(0,agents):\n# if a != state.agentID and a != sentState.agentID and state.privateIDs[a] != iparent.privateIDs[a]:\n# iparentOtherIDsMatch = False\n# print(\"iparentOtherIDsMatch, agent=\" + str(a))\n# print(state)\n# print(iparent)\n \n if state.privateIDs[state.agentID] != iparent.stateID:\n iparentIDCorrect = False\n \n if state.privateValues != iparent.privateValues:\n iparentPrivateValuesCorrect = False\n print(\"iparentPrivateValuesCorrect:\" + str(state.privateValues) + \" != \" + str(iparent.privateValues))\n print(state)\n print(iparent)\n \noutputCSVFile = \"./global_view_test.csv\"\n \nimport csv \nfrom pathlib import Path\n\ncolumns=[\n 'domain',\n 'problem',\n 'agents',\n 'privateActions',\n 'useFullStates',\n 'nTo1Mapping',\n 'SecureMAFS',\n 'sentStateIDCorrect',\n 'sentStateAgentIDCorrect',\n 'sentStatePublicValuesCorrect',\n 'iparentOtherIDsMatch',\n 'iparentIDCorrect',\n 'iparentPrivateValuesCorrect'\n]\n\noutCSV = Path(outputCSVFile)\nexists = outCSV.is_file()\n\nrow = [\n domain,\n problem,\n agents,\n st.configuration[\"privateActions\"],\n st.configuration[\"useFullStates\"],\n st.configuration[\"nTo1Mapping\"],\n st.configuration[\"SecureMAFS\"],\n sentStateIDCorrect,\n sentStateAgentIDCorrect,\n sentStatePublicValuesCorrect,\n iparentOtherIDsMatch,\n iparentIDCorrect,\n iparentPrivateValuesCorrect\n]\n\nwith open(outputCSVFile, 'a') as f:\n writer = csv.writer(f)\n if not exists:\n writer.writerow(columns)\n writer.writerow(row)\n ",
"iparentPrivateValuesCorrect:[0, 1] != [1, 1]\n1:4:[1, 0],[0, 1],[0, 1]\n1:1:[1, 0],[1, 1],[1, 0]\niparentPrivateValuesCorrect:[1, 0] != [0, 1]\n1:5:[1, 0],[1, 0],[0, 2]\n1:2:[1, 1],[0, 1],[1, 1]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecbbd108c33c93eabb282b73f7664c8f4cbc646f | 7,410 | ipynb | Jupyter Notebook | session4/Session4_Assignment_empty.ipynb | remayer/WS19_PsyMSc4_Python_for_Psychologists | 351bfe55371d18572f9268da49b98ed0d59b71d5 | [
"BSD-3-Clause"
] | 15 | 2019-10-28T16:00:09.000Z | 2021-11-17T18:05:33.000Z | session4/Session4_Assignment_empty.ipynb | remayer/WS19_PsyMSc4_Python_for_Psychologists | 351bfe55371d18572f9268da49b98ed0d59b71d5 | [
"BSD-3-Clause"
] | null | null | null | session4/Session4_Assignment_empty.ipynb | remayer/WS19_PsyMSc4_Python_for_Psychologists | 351bfe55371d18572f9268da49b98ed0d59b71d5 | [
"BSD-3-Clause"
] | 6 | 2019-10-15T08:31:58.000Z | 2021-11-28T02:40:54.000Z | 27.962264 | 343 | 0.551822 | [
[
[
"# Python for Psychologists - Session 4\n## Homework assignment",
"_____no_output_____"
],
[
"**Exercise 1.** Write a for loop that iterates through the list below. Whenever the type of the element is `str`, print \"It's a string!\", whenever it is `int` print \"It's an integer!\". Note: don't put `str` and `int` in quoation marks!",
"_____no_output_____"
]
],
[
[
"my_list = [1, \"gin\", 2, \"beer\", 3, 4, 5]",
"_____no_output_____"
]
],
[
[
"**Exercise 2.** You are probably familiar with the well known, highly sophisticated musical composition of \"99 bottles of beer on the wall\" (if you are not you may go to youtube and familiarize yourself with it). In its original form the song starts counting from 99 to 0 like this:\n\n*99 bottles of beer on the wall, 99 bottles of beer. Take one down, pass it around, 98 bottles of beer on the wall.*\n\n*98 bottles of beer on the wall, 98 bottles of beer. Take one down, pass it around, 97 bottles of beer on the wall.*\n\n*97 bottles of beer on the wall, 97 bottles of beer. Take one down, pass it around, 96 bottles of beer on the wall.*\n\n... and so forth.\n\nWrite a for loop, that prints the songtext of the song (always 1 of the lines above, so 1 verse, per iteration). As it's still a weekday, it should suffice to drink only 20 bottles. Ideally, in the last line (when the last bottle of beer is reached) you will have to print \"bottle\" instead of \"bottles\".",
"_____no_output_____"
],
[
"**Exercise 3.** Write a for loop that iterates through the list below. Whenever the element is \"otter\", increase the value of the variable x (already defined below) by 1. After the for-loop is done, print the value of x.",
"_____no_output_____"
]
],
[
[
"x=0\nanimals = [\"otter\", \"dog\", \"piglet\", \"otter\", \"duckling\", \"otter\", \"hedgehog\", \"kitten\", \"otter\", \"otter\"]",
"_____no_output_____"
]
],
[
[
"By the way: we could have also used the following code (but then you wouldn't have practiced your loop-writing skills ;) ):\n```python\nanimals.count(\"otter\")\n```\n\nThe function \"count\" counts how many times a certain value occurs in a list.",
"_____no_output_____"
],
[
"**Exercise 4.**\nUse a list comprehension to create a new list which only contains those elements of the list below whose first letter is \"a\".",
"_____no_output_____"
]
],
[
[
"bands = [\"aerosmith\", \"black sabbath\", \"abba\", \"creedence clearwater revival\", \"a-ha\"]",
"_____no_output_____"
]
],
[
[
"**Exercise 5.**\n\nFind all of the integer numbers from 1-100 that have a 3 in them. Use `range()` and a list comprehension.",
"_____no_output_____"
],
[
"**Exercise 6.**\n\nBelow there is a list with sublists. Create a new **flattened** list (using a list comprehension) that is \"otter\" if the element is otter, but \"not an otter\" if the element is not \"otter\". In the end, the result should look like this:\n\n['otter', 'not an otter', 'otter', 'not an otter' ,'otter', 'otter', ...]",
"_____no_output_____"
]
],
[
[
"some_list = [[\"otter\", \"sheep\", \"otter\"], [\"cow\", \"otter\", \"otter\"], [\"cat\", \"fish\", \"dolphin\"]]",
"_____no_output_____"
]
],
[
[
"## Optional Exercises",
"_____no_output_____"
],
[
"**Exercise 7.**\n\nBelow is the `some_list` from exercise 6. Do the same thing as in exercise 6 (that is, creating a new list which lets \"otter\" be \"otter\" but replaces everything else by \"not an otter\"), HOWEVER, this time keeping the structure of the sublists inside the list.",
"_____no_output_____"
]
],
[
[
"some_list = [[\"otter\", \"sheep\", \"otter\"], [\"cow\", \"otter\", \"otter\"], [\"cat\", \"fish\", \"dolphin\"]]",
"_____no_output_____"
]
],
[
[
"**Exercise 8.** \n\nBelow there are three lists. Create a fourth list that has the value \"all otters\" if all three lists have \"otter\" written at the respective position. If any (or all) of the lists has/have a different animal name at the respective position, the value of the new list should be \"not all otters\". Use `zip()` and a list comprehension.",
"_____no_output_____"
]
],
[
[
"a = [\"otter\",\"dog\",\"otter\",\"otter\",\"whale\",\"\",\"otter\",\"shark\",\"otter\",\"gull\",\"goat\",\"chicken\"]\nb = [\"dog\",\"otter\",\"cat\",\"otter\",\"parrot\",\"ant\",\"otter\",\"blackbird\",\"eagle\",\"eel\",\"worm\",\"otter\"]\nc = [\"fish\",\"ladybug\",\"otter\",\"otter\",\"spider\",\"horse\",\"otter\",\"donkey\",\"pigeon\",\"mule\",\"mantis\",\"otter\"]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbbe1f31c773803d3fdc2c203cdbca0981b44a2 | 87,188 | ipynb | Jupyter Notebook | Module3/Module3 - Lab5.ipynb | vrushalipatel/aardwolf | 163fb7d894a9ac69f1c1b9ab8dd31cf00abee0c4 | [
"MIT"
] | null | null | null | Module3/Module3 - Lab5.ipynb | vrushalipatel/aardwolf | 163fb7d894a9ac69f1c1b9ab8dd31cf00abee0c4 | [
"MIT"
] | null | null | null | Module3/Module3 - Lab5.ipynb | vrushalipatel/aardwolf | 163fb7d894a9ac69f1c1b9ab8dd31cf00abee0c4 | [
"MIT"
] | null | null | null | 78.760614 | 51,782 | 0.692492 | [
[
[
"# DAT210x - Programming with Python for DS",
"_____no_output_____"
],
[
"## Module3 - Lab5",
"_____no_output_____"
],
[
"This code is intentionally missing! Read the directions on the course lab page!",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n# This is new\nfrom pandas.tools.plotting import parallel_coordinates",
"_____no_output_____"
],
[
"# Look pretty...\n\n# matplotlib.style.use('ggplot')\nplt.style.use('ggplot')",
"_____no_output_____"
],
[
"# load wheat data\ndf = pd.read_csv('Datasets\\wheat.data')\ndf.head()",
"_____no_output_____"
],
[
"df_new = df.drop(labels=['id'],axis=1)\ndf_new",
"_____no_output_____"
],
[
"from pandas.plotting import andrews_curves\n\nplt.figure()\nandrews_curves(df_new,'wheat_type',alpha=0.4)\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecbbe8627934504ffbb18bbe298621b2ce95c93c | 22,817 | ipynb | Jupyter Notebook | docker-build/client/pipeline/pipeline_tutorial_homo_nn.ipynb | guojiex/KubeFATE | 0e3a5460bf695eed6ccea800a658037c6ee6a1b8 | [
"Apache-2.0"
] | 715 | 2019-01-24T10:52:03.000Z | 2019-10-31T12:19:22.000Z | docker-build/client/pipeline/pipeline_tutorial_homo_nn.ipynb | guojiex/KubeFATE | 0e3a5460bf695eed6ccea800a658037c6ee6a1b8 | [
"Apache-2.0"
] | 270 | 2019-02-11T02:57:36.000Z | 2019-08-29T11:22:33.000Z | docker-build/client/pipeline/pipeline_tutorial_homo_nn.ipynb | guojiex/KubeFATE | 0e3a5460bf695eed6ccea800a658037c6ee6a1b8 | [
"Apache-2.0"
] | 200 | 2019-01-26T14:21:35.000Z | 2019-11-01T01:14:36.000Z | 51.739229 | 12,678 | 0.775431 | [
[
[
"## Pipeline Tutorial",
"_____no_output_____"
],
[
"### install",
"_____no_output_____"
],
[
"`Pipeline` is distributed along with [fate_client](https://pypi.org/project/fate-client/).\n\n```bash\npip install fate_client\n```\n\nTo use Pipeline, we need to first specify which `FATE Flow Service` to connect to. Once `fate_client` installed, one can find an cmd enterpoint name `pipeline`:",
"_____no_output_____"
]
],
[
[
"!pipeline --help",
"Usage: pipeline [OPTIONS] COMMAND [ARGS]...\r\n\r\nOptions:\r\n --help Show this message and exit.\r\n\r\nCommands:\r\n init \b - DESCRIPTION: Pipeline Config Command.\r\n"
]
],
[
[
"Assume we have a `FATE Flow Service` in 127.0.0.1:9380(defaults in standalone), then exec",
"_____no_output_____"
]
],
[
[
"!pipeline init --ip 127.0.0.1 --port 9380",
"Pipeline configuration succeeded.\r\n"
]
],
[
[
"### homo nn",
"_____no_output_____"
],
[
"The `pipeline` package provides components to compose a `FATE pipeline`.",
"_____no_output_____"
]
],
[
[
"from pipeline.backend.pipeline import PipeLine\nfrom pipeline.component import DataTransform\nfrom pipeline.component import Reader\nfrom pipeline.component import HomoNN\nfrom pipeline.interface import Data",
"_____no_output_____"
]
],
[
[
"Make a `pipeline` instance:\n\n - initiator: \n * role: guest\n * party: 9999\n - roles:\n * guest: 9999\n * host: [10000, 9999]\n * arbiter: 9999\n ",
"_____no_output_____"
]
],
[
[
"pipeline = PipeLine() \\\n .set_initiator(role='guest', party_id=9999) \\\n .set_roles(guest=9999, host=[10000], arbiter=10000)",
"_____no_output_____"
]
],
[
[
"Define a `Reader` to load data",
"_____no_output_____"
]
],
[
[
"reader_0 = Reader(name=\"reader_0\")\n# set guest parameter\nreader_0.get_party_instance(role='guest', party_id=9999).component_param(\n table={\"name\": \"breast_homo_guest\", \"namespace\": \"experiment\"})\n# set host parameter\nreader_0.get_party_instance(role='host', party_id=10000).component_param(\n table={\"name\": \"breast_homo_host\", \"namespace\": \"experiment\"})",
"_____no_output_____"
]
],
[
[
"Add a `DataTransform` component to parse raw data into Data Instance",
"_____no_output_____"
]
],
[
[
"data_transform_0 = DataTransform(name=\"data_transform_0\", with_label=True)\n# set guest parameter\ndata_transform_0.get_party_instance(role='guest', party_id=9999).component_param(\n with_label=True)\ndata_transform_0.get_party_instance(role='host', party_id=[10000]).component_param(\n with_label=True)",
"_____no_output_____"
]
],
[
[
"Now, we define the `HomoNN` component.",
"_____no_output_____"
]
],
[
[
"homo_nn_0 = HomoNN(\n name=\"homo_nn_0\", \n max_iter=10, \n batch_size=-1, \n early_stop={\"early_stop\": \"diff\", \"eps\": 0.0001})",
"_____no_output_____"
]
],
[
[
"Add single `Dense` layer:",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import Dense\nhomo_nn_0.add(\n Dense(units=1, input_shape=(10,), activation=\"sigmoid\"))",
"_____no_output_____"
]
],
[
[
"Compile:",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras import optimizers\nhomo_nn_0.compile(\n optimizer=optimizers.Adam(learning_rate=0.05), \n metrics=[\"accuracy\", \"AUC\"],\n loss=\"binary_crossentropy\")",
"_____no_output_____"
]
],
[
[
"Add components to pipeline:\n\n - data_transform_0 comsume reader_0's output data\n - homo_nn_0 comsume data_transform_0's output data",
"_____no_output_____"
]
],
[
[
"pipeline.add_component(reader_0)\npipeline.add_component(data_transform_0, data=Data(data=reader_0.output.data))\npipeline.add_component(homo_nn_0, data=Data(train_data=data_transform_0.output.data))\npipeline.compile();",
"_____no_output_____"
]
],
[
[
"Now, submit(fit) our pipeline:",
"_____no_output_____"
]
],
[
[
"pipeline.fit()",
"2020-11-02 17:39:31.756 | INFO | pipeline.utils.invoker.job_submitter:monitor_job_status:121 - Job id is 2020110217393142628946\n"
]
],
[
[
"Success! Now we can get model summary from homo_nn_0:",
"_____no_output_____"
]
],
[
[
"summary = pipeline.get_component(\"homo_nn_0\").get_summary()\nsummary",
"_____no_output_____"
]
],
[
[
"And we can use the summary data to draw the loss curve:",
"_____no_output_____"
]
],
[
[
"%pylab inline\npylab.plot(summary['loss_history'])",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"For more examples about using pipeline to submit `HomoNN` jobs, please refer to [HomoNN Examples](https://github.com/FederatedAI/FATE/tree/master/examples/pipeline/homo_nn)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecbbec90509ba597ef2422d371a87163a45f5f00 | 18,542 | ipynb | Jupyter Notebook | Data_analysis/Ecommerce Purchases Exercise .ipynb | suyogdahal/Data-Science | 230fd55ff8fb9799507a875c413623234985479c | [
"MIT"
] | null | null | null | Data_analysis/Ecommerce Purchases Exercise .ipynb | suyogdahal/Data-Science | 230fd55ff8fb9799507a875c413623234985479c | [
"MIT"
] | null | null | null | Data_analysis/Ecommerce Purchases Exercise .ipynb | suyogdahal/Data-Science | 230fd55ff8fb9799507a875c413623234985479c | [
"MIT"
] | null | null | null | 26.602582 | 354 | 0.442509 | [
[
[
"___\n\n<a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>\n___\n# Ecommerce Purchases Exercise\n\nIn this Exercise you will be given some Fake Data about some purchases done through Amazon! Just go ahead and follow the directions and try your best to answer the questions and complete the tasks. Feel free to reference the solutions. Most of the tasks can be solved in different ways. For the most part, the questions get progressively harder.\n\nPlease excuse anything that doesn't make \"Real-World\" sense in the dataframe, all the data is fake and made-up.\n\nAlso note that all of these questions can be answered with one line of code.\n____\n** Import pandas and read in the Ecommerce Purchases csv file and set it to a DataFrame called ecom. **",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"ecom = pd.read_csv('data/EcommercePurchases.csv' ) ",
"_____no_output_____"
]
],
[
[
"**Check the head of the DataFrame.**",
"_____no_output_____"
]
],
[
[
"ecom.head()",
"_____no_output_____"
]
],
[
[
"** How many rows and columns are there? **",
"_____no_output_____"
]
],
[
[
"ecom.shape",
"_____no_output_____"
]
],
[
[
"** What is the average Purchase Price? **",
"_____no_output_____"
]
],
[
[
"ecom['Purchase Price'].mean()",
"_____no_output_____"
]
],
[
[
"** What were the highest and lowest purchase prices? **",
"_____no_output_____"
]
],
[
[
"ecom['Purchase Price'].max()",
"_____no_output_____"
],
[
"ecom['Purchase Price'].min()",
"_____no_output_____"
]
],
[
[
"** How many people have English 'en' as their Language of choice on the website? **",
"_____no_output_____"
]
],
[
[
"ecom['Language'].value_counts()",
"_____no_output_____"
]
],
[
[
"** How many people have the job title of \"Lawyer\" ? **\n",
"_____no_output_____"
]
],
[
[
"ecom[ecom['Job']=='Lawyer'].count()",
"_____no_output_____"
]
],
[
[
"** How many people made the purchase during the AM and how many people made the purchase during PM ? **\n\n**(Hint: Check out [value_counts()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) ) **",
"_____no_output_____"
]
],
[
[
"ecom.columns\necom['AM or PM'].value_counts()",
"_____no_output_____"
]
],
[
[
"** What are the 5 most common Job Titles? **",
"_____no_output_____"
]
],
[
[
"ecom['Job'].value_counts().sort_values(ascending=False).head()",
"_____no_output_____"
]
],
[
[
"** Someone made a purchase that came from Lot: \"90 WT\" , what was the Purchase Price for this transaction? **",
"_____no_output_____"
]
],
[
[
"ecom.loc[ ecom['Lot'] == \"90 WT\"]['Purchase Price']",
"_____no_output_____"
]
],
[
[
"** What is the email of the person with the following Credit Card Number: 4926535242672853 **",
"_____no_output_____"
]
],
[
[
"ecom[ecom['Credit Card']==4926535242672853]['Email']",
"_____no_output_____"
]
],
[
[
"** How many people have American Express as their Credit Card Provider *and* made a purchase above $95 ?**",
"_____no_output_____"
]
],
[
[
"ecom[(ecom['CC Provider']==\"American Express\") & (ecom['Purchase Price']>95)].count()",
"_____no_output_____"
]
],
[
[
"** Hard: How many people have a credit card that expires in 2025? **",
"_____no_output_____"
]
],
[
[
"count = 0\nfor date in ecom['CC Exp Date']:\n if date.split('/')[1]=='25':\n count+=1\nprint(count)",
"1033\n"
]
],
[
[
"** Hard: What are the top 5 most popular email providers/hosts (e.g. gmail.com, yahoo.com, etc...) **",
"_____no_output_____"
]
],
[
[
"from collections import Counter\nemail=list()\nfor i in ecom['Email']:\n email.append(i.split('@')[1])\nc=Counter(email)\nc.most_common(5)",
"_____no_output_____"
]
],
[
[
"# Great Job!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecbbf4320798b6865eff00219b3f4ff23a52c04d | 2,067 | ipynb | Jupyter Notebook | meta-kaggle.ipynb | nicapotato/KaggleNotebooks | 5fea648e99a92d21710c635db48c808c3ece82b3 | [
"MIT"
] | null | null | null | meta-kaggle.ipynb | nicapotato/KaggleNotebooks | 5fea648e99a92d21710c635db48c808c3ece82b3 | [
"MIT"
] | null | null | null | meta-kaggle.ipynb | nicapotato/KaggleNotebooks | 5fea648e99a92d21710c635db48c808c3ece82b3 | [
"MIT"
] | 1 | 2021-12-03T11:28:09.000Z | 2021-12-03T11:28:09.000Z | 2,067 | 2,067 | 0.725689 | [
[
[
"# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory\nfrom IPython.display import display\n\nimport os\nprint(os.listdir(\"../input\"))\n \n# Any results you write to the current directory are saved as output.\n\n# nifty loading by cafeal - https://www.kaggle.com/cafeal/lightgbm-trial-public-0-742\ninput_files = os.listdir(\"../input\")\nfor filename in input_files:\n locals()[filename.rstrip('.csv')] = pd.read_csv(f'../input/{filename}')#.sample(1000)\n display(locals()[filename.rstrip('.csv')].head())\n display(locals()[filename.rstrip('.csv')].describe())\n #display(locals()[filename.rstrip('.csv')].describe(include=['O']))\n print(filename.rstrip('.csv'), \"## Loaded ##\")",
"_____no_output_____"
],
[
"Submission.head()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
ecbbf64dee9a7017b5eb681f3046cf87cfc0a355 | 3,061 | ipynb | Jupyter Notebook | Week1_Practice2_HousePrice.ipynb | Kathy-Xueqing-Wang/Exercises_TensorFlow-in-practice | f8a43fd85d6c2a37eddff4d019cd9da3210b6c14 | [
"MIT"
] | null | null | null | Week1_Practice2_HousePrice.ipynb | Kathy-Xueqing-Wang/Exercises_TensorFlow-in-practice | f8a43fd85d6c2a37eddff4d019cd9da3210b6c14 | [
"MIT"
] | null | null | null | Week1_Practice2_HousePrice.ipynb | Kathy-Xueqing-Wang/Exercises_TensorFlow-in-practice | f8a43fd85d6c2a37eddff4d019cd9da3210b6c14 | [
"MIT"
] | null | null | null | 24.488 | 276 | 0.45998 | [
[
[
"<a href=\"https://colab.research.google.com/github/Kathy-Xueqing-Wang/Exercises_TensorFlow-in-practice/blob/master/Week1_Practice2_HousePrice.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"This time we want to predict the house price. (As simple as the 1st exercise.)",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport numpy as np\nfrom tensorflow import keras",
"_____no_output_____"
],
[
"x = np.array([1,2,3,4,5,6], dtype = float)\ny = np.array([1, 1.5, 2, 2.5, 3, 3.5], dtype = float)",
"_____no_output_____"
],
[
"model = keras.Sequential([keras.layers.Dense(units = 1, input_shape = [1])])\nmodel.compile(optimizer = 'sgd', loss = 'mean_squared_error')\nmodel.fit(x, y, epochs = 500)",
"_____no_output_____"
],
[
"model.predict([7])",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecbc002b1bf5669fee62d48b6a43a0318f5e730f | 27,872 | ipynb | Jupyter Notebook | site/en/tutorials/images/transfer_learning_with_hub.ipynb | kewlcoder/docs | b0971871742a84c283c7aaa764ab13f127bd4439 | [
"Apache-2.0"
] | 491 | 2020-01-27T19:05:32.000Z | 2022-03-31T08:50:44.000Z | site/en/tutorials/images/transfer_learning_with_hub.ipynb | kewlcoder/docs | b0971871742a84c283c7aaa764ab13f127bd4439 | [
"Apache-2.0"
] | 511 | 2020-01-27T22:40:05.000Z | 2022-03-21T08:40:55.000Z | site/en/tutorials/images/transfer_learning_with_hub.ipynb | kewlcoder/docs | b0971871742a84c283c7aaa764ab13f127bd4439 | [
"Apache-2.0"
] | 627 | 2020-01-27T21:49:52.000Z | 2022-03-28T18:11:50.000Z | 29.905579 | 548 | 0.526693 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Transfer learning with TensorFlow Hub\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/transfer_learning_with_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"[TensorFlow Hub](https://tfhub.dev/) is a repository of pre-trained TensorFlow models.\n\nThis tutorial demonstrates how to:\n\n1. Use models from TensorFlow Hub with `tf.keras`.\n1. Use an image classification model from TensorFlow Hub.\n1. Do simple transfer learning to fine-tune a model for your own image classes.",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport time\n\nimport PIL.Image as Image\nimport matplotlib.pylab as plt\n\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\nimport datetime\n\n%load_ext tensorboard",
"_____no_output_____"
]
],
[
[
"## An ImageNet classifier\n\nYou'll start by using a classifier model pre-trained on the [ImageNet](https://en.wikipedia.org/wiki/ImageNet) benchmark dataset—no initial training required!",
"_____no_output_____"
],
[
"### Download the classifier\n\nSelect a <a href=\"https://arxiv.org/abs/1801.04381\" class=\"external\">MobileNetV2</a> pre-trained model [from TensorFlow Hub](https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2) and wrap it as a Keras layer with [`hub.KerasLayer`](https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer). Any <a href=\"https://tfhub.dev/s?q=tf2&module-type=image-classification/\" class=\"external\">compatible image classifier model</a> from TensorFlow Hub will work here, including the examples provided in the drop-down below.",
"_____no_output_____"
]
],
[
[
"mobilenet_v2 =\"https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4\"\ninception_v3 = \"https://tfhub.dev/google/imagenet/inception_v3/classification/5\"\n\nclassifier_model = mobilenet_v2 #@param [\"mobilenet_v2\", \"inception_v3\"] {type:\"raw\"}",
"_____no_output_____"
],
[
"IMAGE_SHAPE = (224, 224)\n\nclassifier = tf.keras.Sequential([\n hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE+(3,))\n])",
"_____no_output_____"
]
],
[
[
"### Run it on a single image",
"_____no_output_____"
],
[
"Download a single image to try the model on:",
"_____no_output_____"
]
],
[
[
"grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')\ngrace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)\ngrace_hopper",
"_____no_output_____"
],
[
"grace_hopper = np.array(grace_hopper)/255.0\ngrace_hopper.shape",
"_____no_output_____"
]
],
[
[
"Add a batch dimension (with `np.newaxis`) and pass the image to the model:",
"_____no_output_____"
]
],
[
[
"result = classifier.predict(grace_hopper[np.newaxis, ...])\nresult.shape",
"_____no_output_____"
]
],
[
[
"The result is a 1001-element vector of logits, rating the probability of each class for the image.\n\nThe top class ID can be found with `tf.math.argmax`:",
"_____no_output_____"
]
],
[
[
"predicted_class = tf.math.argmax(result[0], axis=-1)\npredicted_class",
"_____no_output_____"
]
],
[
[
"### Decode the predictions\n\nTake the `predicted_class` ID (such as `653`) and fetch the ImageNet dataset labels to decode the predictions:",
"_____no_output_____"
]
],
[
[
"labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')\nimagenet_labels = np.array(open(labels_path).read().splitlines())",
"_____no_output_____"
],
[
"plt.imshow(grace_hopper)\nplt.axis('off')\npredicted_class_name = imagenet_labels[predicted_class]\n_ = plt.title(\"Prediction: \" + predicted_class_name.title())",
"_____no_output_____"
]
],
[
[
"## Simple transfer learning",
"_____no_output_____"
],
[
"But what if you want to create a custom classifier using your own dataset that has classes that aren't included in the original ImageNet dataset (that the pre-trained model was trained on)?\n\nTo do that, you can:\n\n1. Select a pre-trained model from TensorFlow Hub; and\n2. Retrain the top (last) layer to recognize the classes from your custom dataset.",
"_____no_output_____"
],
[
"### Dataset\n\nIn this example, you will use the TensorFlow flowers dataset:",
"_____no_output_____"
]
],
[
[
"data_root = tf.keras.utils.get_file(\n 'flower_photos',\n 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n untar=True)",
"_____no_output_____"
]
],
[
[
"First, load the image data off disk using `tf.keras.utils.image_dataset_from_directory`, which will generate a `tf.data.Dataset`:",
"_____no_output_____"
]
],
[
[
"batch_size = 32\nimg_height = 224\nimg_width = 224\n\ntrain_ds = tf.keras.utils.image_dataset_from_directory(\n str(data_root),\n validation_split=0.2,\n subset=\"training\",\n seed=123,\n image_size=(img_height, img_width),\n batch_size=batch_size\n)\n\nval_ds = tf.keras.utils.image_dataset_from_directory(\n str(data_root),\n validation_split=0.2,\n subset=\"validation\",\n seed=123,\n image_size=(img_height, img_width),\n batch_size=batch_size\n)",
"_____no_output_____"
]
],
[
[
"The flowers dataset has five classes:",
"_____no_output_____"
]
],
[
[
"class_names = np.array(train_ds.class_names)\nprint(class_names)",
"_____no_output_____"
]
],
[
[
"Second, because TensorFlow Hub's convention for image models is to expect float inputs in the `[0, 1]` range, use the `tf.keras.layers.Rescaling` preprocessing layer to achieve this.",
"_____no_output_____"
],
[
"Note: You could also include the `tf.keras.layers.Rescaling` layer inside the model. Refer to the [Working with preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) guide for a discussion of the tradeoffs.",
"_____no_output_____"
]
],
[
[
"normalization_layer = tf.keras.layers.Rescaling(1./255)\ntrain_ds = train_ds.map(lambda x, y: (normalization_layer(x), y)) # Where x—images, y—labels.\nval_ds = val_ds.map(lambda x, y: (normalization_layer(x), y)) # Where x—images, y—labels.",
"_____no_output_____"
]
],
[
[
"Third, finish the input pipeline by using buffered prefetching with `Dataset.prefetch`, so you can yield the data from disk without I/O blocking issues.\n\nThese are some of the most important `tf.data` methods you should use when loading data. Interested readers can learn more about them, as well as how to cache data to disk and other techniques, in the [Better performance with the tf.data API](https://www.tensorflow.org/guide/data_performance#prefetching) guide.",
"_____no_output_____"
]
],
[
[
"AUTOTUNE = tf.data.AUTOTUNE\ntrain_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)\nval_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)",
"_____no_output_____"
],
[
"for image_batch, labels_batch in train_ds:\n print(image_batch.shape)\n print(labels_batch.shape)\n break",
"_____no_output_____"
]
],
[
[
"### Run the classifier on a batch of images",
"_____no_output_____"
],
[
"Now, run the classifier on an image batch:",
"_____no_output_____"
]
],
[
[
"result_batch = classifier.predict(train_ds)",
"_____no_output_____"
],
[
"predicted_class_names = imagenet_labels[tf.math.argmax(result_batch, axis=-1)]\npredicted_class_names",
"_____no_output_____"
]
],
[
[
"Check how these predictions line up with the images:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,9))\nplt.subplots_adjust(hspace=0.5)\nfor n in range(30):\n plt.subplot(6,5,n+1)\n plt.imshow(image_batch[n])\n plt.title(predicted_class_names[n])\n plt.axis('off')\n_ = plt.suptitle(\"ImageNet predictions\")",
"_____no_output_____"
]
],
[
[
"Note: all images are licensed CC-BY, creators are listed in the LICENSE.txt file.\n\nThe results are far from perfect, but reasonable considering that these are not the classes the model was trained for (except for \"daisy\").",
"_____no_output_____"
],
[
"### Download the headless model\n\nTensorFlow Hub also distributes models without the top classification layer. These can be used to easily perform transfer learning.\n\nSelect a <a href=\"https://arxiv.org/abs/1801.04381\" class=\"external\">MobileNetV2</a> pre-trained model <a href=\"https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4\" class=\"external\">from TensorFlow Hub</a>. Any <a href=\"https://tfhub.dev/s?module-type=image-feature-vector&q=tf2\" class=\"external\">compatible image feature vector model</a> from TensorFlow Hub will work here, including the examples from the drop-down menu.",
"_____no_output_____"
]
],
[
[
"mobilenet_v2 = \"https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4\"\ninception_v3 = \"https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4\"\n\nfeature_extractor_model = mobilenet_v2 #@param [\"mobilenet_v2\", \"inception_v3\"] {type:\"raw\"}",
"_____no_output_____"
]
],
[
[
"Create the feature extractor by wrapping the pre-trained model as a Keras layer with [`hub.KerasLayer`](https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer). Use the `trainable=False` argument to freeze the variables, so that the training only modifies the new classifier layer:",
"_____no_output_____"
]
],
[
[
"feature_extractor_layer = hub.KerasLayer(\n feature_extractor_model,\n input_shape=(224, 224, 3),\n trainable=False)",
"_____no_output_____"
]
],
[
[
"The feature extractor returns a 1280-long vector for each image (the image batch size remains at 32 in this example):",
"_____no_output_____"
]
],
[
[
"feature_batch = feature_extractor_layer(image_batch)\nprint(feature_batch.shape)",
"_____no_output_____"
]
],
[
[
"### Attach a classification head\n\nTo complete the model, wrap the feature extractor layer in a `tf.keras.Sequential` model and add a fully-connected layer for classification:",
"_____no_output_____"
]
],
[
[
"num_classes = len(class_names)\n\nmodel = tf.keras.Sequential([\n feature_extractor_layer,\n tf.keras.layers.Dense(num_classes)\n])\n\nmodel.summary()",
"_____no_output_____"
],
[
"predictions = model(image_batch)",
"_____no_output_____"
],
[
"predictions.shape",
"_____no_output_____"
]
],
[
[
"### Train the model\n\nUse `Model.compile` to configure the training process and add a `tf.keras.callbacks.TensorBoard` callback to create and store logs:",
"_____no_output_____"
]
],
[
[
"model.compile(\n optimizer=tf.keras.optimizers.Adam(),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['acc'])\n\nlog_dir = \"logs/fit/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\ntensorboard_callback = tf.keras.callbacks.TensorBoard(\n log_dir=log_dir,\n histogram_freq=1) # Enable histogram computation for every epoch.",
"_____no_output_____"
]
],
[
[
"Now use the `Model.fit` method to train the model.\n\nTo keep this example short, you'll be training for just 10 epochs. To visualize the training progress in TensorBoard later, create and store logs with a [TensorBoard callback](https://www.tensorflow.org/tensorboard/get_started#using_tensorboard_with_keras_modelfit).",
"_____no_output_____"
]
],
[
[
"NUM_EPOCHS = 10\n\nhistory = model.fit(train_ds,\n validation_data=val_ds,\n epochs=NUM_EPOCHS,\n callbacks=tensorboard_callback)",
"_____no_output_____"
]
],
[
[
"Start the TensorBoard to view how the metrics change with each epoch and to track other scalar values:",
"_____no_output_____"
]
],
[
[
"%tensorboard --logdir logs/fit",
"_____no_output_____"
]
],
[
[
"<!-- <img class=\"tfo-display-only-on-site\" src=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/images/tensorboard_transfer_learning_with_hub.png?raw=1\"/> -->",
"_____no_output_____"
],
[
"### Check the predictions\n\nObtain the ordered list of class names from the model predictions:",
"_____no_output_____"
]
],
[
[
"predicted_batch = model.predict(image_batch)\npredicted_id = tf.math.argmax(predicted_batch, axis=-1)\npredicted_label_batch = class_names[predicted_id]\nprint(predicted_label_batch)",
"_____no_output_____"
]
],
[
[
"Plot the model predictions:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,9))\nplt.subplots_adjust(hspace=0.5)\n\nfor n in range(30):\n plt.subplot(6,5,n+1)\n plt.imshow(image_batch[n])\n plt.title(predicted_label_batch[n].title())\n plt.axis('off')\n_ = plt.suptitle(\"Model predictions\")",
"_____no_output_____"
]
],
[
[
"## Export and reload your model\n\nNow that you've trained the model, export it as a SavedModel so you can reuse it later.",
"_____no_output_____"
]
],
[
[
"t = time.time()\n\nexport_path = \"/tmp/saved_models/{}\".format(int(t))\nmodel.save(export_path)\n\nexport_path",
"_____no_output_____"
]
],
[
[
"Confirm that you can reload the SavedModel and that the model is able to output the same results:",
"_____no_output_____"
]
],
[
[
"reloaded = tf.keras.models.load_model(export_path)",
"_____no_output_____"
],
[
"result_batch = model.predict(image_batch)\nreloaded_result_batch = reloaded.predict(image_batch)",
"_____no_output_____"
],
[
"abs(reloaded_result_batch - result_batch).max()",
"_____no_output_____"
],
[
"reloaded_predicted_id = tf.math.argmax(reloaded_result_batch, axis=-1)\nreloaded_predicted_label_batch = class_names[reloaded_predicted_id]\nprint(reloaded_predicted_label_batch)",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,9))\nplt.subplots_adjust(hspace=0.5)\nfor n in range(30):\n plt.subplot(6,5,n+1)\n plt.imshow(image_batch[n])\n plt.title(reloaded_predicted_label_batch[n].title())\n plt.axis('off')\n_ = plt.suptitle(\"Model predictions\")",
"_____no_output_____"
]
],
[
[
"## Next steps\n\nYou can use the SavedModel to load for inference or convert it to a [TensorFlow Lite](https://www.tensorflow.org/lite/convert/) model (for on-device machine learning) or a [TensorFlow.js](https://www.tensorflow.org/js/tutorials#convert_pretrained_models_to_tensorflowjs) model (for machine learning in JavaScript).\n\nDiscover [more tutorials](https://www.tensorflow.org/hub/tutorials) to learn how to use pre-trained models from TensorFlow Hub on image, text, audio, and video tasks.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecbc09e4de5fb30fc93cbec7fff51859c3cea6b6 | 1,506 | ipynb | Jupyter Notebook | A Beginners Guide to Python/Homework Solutions/05. Assignment (HW).ipynb | fluffy-hamster/A-Beginners-Guide-to-Python | 1fa788b3fe86070ef7d0182d7b2bf18cc5a30651 | [
"MIT"
] | 44 | 2017-06-21T22:24:20.000Z | 2019-11-07T16:36:06.000Z | A Beginners Guide to Python/Homework Solutions/05. Assignment (HW).ipynb | fluffy-hamster/A-Beginners-Guide-to-Python | 1fa788b3fe86070ef7d0182d7b2bf18cc5a30651 | [
"MIT"
] | null | null | null | A Beginners Guide to Python/Homework Solutions/05. Assignment (HW).ipynb | fluffy-hamster/A-Beginners-Guide-to-Python | 1fa788b3fe86070ef7d0182d7b2bf18cc5a30651 | [
"MIT"
] | 6 | 2017-06-22T06:14:28.000Z | 2018-03-05T11:12:51.000Z | 25.1 | 150 | 0.536521 | [
[
[
"# Homework\n\n1. create a variable called \"my_name\". Its value should be your name (a string).\n2. create a variable called \"kitchen_utensil\". Its value should be your favourite kitchen tool (a string). Personally, I like a good spoon.\n3. Now type into Python: \n print(\"Hello, my name is \" + my_name + \" and one time, at band-camp, I used a \" + kitchen_utensil + \".\")\n \n## Possible Solution",
"_____no_output_____"
]
],
[
[
"my_name = \"Chris\"\nkitchen_utensil = \"big fat shiny spoon\"\n\nprint(\"Hello, my name is \" + my_name + \" and one time, at band-camp, I used a \" + kitchen_utensil + \".\")",
"Hello, my name is Chris and one time, at band-camp, I used a big fat shiny spoon.\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
ecbc13768e95cc54134074c2fd288f4c0ac41da4 | 801,464 | ipynb | Jupyter Notebook | examples/kaguya_tc_isis_cmp.ipynb | Kelvinrr/knoten | 8bc3831ed4669940ddfc4557615f7718a5891eef | [
"Unlicense"
] | null | null | null | examples/kaguya_tc_isis_cmp.ipynb | Kelvinrr/knoten | 8bc3831ed4669940ddfc4557615f7718a5891eef | [
"Unlicense"
] | null | null | null | examples/kaguya_tc_isis_cmp.ipynb | Kelvinrr/knoten | 8bc3831ed4669940ddfc4557615f7718a5891eef | [
"Unlicense"
] | null | null | null | 1,102.42641 | 162,672 | 0.954309 | [
[
[
"# Comparing a USGSCSM and ISIS camera for Kaguya Terrain Camera",
"_____no_output_____"
]
],
[
[
"import pvl\nimport os\nimport tempfile\nimport csmapi\nimport json\n\nos.environ['ISISROOT'] = '/usgs/pkgs/isis3.8.0_RC1/install'\nimport knoten\nimport ale\nfrom knoten import vis\nfrom ale.drivers.kaguya_drivers import KaguyaTcPds3NaifSpiceDriver\nfrom ale.formatters.usgscsm_formatter import to_usgscsm\nfrom ale import util\nfrom IPython.display import Image\nfrom pysis import isis\nfrom pysis.exceptions import ProcessError\n\nimport plotly\nplotly.offline.init_notebook_mode(connected=True)",
"_____no_output_____"
]
],
[
[
"## Make a CSM sensor model\nRequires TC1S2B0_01_02842S506E1942.img in data directory",
"_____no_output_____"
]
],
[
[
"fileName = 'data/TC1S2B0_01_02842S506E1942.img'\ncamera = knoten.csm.create_csm(fileName)",
"_____no_output_____"
]
],
[
[
"## Ingest the image and spiceinit",
"_____no_output_____"
]
],
[
[
"cub_loc = os.path.splitext(fileName)[0] + '.cub'\n\ntry: \n isis.kaguyatc2isis(from_=fileName, to=cub_loc)\nexcept ProcessError as e:\n print(e.stderr)\n\ntry:\n isis.spiceinit(from_=cub_loc, shape='ellipsoid')#, iak='/home/arsanders/testData/kaguyaTcAddendum007.ti')\nexcept ProcessError as e:\n print(e.stderr)\n\nwith KaguyaTcPds3NaifSpiceDriver(fileName) as driver:\n usgscsmString = to_usgscsm(driver)\n usgscsm_dict = json.loads(usgscsmString)\n\n csm_isd = os.path.splitext(fileName)[0] + '.json'\n json.dump(usgscsm_dict, open(csm_isd, 'w'))",
"_____no_output_____"
]
],
[
[
"## Compare USGS CSM and ISIS pixels",
"_____no_output_____"
]
],
[
[
"csmisis_diff_lv_plot, csmisis_diff_ephem_plot, external_orientation_data = vis.external_orientation_diff(csm_isd, cub_loc, 10, 50, 600, 600)",
"_____no_output_____"
],
[
"csmisis_diff_lv_plot_bytes = csmisis_diff_lv_plot.to_image(format=\"png\")\ncsmisis_diff_ephem_plot_bytes = csmisis_diff_ephem_plot.to_image(format=\"png\")\nImage(csmisis_diff_lv_plot_bytes)",
"_____no_output_____"
],
[
"Image(csmisis_diff_ephem_plot_bytes)",
"_____no_output_____"
],
[
"external_orientation_data[['diffx', 'diffy', 'diffz', 'diffu', 'diffv', 'diffw']].describe()",
"_____no_output_____"
],
[
"isis2csm_plot, csm2isis_plot, isiscsm_plotlatlon, isiscsm_plotbf, isis2csm_data, csm2isis_data, isiscsm_latlondata, isiscsm_bfdata = vis.reprojection_diff(csm_isd, cub_loc, 10, 50, 500, 500)",
"_____no_output_____"
],
[
"Image(isis2csm_plot.to_image())",
"_____no_output_____"
],
[
"isis2csm_data[['diff line', 'diff sample']].describe()",
"_____no_output_____"
],
[
"Image(csm2isis_plot.to_image())",
"_____no_output_____"
],
[
"csm2isis_data[['diff line', 'diff sample']].describe()",
"_____no_output_____"
],
[
"Image(isiscsm_plotlatlon.to_image())",
"_____no_output_____"
],
[
"Image(isiscsm_plotbf.to_image())",
"_____no_output_____"
],
[
"isiscsm_bfdata[['diffx', 'diffy', 'diffz']].describe()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbc1a7ed9232026158b6d81156eb87bd8364508 | 393,083 | ipynb | Jupyter Notebook | Copy_of_OH_Binary_Classification_Model.ipynb | cwmarris/pull-request-monitor | a9b4381cd9d5547200f9d0afb47edb89e3ae7051 | [
"MIT"
] | null | null | null | Copy_of_OH_Binary_Classification_Model.ipynb | cwmarris/pull-request-monitor | a9b4381cd9d5547200f9d0afb47edb89e3ae7051 | [
"MIT"
] | null | null | null | Copy_of_OH_Binary_Classification_Model.ipynb | cwmarris/pull-request-monitor | a9b4381cd9d5547200f9d0afb47edb89e3ae7051 | [
"MIT"
] | null | null | null | 61.199284 | 32,218 | 0.562433 | [
[
[
"<a href=\"https://colab.research.google.com/github/cwmarris/pull-request-monitor/blob/master/Copy_of_OH_Binary_Classification_Model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Binary Classification Models\n- Author: Amy Zhuang\n- Last Updated: May 2021",
"_____no_output_____"
],
[
"## Types Of Binary Classification Models",
"_____no_output_____"
],
[
"* Logistic Regression: Logistic regression is a statistical model that uses the logistic function to model the probability of an event happening. The logistic function is an s-shaped curve with values ranging from 0 to 1.\n* Least Absolute Shrinkage and Selection Operator (LASSO) Regression: LASSO regression is also called L1 regularization. It shrinks the model coefficients based on a penalty term added to the absolute values of the coefficients. Some of the coefficients may become zero, so LASSO can do automatic variable selection and make a model simpler. The penalty parameter controls the strength of the L1 penalty. A large penalty parameter value means more coefficients will be set to zero.\n* Ridge Regression: Ridge regression is also called L2 regularization. It is similar to LASSO regression in the sense that it can decrease the magnitude of model coefficients. It shrinks the model coefficients based on a penalty term added to the square of the coefficients. The penalty parameter controls the strength of the L2 penalty. A large penalty parameter value means more coefficients will be close to zero, but they will not be set exactly to zero.\n* Elastic Net: Elastic net is a combination of LASSO and ridge regression.\n* K Nearest Neighbors (KNN): KNN is a supervised machine learning model that makes predictions based on the values of the nearest neighbors. It is easy to implement and easy to understand, but can be slow for large datasets.\n* Support Vector Machine (SVM): SVM uses the hyperplane that maximizes the margin between classes to make predictions.\n* Decision Tree: Uses a tree structure to split data and make predictions.\n* Random Forest: Random forest is an ensemble model of many decision trees. The decision trees are trained independently.\n* Extra Tree: Extra tree is also called extremely randomized trees. It is similar to the random forest. There are two major differences between the random forest and the extra tree model. Random forest uses a bootstrap subset of the dataset for individual trees, while the extra tree model uses all the data in the dataset for individual trees. Random forest splits data at the best split point of a chosen metric, while extra tree chooses split points randomly.\n* Gradient Boosting Machine: Similar to the random forest, the gradient boosting machine is an ensemble model with many trees. Unlike the random forest, the trees in the gradient boosting machine are not independent. Each later tree depends on the errors of the previous trees, so the model is trained in a sequential manner. Gradient descent is used to minimize the loss when adding new models.\n* XGBoost: XGBoost stands for eXtreme Gradient Boosting. It is a variation of the gradient boosting machine. It supports parallel computing, so it is much faster than GBM. It also has a more efficient optimization algorithm, so it usually produces better performance than GBM.\n* Naive Bayes Model: The naive Bayes model is based on Bayes' theorem. It is naive in the sense that it assumes independence among predictors.",
"_____no_output_____"
],
[
"### A Deep Look at XGBoost\n* Reference 1: https://machinelearningmastery.com/gentle-introduction-xgboost-applied-machine-learning/\n* Reference 2: https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/",
"_____no_output_____"
],
[
"#### What makes XGBoost special?\n**Gradient Boosting Types**\n* Gradient boosting algorithm with learning rate\n* Stochastic Gradient Boosting with sub-sampling at the row, column and column-per-split levels.\n* Regularized Gradient Boosting with L1 and L2 regularization\n* Users can define custom optimization objectives and evaluation criteria.\n\n**System Features**\n* Parallelization of tree construction using all CPU cores during training.\n* Distributed Computing for training very large models using a cluster of machines.\n* Out-of-Core Computing for very large datasets that do not fit into memory.\n* Cache Optimization of data structures and algorithms to make best use of hardware.\n\n**Algorithm Features**\n* Sparse Aware implementation with automatic handling of missing data values.\n* Block Structure to support the parallelization of tree construction.\n* Continued Training, so an already fitted model can be further boosted on new data.\n* Splits to max_depth and then prunes the tree backwards. This helps to find potential positive gain after a negative gain.\n* Has built-in cross-validation (a sketch is shown below).",
"_____no_output_____"
],
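[
"The built-in cross-validation mentioned above can be illustrated with a short sketch (not part of the original notebook; it loads the same scikit-learn breast cancer data that is used later on):\n\n```python\n# Hedged sketch of xgboost's built-in cross-validation (xgboost.cv)\nfrom sklearn.datasets import load_breast_cancer\nimport xgboost\n\nX_demo, y_demo = load_breast_cancer(return_X_y=True)\ndtrain = xgboost.DMatrix(X_demo, label=y_demo)\nparams = {'objective': 'binary:logistic', 'eval_metric': 'auc', 'max_depth': 3, 'eta': 0.1}\ncv_results = xgboost.cv(params, dtrain, num_boost_round=100, nfold=5, seed=42)\nprint(cv_results.tail())  # mean/std train and test AUC per boosting round\n```",
"_____no_output_____"
],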
[
"#### XGBoost Hyperparameters\n**Overall Parameters**\n* booster has two options: gbtree (default) is for tree-based models and gblinear is for linear models.\n* silent controls the messages. Default is 0, and 1 means no messages will be printed.\n* nthread automatically detects and uses all the cores available.\n\n**Tree Parameters**\n* num_boost_round (n_estimators in the scikit-learn wrapper): number of trees/boosting rounds\n* eta/learning_rate: learning rate with the default value of 0.3\n* min_child_weight: minimum sum of instance weights required in a child. Higher values make the model more conservative and help prevent overfitting, but values that are too high may cause underfitting.\n* max_depth: maximum depth of a tree.\n* max_leaf_nodes: maximum number of terminal nodes in a tree\n* gamma: the minimum loss reduction required to make a split.\n* subsample: percentage of observations to be randomly sampled for each tree. Lower values can help prevent overfitting. Typical range 0.5-1.\n* colsample_bytree: percentage of columns to be sampled for each tree\n* colsample_bylevel: subsample ratio of columns for each split in each level.\n* lambda/reg_lambda: L2 regularization term on weights. Default=1\n* alpha/reg_alpha: L1 regularization term on weights. Can be used for dimension reduction. Default=0\n* scale_pos_weight: controls the balance of positive and negative weights; helps imbalanced data to converge.\n* max_delta_step: not commonly used.\n\n**Optimization Parameters**\n* objective: loss function to be minimized\n * binary:logistic – logistic regression for binary classification, returns predicted probability (not class)\n * multi:softmax – multiclass classification using the softmax objective, returns predicted class (not probabilities); you also need to set an additional num_class (number of classes) parameter defining the number of unique classes\n * multi:softprob – same as softmax, but returns the predicted probability of each data point belonging to each class.\n* eval_metric [default according to objective]\n * The metric to be used for validation data.\n * The default values are rmse for regression and error for classification.\n * Typical values are:\n * rmse – root mean square error\n * mae – mean absolute error\n * logloss – negative log-likelihood\n * error – binary classification error rate (0.5 threshold)\n * merror – multiclass classification error rate\n * mlogloss – multiclass log loss\n * auc – area under the curve\n\nA short sketch mapping several of these parameters onto the scikit-learn wrapper is shown below.",
"_____no_output_____"
],
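[
"A minimal sketch (an illustration, not tuned settings) of how several of the tree parameters above map onto the scikit-learn wrapper, where eta becomes learning_rate and lambda/alpha become reg_lambda/reg_alpha:\n\n```python\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\nfrom xgboost import XGBClassifier\n\nX_demo, y_demo = load_breast_cancer(return_X_y=True)\nX_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.2, random_state=42)\n\nxgb_demo = XGBClassifier(\n    n_estimators=200,       # number of boosting rounds (trees)\n    learning_rate=0.1,      # eta\n    max_depth=4,\n    min_child_weight=1,\n    gamma=0,\n    subsample=0.8,\n    colsample_bytree=0.8,\n    reg_alpha=0,            # L1 penalty\n    reg_lambda=1,           # L2 penalty\n    objective='binary:logistic',\n    random_state=42)\nxgb_demo.fit(X_tr, y_tr)\nprint(xgb_demo.score(X_te, y_te))  # accuracy on the held-out split\n```",
"_____no_output_____"
],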
[
"#### Early Stopping\n* Reference: https://machinelearningmastery.com/avoid-overfitting-by-early-stopping-with-xgboost-in-python/\n* Early stopping is an approach to training complex machine learning models to avoid overfitting. It works by monitoring the performance of the model that is being trained on a separate test dataset and stopping the training procedure once the performance on the test dataset has not improved after a fixed number of training iterations. It avoids overfitting by attempting to automatically select the inflection point where performance on the test dataset starts to decrease while performance on the training dataset continues to improve as the model starts to overfit. The performance measure may be the loss function that is being optimized to train the model (such as logarithmic loss), or an external metric of interest to the problem in general (such as classification accuracy).",
"_____no_output_____"
],
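[
"A hedged sketch of early stopping using the native xgboost training API (argument placement in the scikit-learn wrapper varies between xgboost versions, so the native API is shown here):\n\n```python\nimport xgboost\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\n\nX_demo, y_demo = load_breast_cancer(return_X_y=True)\nX_tr, X_val, y_tr, y_val = train_test_split(X_demo, y_demo, test_size=0.2, random_state=42)\ndtrain = xgboost.DMatrix(X_tr, label=y_tr)\ndval = xgboost.DMatrix(X_val, label=y_val)\n\nparams = {'objective': 'binary:logistic', 'eval_metric': 'logloss', 'eta': 0.05, 'max_depth': 3}\nbooster = xgboost.train(params, dtrain, num_boost_round=1000,\n                        evals=[(dval, 'validation')],\n                        early_stopping_rounds=10,  # stop after 10 rounds without improvement\n                        verbose_eval=False)\nprint(booster.best_iteration)  # the round where validation logloss was best\n```",
"_____no_output_____"
],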
[
"This is a reference for feature importance of boosting: https://stats.stackexchange.com/questions/162162/relative-variable-importance-for-boosting",
"_____no_output_____"
],
[
"# Which Model to use?",
"_____no_output_____"
],
[
"If the project goal is to build a solid model with performance as good as possible, and the project has the time and resources for that, then try different models and see which one has the best performance.\n\nIf the project goal is to get insights from the model, logistic regression and its variations (LASSO, Ridge, Elastic Net) or decision tree models produce easy-to-interpret results.\n\nIf time and resources are limited and you would like to run a quick model with decent results, use tree-based models such as random forest or XGBoost.\n\nOr you can use ensemble models to incorporate every model in a meta model. A quick cross-validated comparison of a few candidates is sketched below.",
"_____no_output_____"
],
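[
"As a rough starting point, candidate models can be compared with cross-validation before committing to one. This is only a sketch (it loads the breast cancer data directly so it can run on its own; the rest of the notebook builds the comparison in more detail):\n\n```python\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier\nfrom xgboost import XGBClassifier\n\nX_demo, y_demo = load_breast_cancer(return_X_y=True)\ncandidates = {'logistic regression': LogisticRegression(max_iter=5000),\n              'random forest': RandomForestClassifier(n_estimators=200, random_state=0),\n              'xgboost': XGBClassifier()}\nfor name, model in candidates.items():\n    scores = cross_val_score(model, X_demo, y_demo, cv=5, scoring='roc_auc')\n    print(f'{name}: {scores.mean():.3f} +/- {scores.std():.3f}')\n```",
"_____no_output_____"
],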
[
"## Import Libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# from sklearn.decomposition import PCA\nimport seaborn as sns\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import classification_report\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.metrics import confusion_matrix, log_loss\nfrom sklearn.ensemble import StackingClassifier",
"_____no_output_____"
]
],
[
[
"## Readin Data",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets\ndata = datasets.load_breast_cancer()",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"df = pd.DataFrame(data=data.data, columns=data.feature_names)\ndf['target']=data.target\ndf.head()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 569 entries, 0 to 568\nData columns (total 31 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 mean radius 569 non-null float64\n 1 mean texture 569 non-null float64\n 2 mean perimeter 569 non-null float64\n 3 mean area 569 non-null float64\n 4 mean smoothness 569 non-null float64\n 5 mean compactness 569 non-null float64\n 6 mean concavity 569 non-null float64\n 7 mean concave points 569 non-null float64\n 8 mean symmetry 569 non-null float64\n 9 mean fractal dimension 569 non-null float64\n 10 radius error 569 non-null float64\n 11 texture error 569 non-null float64\n 12 perimeter error 569 non-null float64\n 13 area error 569 non-null float64\n 14 smoothness error 569 non-null float64\n 15 compactness error 569 non-null float64\n 16 concavity error 569 non-null float64\n 17 concave points error 569 non-null float64\n 18 symmetry error 569 non-null float64\n 19 fractal dimension error 569 non-null float64\n 20 worst radius 569 non-null float64\n 21 worst texture 569 non-null float64\n 22 worst perimeter 569 non-null float64\n 23 worst area 569 non-null float64\n 24 worst smoothness 569 non-null float64\n 25 worst compactness 569 non-null float64\n 26 worst concavity 569 non-null float64\n 27 worst concave points 569 non-null float64\n 28 worst symmetry 569 non-null float64\n 29 worst fractal dimension 569 non-null float64\n 30 target 569 non-null int64 \ndtypes: float64(30), int64(1)\nmemory usage: 137.9 KB\n"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df['target'].value_counts()",
"_____no_output_____"
],
[
"df['target'].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"## Standardization",
"_____no_output_____"
]
],
[
[
"X_features = df[df.columns.difference(['target'])]",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX = pd.DataFrame(sc.fit_transform(X_features),index=X_features.index,columns=X_features.columns)\n# Note: For the datasets with outliers, standardize using Robust Scaler",
"_____no_output_____"
]
],
[
[
"Here is an explanation of standardization and normalization: https://www.statisticshowto.com/probability-and-statistics/normal-distributions/normalized-data-normalization/#:~:text=Normalization%20vs.&text=The%20terms%20normalization%20and%20standardization,a%20standard%20deviation%20of%201.",
"_____no_output_____"
]
],
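[
[
"A small illustration (not part of the original analysis) of the difference: standardization rescales to mean 0 and standard deviation 1, while min-max normalization rescales to the [0, 1] range.\n\n```python\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\n\nvalues = np.array([[1.0], [2.0], [3.0], [10.0]])\nprint(StandardScaler().fit_transform(values).ravel())  # z-scores: mean 0, std 1\nprint(MinMaxScaler().fit_transform(values).ravel())    # rescaled to [0, 1]\n```",
"_____no_output_____"
]
],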
[
[
"X.describe()",
"_____no_output_____"
]
],
[
[
"## Train Test Split",
"_____no_output_____"
]
],
[
[
"y = df['target']",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)",
"_____no_output_____"
],
[
"X_train.shape",
"_____no_output_____"
],
[
"X_test.shape",
"_____no_output_____"
],
[
"y_train.shape",
"_____no_output_____"
],
[
"y_test.shape",
"_____no_output_____"
]
],
[
[
"## Logistic Regression",
"_____no_output_____"
]
],
[
[
"# Check default values\nLogisticRegression()",
"_____no_output_____"
],
[
"logistic = LogisticRegression(penalty='none', random_state=0).fit(X_train, y_train)\n# penalty='none' means no regularization is applied\n# C is the inverse of the regularization strength, so smaller number indicates stronger regularization.",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=logistic.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\nconfusion_matrix = confusion_matrix(y_test, logistic.predict(X_test))\ncmtx = pd.DataFrame(confusion_matrix, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 41 2\ntrue:yes 5 66\n"
],
[
"print(classification_report(y_test, logistic.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.89 0.95 0.92 43\n 1 0.97 0.93 0.95 71\n\n accuracy 0.94 114\n macro avg 0.93 0.94 0.94 114\nweighted avg 0.94 0.94 0.94 114\n\n"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"LogisticCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(logistic.coef_))], axis = 1)\nLogisticCoeff.columns=['Variable','Coefficient']\nLogisticCoeff['Coefficient_Abs']=LogisticCoeff['Coefficient'].apply(abs)\nLogisticCoeff.sort_values(by='Coefficient_Abs', ascending=False)",
"_____no_output_____"
]
],
[
[
"To learn more about the interpretation of the logistic regression coefficients, check out this article: https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-how-do-i-interpret-odds-ratios-in-logistic-regression/",
"_____no_output_____"
],
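[
"One way to read these coefficients (a sketch building on the LogisticCoeff table above): exponentiating a coefficient gives the odds ratio for a one-unit increase in that feature, which here is one standard deviation because the features were standardized.\n\n```python\nimport numpy as np\n\nLogisticCoeff['Odds_Ratio'] = np.exp(LogisticCoeff['Coefficient'])\nLogisticCoeff.sort_values(by='Coefficient_Abs', ascending=False).head(10)\n```",
"_____no_output_____"
],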
[
"## Ridge",
"_____no_output_____"
]
],
[
[
"ridge = LogisticRegression(penalty='l2', random_state=0).fit(X_train, y_train)\n# penalty='l2' means Ridge regularization is applied",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=ridge.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\nconfusion_matrix = confusion_matrix(y_test, ridge.predict(X_test))\ncmtx = pd.DataFrame(confusion_matrix, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 41 2\ntrue:yes 1 70\n"
],
[
"print(classification_report(y_test, ridge.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.98 0.95 0.96 43\n 1 0.97 0.99 0.98 71\n\n accuracy 0.97 114\n macro avg 0.97 0.97 0.97 114\nweighted avg 0.97 0.97 0.97 114\n\n"
],
[
"ridgeCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(ridge.coef_))], axis = 1)\nridgeCoeff.columns=['Variable','Coefficient']\nridgeCoeff['Coefficient_Abs']=ridgeCoeff['Coefficient'].apply(abs)\nridgeCoeff.sort_values(by='Coefficient_Abs', ascending=False)",
"_____no_output_____"
]
],
[
[
"## LASSO",
"_____no_output_____"
]
],
[
[
"lasso = LogisticRegression(penalty='l1', solver='liblinear', random_state=0).fit(X_train, y_train)\n# penalty='l1' means LASSO regularization is applied\n# solver is an algorithm to use in the optimization problem.\n# For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones.\n# For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes.\n# ‘newton-cg’, ‘lbfgs’, ‘sag’ and ‘saga’ handle L2 or no penalty\n# ‘liblinear’ and ‘saga’ also handle L1 penalty\n# ‘saga’ also supports ‘elasticnet’ penalty\n# ‘liblinear’ does not support setting penalty='none'",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=lasso.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\nconfusion_matrix = confusion_matrix(y_test, lasso.predict(X_test))\ncmtx = pd.DataFrame(confusion_matrix, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 42 1\ntrue:yes 2 69\n"
],
[
"from sklearn.metrics import confusion_matrix\nconfusion_matrix = confusion_matrix(y_test, ridge.predict(X_test))\ncmtx = pd.DataFrame(confusion_matrix, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 41 2\ntrue:yes 1 70\n"
],
[
"print(classification_report(y_test, lasso.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.95 0.98 0.97 43\n 1 0.99 0.97 0.98 71\n\n accuracy 0.97 114\n macro avg 0.97 0.97 0.97 114\nweighted avg 0.97 0.97 0.97 114\n\n"
],
[
"lassoCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(lasso.coef_))], axis = 1)\nlassoCoeff.columns=['Variable','Coefficient']\nlassoCoeff['Coefficient_Abs']=lassoCoeff['Coefficient'].apply(abs)\nlassoCoeff.sort_values(by='Coefficient_Abs', ascending=False)",
"_____no_output_____"
]
],
[
[
"## Elastic Net",
"_____no_output_____"
]
],
[
[
"elasticNet = LogisticRegression(penalty='elasticnet', solver='saga', l1_ratio=0.5, random_state=0).fit(X_train, y_train)\n# solver is an algorithm to use in the optimization problem.\n# For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones.\n# For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes.\n# ‘newton-cg’, ‘lbfgs’, ‘sag’ and ‘saga’ handle L2 or no penalty\n# ‘liblinear’ and ‘saga’ also handle L1 penalty\n# ‘saga’ also supports ‘elasticnet’ penalty\n# ‘liblinear’ does not support setting penalty='none'\n\n# l1_ratio: The Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1. Only used if penalty='elasticnet'`. Setting ``l1_ratio=0 is equivalent to \n# using penalty='l2', while setting l1_ratio=1 is equivalent to using penalty='l1'. For 0 < l1_ratio <1, the penalty is a combination of L1 and L2.",
"/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_sag.py:330: ConvergenceWarning: The max_iter was reached which means the coef_ did not converge\n \"the coef_ did not converge\", ConvergenceWarning)\n"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=elasticNet.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\nconfusion_matrix = confusion_matrix(y_test, elasticNet.predict(X_test))\ncmtx = pd.DataFrame(confusion_matrix, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 42 1\ntrue:yes 1 70\n"
],
[
"print(classification_report(y_test, elasticNet.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.98 0.98 0.98 43\n 1 0.99 0.99 0.99 71\n\n accuracy 0.98 114\n macro avg 0.98 0.98 0.98 114\nweighted avg 0.98 0.98 0.98 114\n\n"
],
[
"elasticNetCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(elasticNet.coef_))], axis = 1)\nelasticNetCoeff.columns=['Variable','Coefficient']\nelasticNetCoeff['Coefficient_Abs']=elasticNetCoeff['Coefficient'].apply(abs)\nelasticNetCoeff.sort_values(by='Coefficient_Abs', ascending=False)",
"_____no_output_____"
]
],
[
[
"## KNN",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import KNeighborsClassifier \nknn = KNeighborsClassifier().fit(X_train, y_train)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=knn.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\nconfusion_matrix = confusion_matrix(y_test, knn.predict(X_test))\ncmtx = pd.DataFrame(confusion_matrix, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 40 3\ntrue:yes 3 68\n"
],
[
"print(classification_report(y_test, knn.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.93 0.93 0.93 43\n 1 0.96 0.96 0.96 71\n\n accuracy 0.95 114\n macro avg 0.94 0.94 0.94 114\nweighted avg 0.95 0.95 0.95 114\n\n"
]
],
[
[
"KNN does not have coefficients: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html\n\nA permutation-importance sketch for ranking features is shown below.",
"_____no_output_____"
],
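[
"Since KNN exposes no coefficients or feature importances, permutation importance is one hedged alternative for ranking features: it measures how much the test score drops when each feature is shuffled (requires scikit-learn 0.22 or newer).\n\n```python\nfrom sklearn.inspection import permutation_importance\n\nperm = permutation_importance(knn, X_test, y_test, n_repeats=10, random_state=0)\nknnImportance = pd.DataFrame({'Variable': X.columns, 'Importance': perm.importances_mean})\nknnImportance.sort_values(by='Importance', ascending=False).head(10)\n```",
"_____no_output_____"
],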
[
"## SVM",
"_____no_output_____"
]
],
[
[
"from sklearn.svm import LinearSVC\nfrom sklearn.calibration import CalibratedClassifierCV\n# LinearSVC does not provide predict_proba, but scikit-learn provides\n# CalibratedClassifierCV, which can be used to add probability output to\n# LinearSVC or any other classifier that implements the decision_function method",
"_____no_output_____"
],
[
"svm = LinearSVC(random_state=42)\nsvm.fit(X_train, y_train)\nclf = CalibratedClassifierCV(svm)  # cross-validated calibration adds predict_proba on top of the SVM\nclf.fit(X_train, y_train)\ny_proba = clf.predict_proba(X_test)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=clf.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, svm.predict(X_test))\ncmtx = pd.DataFrame(cm, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 41 2\ntrue:yes 3 68\n"
],
[
"print(classification_report(y_test, svm.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.93 0.95 0.94 43\n 1 0.97 0.96 0.96 71\n\n accuracy 0.96 114\n macro avg 0.95 0.96 0.95 114\nweighted avg 0.96 0.96 0.96 114\n\n"
],
[
"svmCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(svm.coef_))], axis = 1)\nsvmCoeff.columns=['Variable','Coefficient']\nsvmCoeff['Coefficient_Abs']=svmCoeff['Coefficient'].apply(abs)\nsvmCoeff.sort_values(by='Coefficient_Abs', ascending=False)",
"_____no_output_____"
]
],
[
[
"## Decision Tree",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\ntreeClf = DecisionTreeClassifier(random_state=0)\ntreeClf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=treeClf.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_test, treeClf.predict(X_test))\ncmtx = pd.DataFrame(cm, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 39 4\ntrue:yes 3 68\n"
],
[
"print(classification_report(y_test, treeClf.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.93 0.91 0.92 43\n 1 0.94 0.96 0.95 71\n\n accuracy 0.94 114\n macro avg 0.94 0.93 0.93 114\nweighted avg 0.94 0.94 0.94 114\n\n"
],
[
"treeCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(treeClf.feature_importances_))], axis = 1)\ntreeCoeff.columns=['Variable','Feature_Importance']\ntreeCoeff.sort_values(by='Feature_Importance', ascending=False)",
"_____no_output_____"
],
[
"from sklearn.tree import export_graphviz\nimport graphviz",
"_____no_output_____"
],
[
"export_graphviz(treeClf, out_file=\"mytree.dot\")\nwith open(\"mytree.dot\") as f:\n dot_graph = f.read()\ngraphviz.Source(dot_graph)",
"_____no_output_____"
]
],
[
[
"## Random Forest",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\nrf = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1).fit(X_train,y_train)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=rf.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_test, rf.predict(X_test))\ncmtx = pd.DataFrame(cm, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 40 3\ntrue:yes 1 70\n"
],
[
"print(classification_report(y_test, rf.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.98 0.93 0.95 43\n 1 0.96 0.99 0.97 71\n\n accuracy 0.96 114\n macro avg 0.97 0.96 0.96 114\nweighted avg 0.97 0.96 0.96 114\n\n"
],
[
"rfCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(rf.feature_importances_))], axis = 1)\nrfCoeff.columns=['Variable','Feature_Importance']\nrfCoeff.sort_values(by='Feature_Importance', ascending=False)",
"_____no_output_____"
]
],
[
[
"To see how the feature importances are calculated, check out this article: https://mljar.com/blog/feature-importance-in-random-forest/\n\nA quick look at the per-tree spread of these importances is sketched below.",
"_____no_output_____"
],
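[
"As a quick look under the hood (an assumption about scikit-learn's implementation: the forest-level importance is essentially the average of the per-tree impurity importances), the spread across trees can be inspected directly:\n\n```python\nimport numpy as np\n\nper_tree = np.array([tree.feature_importances_ for tree in rf.estimators_])\nrfSpread = pd.DataFrame({'Variable': X.columns,\n                         'Mean_Importance': per_tree.mean(axis=0),\n                         'Std_Across_Trees': per_tree.std(axis=0)})\nrfSpread.sort_values(by='Mean_Importance', ascending=False).head(10)\n```",
"_____no_output_____"
],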
[
"## Extra Tree",
"_____no_output_____"
],
[
"Extra Tree is also called Extremely Randomized Trees. Compared to Random Forest:\n\n\n1. When choosing variables at a split, samples are drawn from the entire training set instead of a bootstrap sample of the training set.\n2. Splits are chosen completely at random from the range of values in the sample at each split.\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import ExtraTreesClassifier",
"_____no_output_____"
],
[
"et = ExtraTreesClassifier(n_estimators=200, random_state=0, n_jobs=-1).fit(X_train,y_train)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=et.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_test, et.predict(X_test))\ncmtx = pd.DataFrame(cm, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 41 2\ntrue:yes 1 70\n"
],
[
"print(classification_report(y_test, et.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.98 0.95 0.96 43\n 1 0.97 0.99 0.98 71\n\n accuracy 0.97 114\n macro avg 0.97 0.97 0.97 114\nweighted avg 0.97 0.97 0.97 114\n\n"
],
[
"etCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(et.feature_importances_))], axis = 1)\netCoeff.columns=['Variable','Feature_Importance']\netCoeff.sort_values(by='Feature_Importance', ascending=False)",
"_____no_output_____"
]
],
[
[
"## Gradient Boosting Machine (GBM)",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import GradientBoostingClassifier\ngb = GradientBoostingClassifier(n_estimators=200, random_state=42).fit(X_train,y_train)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=gb.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_test, gb.predict(X_test))\ncmtx = pd.DataFrame(cm, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 40 3\ntrue:yes 1 70\n"
],
[
"print(classification_report(y_test, gb.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.95 0.93 0.94 43\n 1 0.96 0.97 0.97 71\n\n accuracy 0.96 114\n macro avg 0.96 0.95 0.95 114\nweighted avg 0.96 0.96 0.96 114\n\n"
],
[
"gbCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(gb.feature_importances_))], axis = 1)\ngbCoeff.columns=['Variable','Feature_Importance']\ngbCoeff.sort_values(by='Feature_Importance', ascending=False)",
"_____no_output_____"
]
],
[
[
"## XGBOOST",
"_____no_output_____"
]
],
[
[
"from xgboost import XGBClassifier\nxgb = XGBClassifier().fit(X_train, y_train)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=xgb.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_test, xgb.predict(X_test))\ncmtx = pd.DataFrame(cm, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 40 3\ntrue:yes 2 69\n"
],
[
"print(classification_report(y_test, xgb.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.95 0.93 0.94 43\n 1 0.96 0.97 0.97 71\n\n accuracy 0.96 114\n macro avg 0.96 0.95 0.95 114\nweighted avg 0.96 0.96 0.96 114\n\n"
],
[
"xgbCoeff = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(xgb.feature_importances_))], axis = 1)\nxgbCoeff.columns=['Variable','Feature_Importance']\nxgbCoeff.sort_values(by='Feature_Importance', ascending=False)",
"_____no_output_____"
]
],
[
[
"XGBoost vs. GBM\n\n\n* Regularization\n* Parallel computing\n* Handles missing values (a sketch is shown below)\n* Second-order gradients (second partial derivatives of the loss function) provide more information about the direction of the gradient\n\n",
"_____no_output_____"
],
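[
"The missing-value handling can be demonstrated with a small sketch (the roughly 10% of values knocked out below are artificial, purely for illustration): XGBoost fits rows containing NaN directly by learning a default direction at each split, whereas many other estimators require imputation first.\n\n```python\nimport numpy as np\n\nrng = np.random.RandomState(0)\nX_missing = X_train.mask(rng.rand(*X_train.shape) < 0.1)  # knock out ~10% of entries\nxgb_nan = XGBClassifier().fit(X_missing, y_train)\nprint(xgb_nan.score(X_test, y_test))  # accuracy despite the injected NaNs\n```",
"_____no_output_____"
],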
[
"## Naive Bayes Model",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import GaussianNB\nnb = GaussianNB().fit(X_train, y_train)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=nb.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_test, nb.predict(X_test))\ncmtx = pd.DataFrame(cm, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 40 3\ntrue:yes 1 70\n"
],
[
"print(classification_report(y_test, nb.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.98 0.93 0.95 43\n 1 0.96 0.99 0.97 71\n\n accuracy 0.96 114\n macro avg 0.97 0.96 0.96 114\nweighted avg 0.97 0.96 0.96 114\n\n"
]
],
[
[
"Training is fast because only the prior probability of each class and the probability of each input (x) value given each class need to be calculated. No coefficients need to be fitted by optimization procedures.",
"_____no_output_____"
],
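[
"The fitted GaussianNB object stores those quantities directly, which can be checked (attribute naming is version-dependent: the per-class variances are var_ in newer scikit-learn releases and sigma_ in older ones):\n\n```python\nprint(nb.class_prior_)   # estimated prior probability of each class\nprint(nb.theta_.shape)   # per-class feature means\nvariances = getattr(nb, 'var_', None)\nif variances is None:\n    variances = nb.sigma_  # fallback for older scikit-learn versions\nprint(variances.shape)   # per-class feature variances\n```",
"_____no_output_____"
],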
[
"# Stacking Model",
"_____no_output_____"
],
[
"Stacking is an ensemble method that uses the predictions of individual machine learning models as the input of a meta model.",
"_____no_output_____"
]
],
[
[
"estimator_list = [\n ('knn',knn),\n ('LASSO',lasso),\n ('Ridge',ridge),\n ('Random Forest',rf),\n ('Extra Tree', et),\n ('Gradient Boosting', gb),\n ('XGBoost', xgb),\n ('Naive Bayes',nb) ]",
"_____no_output_____"
],
[
"stack_model = StackingClassifier(\n estimators=estimator_list, final_estimator=LogisticRegression()\n)",
"_____no_output_____"
],
[
"stack_model.fit(X_train, y_train)",
"_____no_output_____"
],
[
"#ROC/AUC Curve\nfrom sklearn import metrics\ny_test_prob=stack_model.predict_proba(X_test)[:,1]\nfpr,tpr, _=metrics.roc_curve(y_test,y_test_prob)\nauc=metrics.roc_auc_score(y_test,y_test_prob)\nplt.plot(fpr,tpr,label=\"area=\"+str(auc))\nplt.legend(loc=4)\nplt.show()",
"_____no_output_____"
],
[
"log_loss(y_test,y_test_prob)",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_test, stack_model.predict(X_test))\ncmtx = pd.DataFrame(cm, index=['true:no', 'true:yes'], columns=['pred:no', 'pred:yes'])\nprint(cmtx)",
" pred:no pred:yes\ntrue:no 41 2\ntrue:yes 1 70\n"
],
[
"print(classification_report(y_test, stack_model.predict(X_test)))",
" precision recall f1-score support\n\n 0 0.98 0.95 0.96 43\n 1 0.97 0.99 0.98 71\n\n accuracy 0.97 114\n macro avg 0.97 0.97 0.97 114\nweighted avg 0.97 0.97 0.97 114\n\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbc2a4e1a940fa28d5e9f637e5ea1ac0df15fb7 | 32,575 | ipynb | Jupyter Notebook | ImageProcessing/BrighterFatterCorrection.ipynb | herjy/StackClub | 1a8639b68f7bf53a8b90406ee2001d7b295eeaa8 | [
"MIT"
] | 3 | 2020-11-24T22:34:17.000Z | 2020-11-25T01:10:42.000Z | ImageProcessing/BrighterFatterCorrection.ipynb | herjy/StackClub | 1a8639b68f7bf53a8b90406ee2001d7b295eeaa8 | [
"MIT"
] | 2 | 2020-11-19T22:45:14.000Z | 2020-11-20T00:47:12.000Z | ImageProcessing/BrighterFatterCorrection.ipynb | herjy/StackClub | 1a8639b68f7bf53a8b90406ee2001d7b295eeaa8 | [
"MIT"
] | 1 | 2020-11-24T22:33:11.000Z | 2020-11-24T22:33:11.000Z | 44.623288 | 875 | 0.645157 | [
[
[
"# Analysis of Beam Simulator Images and Brighter-fatter Correction\n<br>Owner(s): **Andrew Bradshaw** ([@andrewkbradshaw](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@andrewkbradshaw))\n<br>Last Verified to Run: **2019-08-14**\n<br>Verified Stack Release: **18.1.0**\n\nThis notebook demonstrates the [brighter-fatter systematic error](https://arxiv.org/abs/1402.0725) on images of stars and galaxies illuminated on an ITL-3800C-002 CCD at the [UC Davis LSST beam simulator laboratory](https://arxiv.org/abs/1411.5667). Using a series of images at increasing exposure times, we demonstrate the broadening of image profiles on DM stack shape measurements, and a [possible correction method](https://arxiv.org/abs/1711.06273) which iteratively applies a kernel to restore electrons to the pixels from which they were deflected. To keep things simple, for now we skip most DM stack instrument signature removal (ISR) and work on a subset of images which are already processed arrays (500x500) of electrons.\n\n### Learning Objectives:\n\nAfter working through this tutorial you should be able to: \n1. Characterize and measure objects (stars/galaxies) in LSST beam simulator images\n2. Test the Brighter-Fatter kernel correction method on those images\n3. Build your own tests of stack ISR algorithms\n\n### Logistics\nThis notebook is intended to be runnable on `lsst-lsp-stable.ncsa.illinois.edu` from a local git clone of https://github.com/LSSTScienceCollaborations/StackClub.\n\n## Set-up",
"_____no_output_____"
]
],
[
[
"# What version of the Stack are we using?\n! echo $HOSTNAME\n! eups list -s | grep lsst_distrib",
"_____no_output_____"
],
[
"%pwd",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import LogNorm\nfrom itertools import cycle\nfrom astropy.io import fits\nimport time,glob,os\n\n\n# if running stack v16.0, silence a long matplotlib Agg warning with:\n#import warnings\n#warnings.filterwarnings(\"ignore\", category=UserWarning)\n\n%matplotlib inline\n\n# What version of the Stack am I using?\n! echo $HOSTNAME\n! eups list -s lsst_distrib\n\n# make a directory to write the catalogs\nusername=os.environ.get('USERNAME')\ncat_dir='/home/'+username+'/DATA/beamsim/'\nif not os.path.exists(cat_dir):\n ! mkdir /home/$USER/DATA/beamsim/",
"_____no_output_____"
]
],
[
[
"## Step 1: Read in an image\nCut-outs of beam simulator star/galaxy images have been placed in the shared data directory at `/project/shared/data/beamsim/bfcorr/`. We skip (for now) most of the instrument signature removal (ISR) steps because these are preprocessed images (bias subtracted, gain corrected). We instead start by reading in one of those `.fits` files and making an image plane `afwImage.ExposureF` as well as a variance plane (based upon the image), which is then ready for characterization and calibration in the following cells.",
"_____no_output_____"
]
],
[
[
"import lsst.afw.image as afwImage\nfrom lsst.ip.isr.isrFunctions import updateVariance\n\n# where the data lives, choosing one image to start\nimnum=0 # for this dataset, choose 0-19 as an example\nfitsglob='/project/shared/data/beamsim/bfcorr/*part.fits'\nfitsfilename = np.sort(glob.glob(fitsglob))[imnum] \n#fitsfilename ='/home/sarujin/testdata/ITL-3800C-002_spot_spot_400_20171108114719whole.fits'\n# Read in a single image to an afwImage.ImageF object\nimage_array=afwImage.ImageF.readFits(fitsfilename)\nimage = afwImage.ImageF(image_array)\nexposure = afwImage.ExposureF(image.getBBox())\nexposure.setImage(image)\nhdr=fits.getheader(fitsfilename) # the header has some useful info in it\nprint(\"Read in \",fitsfilename.split('/')[-1])\n\n# Set the variance plane using the image plane via updateVariance function\ngain = 1.0 # because these images are already gain corrected\nreadNoise = 10.0 # in electrons\nupdateVariance(exposure.maskedImage, gain, readNoise)\n\n# Another way of setting variance and/or masks?\n#mask = afwImage.makeMaskFromArray(np.zeros((4000,4072)).astype('int32'))\n#variance = afwImage.makeImageFromArray((readNoise**2 + image_array.array())\n#masked_image = afwImage.MaskedImageF(image, mask, variance)\n#exposure = afwImage.ExposureF(masked_image)",
"_____no_output_____"
]
],
[
[
"Now visualize the image and its electron distribution using matplotlib. Things to note: 1) the array is (purposefully) tilted with respect to the pixel grid, 2) most pixel values are at the background/sky level (a function of the mask opacity and illumination), but there is a pileup of counts around ~200k electrons indicating full well and saturation in some of the brightest pixels of the image",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(12,5)),plt.subplots_adjust(wspace=.3)\nplt.suptitle('Star/galaxy beam sim image and histogram \\n'+fitsfilename.split('/')[-1])\n\nplt.subplot(121)\nplt.imshow(exposure.image.array,vmax=1e3,origin='lower')\nplt.colorbar(label='electrons')\n\nplt.subplot(122)\nplt.hist(exposure.image.array.flatten(),bins=1000,histtype='step')\nplt.yscale('log')#,plt.xscale('log')\nplt.xlabel('Number of electrons in pixel'),plt.ylabel('Number of pixels')",
"_____no_output_____"
]
],
[
[
"## Step 2: Perform image characterization and initial measurement\nWe now perform a base-level characterization of the image using the stack. We set some configuration settings which are specific to our sestup which has a very small optical PSF, setting a PSF size and turning off some other aspects such as cosmic ray rejection because of this.",
"_____no_output_____"
]
],
[
[
"from lsst.pipe.tasks.characterizeImage import CharacterizeImageTask, CharacterizeImageConfig\nimport lsst.meas.extensions.shapeHSM\n\n# first set a few configs that are specific to our beam simulator data\ncharConfig = CharacterizeImageConfig()\n#this set the fwhm of the simple PSF to that of optics\ncharConfig.installSimplePsf.fwhm = .5\ncharConfig.doMeasurePsf = False\ncharConfig.doApCorr = False # necessary\ncharConfig.repair.doCosmicRay = False \n# we do have some cosmic rays, but we also have subpixel mask features and an undersampled PSF\ncharConfig.detection.background.binSize = 10 # worth playing around with\n#charConfig.background.binSize = 50\ncharConfig.detection.minPixels = 5 # also worth playing around with\n\n# Add the HSM (Hirata/Seljak/Mandelbaum) adaptive moments shape measurement plugin\ncharConfig.measurement.plugins.names |= [\"ext_shapeHSM_HsmSourceMoments\"]\n# to configure hsm you would do something like\n# charConfig.measurement.plugins[\"ext_shapeHSM_hsmSourceMoments\"].addFlux = True\n# (see sfm.py in meas_base for all the configuration options for the measurement task)\n\ncharTask = CharacterizeImageTask(config=charConfig)\n\ncharTask.run?\n# use charTask.run instead of characterize for v16.0+22\n# could also perform similar functions with processCcdTask.run()",
"_____no_output_____"
],
[
"# Display which plugins are being used for measurement\ncharConfig.measurement.plugins.active ",
"_____no_output_____"
],
[
"tstart=time.time()\ncharResult = charTask.run(exposure) # charTask.run(exposure) stack v16.0+22\nprint(\"Characterization took \",str(time.time()-tstart)[:4],\" seconds\")\nprint(\"Detected \",len(charResult.sourceCat),\" objects \")\n\nplt.title('X/Y locations of detections')\nplt.plot(charResult.sourceCat['base_SdssCentroid_x'],charResult.sourceCat['base_SdssCentroid_y'],'r.')",
"_____no_output_____"
]
],
[
[
"This figure illustrates the centroids of detections made during characterization. Note that not all objects have been detected in this first round.",
"_____no_output_____"
]
],
[
[
"# display some of the source catalog measurements filtered by searchword\nsearchword='flux'\nfor name in charResult.sourceCat.schema.getOrderedNames():\n if searchword in name.lower():\n print(name)",
"_____no_output_____"
],
[
"# Looking at the mask plane, which started off as all zeros\n# and now has some values of 2^5\nmaskfoo=exposure.mask\nprint(\"Unique mask plane values: \",np.unique(maskfoo.array))\nprint(\"Mask dictionary entries: \",maskfoo.getMaskPlaneDict())\n\nplt.figure(figsize=(12,5)),plt.subplots_adjust(wspace=.3)\nplt.subplot(121)\nplt.imshow(maskfoo.array,origin='lower'),plt.colorbar()\nplt.subplot(122)\nplt.hist(maskfoo.array.flatten()),plt.xlabel('Mask plane values')\nplt.yscale('log')",
"_____no_output_____"
]
],
[
[
"The above figures illustrate the new mask plane of the exposure object which was added and modified during characterization. Values of 0 and 5 are now seen, which correspond to unassociated pixels and those which are \"detected\".",
"_____no_output_____"
],
[
"## Step 3: Further image calibration and measurement\nThis builds on the exposure output from characterization, using the new mask plane as well as the source catalog. Similar to the characterization, we turn off some processing which is suited to our particular setup. For this dataset a calibrate task is almost unncessary (as it is not on-sky data and we don't have a reference catalog), however, it does provide a background-subtracted image and for completeness it is included here. The steps in calibration that are turned on/off can be seen by printing the calibration config object.",
"_____no_output_____"
]
],
[
[
"from lsst.pipe.tasks.calibrate import CalibrateTask, CalibrateConfig\n\ncalConfig = CalibrateConfig()\ncalConfig.doAstrometry = False\ncalConfig.doPhotoCal = False\ncalConfig.doApCorr = False\ncalConfig.doDeblend = False # these are well-separated objects, deblending adds time & trouble\n# these images should have a uniform background, so measure it\n# on scales which are larger than the objects\ncalConfig.detection.background.binSize = 50\ncalConfig.detection.minPixels = 5\ncalConfig.measurement.plugins.names |= [\"ext_shapeHSM_HsmSourceMoments\"]\n# to configure hsm you would do something like\n#charConfig.measurement.plugins[\"ext_shapeHSM_hsmSourceMoments\"].addFlux = True\n\ncalTask = CalibrateTask(config= calConfig, icSourceSchema=charResult.sourceCat.schema)\n\n#calTask.run? # for stack v16.0+22 \ncalTask.run?",
"_____no_output_____"
],
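[
"# (added sketch) as noted in the markdown above, the calibration steps that are turned on/off can be\n# inspected from the config object; one way to list just the boolean 'do*' switches, assuming the\n# standard lsst.pex.config interface (toDict), is:\nfor name, value in calConfig.toDict().items():\n    if name.startswith('do'):\n        print(name, '=', value)",
"_____no_output_____"
],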
[
"tstart=time.time()\n# for stack v16.0+22, change to calTask.run(charResult.exposure)\ncalResult = calTask.run(charResult.exposure, background=charResult.background,\n icSourceCat = charResult.sourceCat)\n\nprint(\"Calibration took \",str(time.time()-tstart)[:4],\" seconds\")\nprint(\"Detected \",len(calResult.sourceCat),\" objects \")",
"_____no_output_____"
]
],
[
[
"Below we look at some of the measurements in the source catalog which has been attached to the calibration result. We also save the source catalog to `$fitsfilename.cat` in `/home/$USER/beamsim/`, which was created in the first cell. This will allow the results from each image to be read in after these measurements are performed on each image.",
"_____no_output_____"
]
],
[
[
"print(\"Catalogs will be saved to: \"+cat_dir)",
"_____no_output_____"
],
[
"src=calResult.sourceCat #.copy(deep=True) ?\n#print(src.asAstropy)\n\n# catalog directory\nsrc.writeFits(cat_dir+fitsfilename.split('/')[-1].replace('.fits','.cat'))\n# read back in and access via:\n#catalog=fits.open(fitsfilename+'.cat')\n#catalog[1].data['base_SdssShape_xx'] etc.\n\npar_names=['base_SdssShape_xx','base_SdssShape_yy','base_SdssShape_instFlux']\npar_mins=[0,0,0]\npar_maxs=[5,5,1e6]\nn_par=len(par_names)\n\n\nplt.figure(figsize=(5*n_par,6)),plt.subplots_adjust(wspace=.25)\nfor par_name,par_min,par_max,i in zip(par_names,par_mins,par_maxs,range(n_par)):\n plt.subplot(2,n_par,i+1)\n plt.scatter(src['base_SdssCentroid_x'],src['base_SdssCentroid_y'],c=src[par_name],marker='o',vmin=par_min,vmax=par_max)\n plt.xlabel('X'),plt.ylabel('Y'),plt.colorbar(label=par_name)\n\n\n plt.subplot(2,n_par,n_par+i+1)\n plt.hist(src[par_name],range=[par_min,par_max],bins=20,histtype='step')\n plt.xlabel(par_name)",
"_____no_output_____"
]
],
[
[
"The above figures show the 2-dimensional distribution of detected objects measured parameter values and their histogram. By default, two shape parameters (in pixels) and a flux measurement (in electrons) are shown, but this can be modified through the `par_names` variable in the cell above. ",
"_____no_output_____"
],
[
"## Step 4: Apply the brighter-fatter kernel correction to an image\nThis brighter fatter correction method takes in a \"kernel\" (derived from theory or flat fields) which models the broadening of incident image profiles assuming the pixel boundary displacement can be represented as the gradient of a scalar field. Given a kernel and this assumption, the incident image profile can in theory be reconstructed using an iterative process, which we test here using our beam simulator images. See [this paper](https://arxiv.org/abs/1711.06273) and the IsrTask docstring below for more details about the theory and its assumptions. The kernel used here is not generated by the stack but rather through similar code which was written at UC Davis by Craig Lage. Future additions to the notebook will use a stack-generated kernel when available.",
"_____no_output_____"
]
],
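[
[
"# (added sketch) a schematic, numpy-only illustration of the iterative correction idea described above.\n# This is NOT the Stack's implementation; use isr.brighterFatterCorrection (next cells) for real work.\n# The kernel convolved with the image is treated as a scalar potential phi, and charge is moved back\n# according to the divergence of (image * grad phi).\nimport numpy as np\nfrom scipy.signal import fftconvolve\n\ndef toy_bf_correct(image, kernel, maxiter=20, threshold=10.0):\n    corrected = image.copy()\n    for _ in range(maxiter):\n        phi = fftconvolve(corrected, kernel, mode='same')\n        gx, gy = np.gradient(phi)\n        delta = 0.5 * (np.gradient(corrected * gx, axis=0) + np.gradient(corrected * gy, axis=1))\n        previous = corrected.copy()\n        corrected = image + delta\n        if np.max(np.abs(corrected - previous)) < threshold:\n            break\n    return corrected",
"_____no_output_____"
]
],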
[
[
"#from lsst.ip.isr.isrTask import IsrTask # brighterFatterCorrection lives here\n#isr=IsrTask()\nimport lsst.ip.isr as isr\n\npre_bfcorr_exposure=exposure.clone() #save a copy of the pre-bf corrected image\n\nisr.brighterFatterCorrection?",
"_____no_output_____"
],
[
"# Read in the kernel (determined from e.g. simulations or flat fields)\nkernel=fits.getdata('/project/shared/data/beamsim/bfcorr/BF_kernel-ITL_3800C_002.fits')\nexposure=pre_bfcorr_exposure.clone() # save the pre-bf correction image\n\n# define the maximum number of iterations and threshold for differencing convergence (e-)\nbf_maxiter,bf_threshold=20,10\n\n# Perform the correction\ntstart=time.time()\nisr.brighterFatterCorrection(exposure,kernel,bf_maxiter,bf_threshold,False)\nprint(\"Brighter-fatter correction took\",time.time()-tstart,\" seconds\")\n#takes 99 seconds for 4kx4k exposure, 21x21 kernel, 20 iterations, 10 thresh\n\n# Plot kernel and image differences\nplt.figure(),plt.title('BF kernel')\nplt.imshow(kernel),plt.colorbar()\n\nimagediff=(pre_bfcorr_exposure.image.array-exposure.image.array)\nimagediffpct=np.sum(imagediff)/np.sum(pre_bfcorr_exposure.image.array)*100.\nprint(str(imagediffpct)[:5],' percent change in flux')\n\nplt.figure(figsize=(16,10))\nplt.subplot(231),plt.title('Before')\nplt.imshow(pre_bfcorr_exposure.image.array,vmin=0,vmax=1e3,origin='lower'),plt.colorbar()\nplt.subplot(232),plt.title('After')\nplt.imshow(exposure.image.array,vmin=0,vmax=1e3,origin='lower'),plt.colorbar()\nplt.subplot(233),plt.title('Before - After')\nvmin,vmax=-10,10\nplt.imshow(imagediff,vmin=vmin,vmax=vmax,origin='lower'),plt.colorbar()\n\nnbins=1000\nplt.subplot(234)\nplt.hist(pre_bfcorr_exposure.image.array.flatten(),bins=nbins,histtype='step',label='before')\nplt.yscale('log')\nplt.subplot(235)\nplt.hist(exposure.image.array.flatten(),bins=nbins,histtype='step',label='after')\nplt.yscale('log')\nplt.subplot(236)\nplt.hist(imagediff.flatten(),bins=nbins,histtype='step',label='difference')\nplt.yscale('log')\nplt.legend()\nplt.xlabel('Pixel values [e-]')\n",
"_____no_output_____"
],
[
"kernel.sum(),kernel.shape",
"_____no_output_____"
]
],
[
[
"The above figures illustrate the way that the brighter-fatter correction works: by iteratively convolving a physically-motivated kernel with the electron image to redistribute charge from the periphery to the center of objects.",
"_____no_output_____"
],
[
"## Step 5: Run the above steps (with and without brighter-fatter correction) on 20 exposures of increasing exposure time\nHere we re-do all of the previous work, which was done with one image, on a series of images with increasing exposure times. We will generate this series of catalogs both with and without applying the brighter-fatter correction, allowing us to test the fidelity of the brighter-fatter correction with our beam simulator images. To do this in a simple way, we create a function to perform all of the above tasks, called `make_bf_catalogs`, which only takes in a list of filenames but uses some of the same global configuration values (`charTask.config` and `calTask.config`) which we set above.",
"_____no_output_____"
]
],
[
[
"fitsglob='/project/shared/data/beamsim/bfcorr/*part.fits' # fits filenames to read in\nfitsfilelist=np.sort(glob.glob(fitsglob))\n\ndef make_bf_catalogs(fitsfilelist,do_bf_corr=False,do_verbose_print=True):\n for fitsfilename in fitsfilelist:\n tstart=time.time()\n image_array=afwImage.ImageF.readFits(fitsfilename)\n image = afwImage.ImageF(image_array)\n\n exposure = afwImage.ExposureF(image.getBBox())\n exposure.setImage(image)\n\n updateVariance(exposure.maskedImage, gain, readNoise)\n \n # start the characterization and measurement, \n # optionally beginning with the brighter-fatter correction\n if do_bf_corr:\n isr.brighterFatterCorrection(exposure,kernel,bf_maxiter,bf_threshold,False)\n # print(\"Brighter-fatter correction took\",str(time.time()-tstart)[:4],\" seconds\")\n # for stack v16.0+22 use charTask.run() and calTask.run()\n charResult = charTask.run(exposure) \n calResult = calTask.run(charResult.exposure, background=charResult.background,\n icSourceCat = charResult.sourceCat)\n src=calResult.sourceCat\n\n # write out the source catalog, appending -bfcorr for the corrected catalogs\n catfilename=cat_dir+fitsfilename.replace('.fits','.cat').split('/')[-1]#\n if do_bf_corr: catfilename=catfilename.replace('.cat','-bfcorr.cat')\n src.writeFits(catfilename)\n\n if do_verbose_print: \n print(fitsfilename.split('/')[-1],\" char. & calib. took \",\n str(time.time()-tstart)[:4],\" seconds to measure \",\n len(calResult.sourceCat),\" objects \")\n\n \n# Run the catalog maker on the series of uncorrected and corrected images\nmake_bf_catalogs(fitsfilelist,do_bf_corr=True,do_verbose_print=True)\nmake_bf_catalogs(fitsfilelist,do_bf_corr=False,do_verbose_print=False)",
"_____no_output_____"
]
],
[
[
"Now read in those catalogs, both corrected and uncorrected (this could be improved with e.g. pandas)",
"_____no_output_____"
]
],
[
[
"cat_arr = []\ncatglob=cat_dir+'ITL*part.cat' # uncorrected catalogs\nfor catfilename in np.sort(glob.glob(catglob)): cat_arr.append(fits.getdata(catfilename))\n\nbf_cat_arr = []\ncatglob=cat_dir+'ITL*part-bfcorr.cat' # corrected catalogs\nfor catfilename in np.sort(glob.glob(catglob)): bf_cat_arr.append(fits.getdata(catfilename))\nncats=len(cat_arr)",
"_____no_output_____"
],
[
"# Show issues with multiply detected sources which which we remedy with matching rejection\nfor i in range(ncats):\n xfoo,yfoo=cat_arr[i]['base_SdssCentroid_x'],cat_arr[i]['base_SdssCentroid_y']\n plt.plot(xfoo,yfoo,'o',alpha=.4)\nplt.title('Centroids of sequential exposures')",
"_____no_output_____"
]
],
[
[
"The above image illustrates a problem with comparing images with different exposure times. Namely, that different sets of objects may be detected. To remedy this, we use a fiducial frame as reference and simply match the catalogs by looking for *single* object matches within a specified distance of those fiducial objects. We then collect a shape measurement (e.g. `base_SdssShape_xx/yy`) for that object as well as a brightness measurement (e.g. `base_SdssShape_flux`) to test for a trend in size vs. brightness.",
"_____no_output_____"
]
],
[
[
"fidframe=10 # frame number to compare to\nmaxdist=.5 # max distance to match objects between frames\n\n# choose which stack measurements to use for centroids and shape\n# +TODO use 'ext_shapeHSM_HsmSourceMoments_xx','ext_shapeHSM_HsmSourceMoments_yy'\ncen_param1,cen_param2='base_SdssCentroid_x','base_SdssCentroid_y'\nbf_param1,bf_param2='base_SdssShape_xx','base_SdssShape_yy'\nflux_param='base_CircularApertureFlux_25_0_instFlux'#'base_CircularApertureFlux_25_0_Flux' #'base_GaussianFlux_flux' # or or 'base_SdssShape_flux'\n\n# get the centroids (used for matching) from the fiducial frame \nx0s,y0s=cat_arr[fidframe][cen_param1],cat_arr[fidframe][cen_param2]\nnspots=len(x0s)\n\n# make an array to hold that number of objects and their centroid/shape/flux measurements\n# the 8 rows collect x/y centroid, x/y shape, x/y corrected shape, flux, and corrected flux\nbf_dat=np.empty((ncats,nspots),\n dtype=np.dtype([('x', float), ('y', float),('shapex', float), ('shapey', float),\n ('corrshapex', float), ('corrshapey', float),\n ('flux', float), ('corrflux', float)]))\nbf_dat[:]=np.nan # so that un-matched objects aren't plotted/used by default\n\n\n# loop over catalogs\nfor i in range(ncats):\n # get the centroids of objects in the bf-corrected and uncorrected images\n x1,y1=cat_arr[i][cen_param1],cat_arr[i][cen_param2]\n x1_bf,y1_bf=bf_cat_arr[i][cen_param1],bf_cat_arr[i][cen_param2]\n # loop over fiducial frame centroids to find matches\n for j in range(nspots): \n x0,y0=x0s[j],y0s[j] # fiducial centroid to match\n # find objects in both catalogs which are within maxdist\n bf_gd=np.where(np.sqrt((x1_bf-x0)**2+(y1_bf-y0)**2)<maxdist)[0]\n gd=np.where(np.sqrt((x1-x0)**2+(y1-y0)**2)<maxdist)[0]\n if (len(bf_gd)==1 & len(gd)==1): # only take single matches\n xx,yy=cat_arr[i][bf_param1][gd],cat_arr[i][bf_param2][gd] # centroids\n xx_bf,yy_bf=bf_cat_arr[i][bf_param1][bf_gd],bf_cat_arr[i][bf_param2][bf_gd] # sizes\n flux,flux_bf=cat_arr[i][flux_param][gd],bf_cat_arr[i][flux_param][bf_gd] # fluxes\n bf_dat[i,j]=x0,y0,xx,yy,xx_bf,yy_bf,flux,flux_bf # keep those above measurements",
"_____no_output_____"
]
],
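[
[
"# (added sketch) a more compact way to do the same single-match-within-radius lookup, using scipy's\n# cKDTree; purely illustrative, based on the x0s/y0s fiducial centroids and the x1/y1 arrays above\n# (x1/y1 hold the centroids of the last catalog processed in the loop)\nfrom scipy.spatial import cKDTree\n\ntree = cKDTree(np.column_stack([x1, y1]))\nneighbour_lists = tree.query_ball_point(np.column_stack([x0s, y0s]), r=maxdist)\nsingle_matches = {j: hits[0] for j, hits in enumerate(neighbour_lists) if len(hits) == 1}\nprint(len(single_matches), 'of', nspots, 'fiducial objects have exactly one match in that catalog')",
"_____no_output_____"
]
],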
[
[
"## Plot the brighter-fatter effect on those shape measurements and the corrected version\nAlongside stamps of each object, below we show the trend of X and Y sizes before and after brighter-fatter correction. By default this makes a dozen plots in as many seconds and saves them to the catalog directory.",
"_____no_output_____"
]
],
[
[
"np.arange(10,20)",
"_____no_output_____"
],
[
"sz=13 # stamp size\n\n# loop over some objects and save a summary figure\n# [0,6,12,18,23,35,44,46,52,56,69,71,114] are good indices to look with default values \n# or e.g. np.random.choice(np.arange(nspots),size=10)\nindexfoo=[12,14,29,34,41,56]\n\nfor index in indexfoo:\n plt.figure(figsize=(14,4)),plt.subplots_adjust(wspace=.3)\n \n # grab a postage stamp, integerizing the centroid and shipping\n # if it is near the edge, +TODO in a stackly manner\n xc,yc=bf_dat['x'][fidframe,index].astype('int')+1,bf_dat['y'][fidframe,index].astype('int')+1\n if ((np.abs(xc-250)>250 - sz ) | (np.abs(yc-250)>250 - sz )): continue\n stamp=exposure.getImage().array[yc-sz:yc+sz,xc-sz:xc+sz]\n \n # show the stamp with log scale (1,max)\n plt.subplot(131),plt.title('stamp '+str(index).zfill(3)+' (log scale)')\n plt.imshow(stamp,origin='lower',norm=LogNorm(1,stamp.max())),plt.colorbar()\n \n # x size vs flux\n plt.subplot(132),plt.title('x (row) size vs. flux')\n plt.plot(bf_dat['flux'][:,index],bf_dat['shapex'][:,index],'r.',label='Uncorrected')\n plt.plot(bf_dat['corrflux'][:,index],bf_dat['corrshapex'][:,index],'g.',label='Corrected')\n plt.xlabel(flux_param),plt.ylabel(bf_param1),plt.xscale('log')\n plt.legend(loc='upper left')\n\n # y size vs flux\n plt.subplot(133),plt.title('y (column) size vs. flux')\n plt.plot(bf_dat['flux'][:,index],bf_dat['shapey'][:,index],'r.',label='Uncorrected')\n plt.plot(bf_dat['corrflux'][:,index],bf_dat['corrshapey'][:,index],'g.',label='Corrected')\n plt.xlabel(flux_param),plt.ylabel(bf_param2)\n plt.xscale('log')\n plt.savefig(cat_dir+str(index).zfill(5)+'bfcorr.png')",
"_____no_output_____"
]
],
[
[
"The above figures illustrate the brighter-fatter effect (slight increasing size with flux) in the red dots, and the corrected image analysis in green. Curiously, some of the objects indicate that the default correction method is properly correcting star-like objects, but over- or under-correcting the effect in galaxy images. This could be due to a violation of some of the underlying assumptions in the method, including the small-pixel approximation or the linearity of kernel correction. Some of the remaining trends could be related to an increase in signal-to-noise in the images, however this is a universally applicable issue with shape measurement and is beyond the scope of this notebook. In some of the figures, a rapid increase in size can be seen at the highest fluxes, indicating saturation of pixel wells which is unrelated to the brighter-fatter effect.",
"_____no_output_____"
],
[
"Now plot the flux lost/gained in the process of brighter-fatter correction, by subtracting the flux of the corrected images from the uncorrected ones. The flux measurement is the same as the one used in the above figures and is measured in electrons.",
"_____no_output_____"
]
],
[
[
"colorpalette=cycle(plt.cm.viridis(np.linspace(0,1,len(indexfoo))))\nstylepalette=cycle(['s','*','o'])\nplt.figure(figsize=(14,8))\nfor nfoo in indexfoo:\n flux_foo=bf_dat['flux'][:,nfoo]\n fluxdiffpct_foo=(bf_dat['flux'][:,nfoo]-bf_dat['corrflux'][:,nfoo])/bf_dat['flux'][:,nfoo]*100.\n plt.plot(flux_foo,fluxdiffpct_foo,label=str(nfoo).zfill(3),c=next(colorpalette),marker=next(stylepalette))\nplt.xscale('log')#,plt.yscale('symlog')\nplt.legend()\nplt.xlabel(flux_param,fontsize=20)\nplt.ylabel('Measured flux change due to correction \\n (before - after) [%]',fontsize=20)\n#plt.savefig(cat_dir+'BF_corr_flux_change.png',dpi=150)",
"_____no_output_____"
],
[
"# +TODO other ways of doing matching, catalog stacking",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
ecbc33adc8dc7f574f0ad1e96b928c6aa7bb1c5b | 465,359 | ipynb | Jupyter Notebook | docs/notebooks/.ipynb_checkpoints/Define ecoinvent Rest-of-Worlds-checkpoint.ipynb | cmutel/ecoinvent-row-report | 694717115e79161567105f7061292cd91ac40180 | [
"BSD-3-Clause"
] | null | null | null | docs/notebooks/.ipynb_checkpoints/Define ecoinvent Rest-of-Worlds-checkpoint.ipynb | cmutel/ecoinvent-row-report | 694717115e79161567105f7061292cd91ac40180 | [
"BSD-3-Clause"
] | null | null | null | docs/notebooks/.ipynb_checkpoints/Define ecoinvent Rest-of-Worlds-checkpoint.ipynb | cmutel/ecoinvent-row-report | 694717115e79161567105f7061292cd91ac40180 | [
"BSD-3-Clause"
] | null | null | null | 44.068087 | 600 | 0.49358 | [
[
[
"# Define `Rest-of-World`s in ecoinvent 3.01-3.2\n\nThis notebook shows how the various `Rest-of-World`s were spatially defined for ecoinvent versions 3.01, 3.1, and 3.2 (all system models).\n\n## Start new project",
"_____no_output_____"
]
],
[
[
"from brightway2 import *\nfrom bw2regional.ecoinvent import *\nimport pyprind",
"_____no_output_____"
],
[
"projects.current = \"Ecoinvent 3\"",
"_____no_output_____"
],
[
"bw2setup()",
"Creating default biosphere\n\n"
]
],
[
[
"## Import ecoinvents",
"_____no_output_____"
]
],
[
[
"ecoinvents = [\n# (\"ecoinvent 3.01 cutoff\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.01/cutoff/datasets\"),\n# (\"ecoinvent 3.01 apos\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.01/default/datasets\"),\n# (\"ecoinvent 3.01 consequential\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.01/consequential/datasets\"),\n# (\"ecoinvent 3.1 consequential\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.1/consequential/datasets\"),\n# (\"ecoinvent 3.1 cutoff\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.1/cutoff/datasets\"),\n# (\"ecoinvent 3.1 apos\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.1/default/datasets\"),\n# (\"ecoinvent 3.2 consequential\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.2/consequential/datasets\"),\n (\"ecoinvent 3.2 cutoff\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.2/cutoff/datasets\"),\n# (\"ecoinvent 3.2 apos\", \"/Users/cmutel/Documents/LCA Documents/Ecoinvent/3.2/apos/datasets\")\n]",
"_____no_output_____"
],
[
"def import_all_ecoinvents():\n for name, fp in ecoinvents:\n print(name)\n ei = SingleOutputEcospold2Importer(fp, name)\n ei.apply_strategies()\n if ei.statistics()[2]:\n ei.drop_unlinked(True)\n ei.write_database()\n ei = None",
"_____no_output_____"
],
[
"if 'ecoinvent 3.01 cutoff' not in databases:\n import_all_ecoinvents()",
"ecoinvent 3.2 cutoff\nExtracting XML data from 12916 datasets\nExtracted 12916 datasets in 19.16 seconds\nApplying strategy: normalize_units\nApplying strategy: remove_zero_amount_coproducts\nApplying strategy: remove_zero_amount_inputs_with_no_activity\nApplying strategy: remove_unnamed_parameters\nApplying strategy: es2_assign_only_product_with_amount_as_reference_product\nApplying strategy: assign_single_product_as_activity\nApplying strategy: create_composite_code\nApplying strategy: drop_unspecified_subcategories\nApplying strategy: link_biosphere_by_flow_uuid\nApplying strategy: link_internal_technosphere_by_composite_code\nApplying strategy: delete_exchanges_missing_activity\nApplying strategy: delete_ghost_exchanges\nApplying strategy: nuncertainty\nApplied 13 strategies in 3.09 seconds\n12916 datasets\n459268 exchanges\n0 unlinked exchanges\n \n"
]
],
[
[
"# Fix `ecoinvent` shortnames\n\nThere are a few changes which haven't propogated to ecoinvent master yet.",
"_____no_output_____"
]
],
[
[
"for name, fp in ecoinvents:\n print(name)\n db = Database(name)\n db.make_unsearchable()\n\n for act in pyprind.prog_bar(db):\n old_location = act['location']\n new_location = convert_default_ecoinvent_locations(old_location)\n if old_location != new_location:\n act['location'] = new_location\n act.save()\n \n print()",
"0% 100%\n[ ]"
]
],
[
[
"# Call `discretize_rest_of_world` on ecoinvent 3.2 cutoff\n\nThis function will return the following:\n\n* `activity_dict`: Dictionary from activity keys to `(activity name, reference product)`\n* `row_locations`: List of `(tuple excluded locations, new RoW label)`, where new RoW labels are like `\"RoW-42\"`.\n* `locations`: Dictionary from keys of `(activity name, reference product)` to all the specific locations defined for this activity/product combination.\n* `exceptions`: List of `(activity name, reference product)` markets for which no `RoW` activity is present.",
"_____no_output_____"
]
],
[
[
"activity_dict, row_locations, locations, exceptions = discretize_rest_of_world(\"ecoinvent 3.2 cutoff\", warn=False)",
"_____no_output_____"
],
[
"len(activity_dict), len(row_locations), len(locations), len(exceptions)",
"Warning: DisplayFormatter._ipython_display_formatter_default is deprecated: use @default decorator instead.\nWarning: DisplayFormatter._formatters_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._deferred_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._singleton_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._type_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._singleton_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._type_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._deferred_printers_default is deprecated: use @default decorator instead.\n"
],
[
"activity_dict",
"_____no_output_____"
],
[
"row_locations",
"_____no_output_____"
],
[
"locations",
"_____no_output_____"
],
[
"name, product = ('field application of compost', 'phosphate fertiliser, as P2O5')\n[x for x in Database(\"ecoinvent 3.2 cutoff\") if x['name'] == name and x['reference product'] == product]",
"_____no_output_____"
]
],
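[
[
"# (added sketch) the structures returned above can be combined for quick summaries; for example,\n# counting how often each location appears in the exclusion tuples of the RoW definitions\n# (purely illustrative, based on the data structures described in the markdown above)\nfrom collections import Counter\n\nexclusion_counts = Counter(\n    location\n    for excluded_locations, row_label in row_locations\n    for location in excluded_locations\n)\nexclusion_counts.most_common(10)",
"_____no_output_____"
]
],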
[
[
"# Check to see if other databases have new RoWs",
"_____no_output_____"
]
],
[
[
"all_rows = {x[0] for x in row_locations}\nrows_in_databases = {'ecoinvent 3.2 cutoff': all_rows}",
"_____no_output_____"
],
[
"for name, fp in ecoinvents[-1::-1]:\n if name == 'ecoinvent 3.2 cutoff':\n continue\n else:\n _, data, _, _ = discretize_rest_of_world(name, warn=False)\n new_rows = {x[0] for x in data}\n print(name, len(new_rows.difference(all_rows)))\n rows_in_databases[name] = new_rows\n all_rows = all_rows.union(new_rows)",
"ecoinvent 3.2 apos 4\necoinvent 3.2 consequential 0\necoinvent 3.1 apos 38\necoinvent 3.1 cutoff 0\necoinvent 3.1 consequential 0\necoinvent 3.01 consequential 19\necoinvent 3.01 apos 1\necoinvent 3.01 cutoff 0\n"
],
[
"len(all_rows)",
"Warning: DisplayFormatter._ipython_display_formatter_default is deprecated: use @default decorator instead.\nWarning: DisplayFormatter._formatters_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._deferred_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._singleton_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._type_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._singleton_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._type_printers_default is deprecated: use @default decorator instead.\nWarning: PlainTextFormatter._deferred_printers_default is deprecated: use @default decorator instead.\n"
],
[
"all_rows_sorted = sorted(all_rows, key=lambda x: str(x))",
"_____no_output_____"
],
[
"all_rows_sorted = [(\"RoW-{}\".format(i + 1), x) for i, x in enumerate(all_rows_sorted)]",
"_____no_output_____"
],
[
"all_rows_sorted",
"_____no_output_____"
],
[
"import json\n\nwith open(\"rows-ecoinvent.json\", \"w\") as f:\n json.dump(all_rows_sorted, f, indent=2)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbc34da3a69e4ff774d5da5f94ea3355cada4f5 | 1,679 | ipynb | Jupyter Notebook | ps_ml/Notebooks/PythonFirstSteps.ipynb | grrakesh4769/python-l | 57820c111f8821cbb5665fc9eadb774649596778 | [
"MIT"
] | null | null | null | ps_ml/Notebooks/PythonFirstSteps.ipynb | grrakesh4769/python-l | 57820c111f8821cbb5665fc9eadb774649596778 | [
"MIT"
] | null | null | null | ps_ml/Notebooks/PythonFirstSteps.ipynb | grrakesh4769/python-l | 57820c111f8821cbb5665fc9eadb774649596778 | [
"MIT"
] | null | null | null | 16.959596 | 49 | 0.451459 | [
[
[
"# Learning\n\nThis notebook contains basics of *Jupyter*",
"_____no_output_____"
],
[
"## Example 1: Hello, World",
"_____no_output_____"
]
],
[
[
"my_name = \"Name\"\nh_stmt= \"Hello, \"+my_name\nprint(h_stmt, end=\"\\n\\n\")\nx=10",
"Hello, Name\n\n"
]
],
[
[
"## Example 2: Loop",
"_____no_output_____"
]
],
[
[
"for j in range(1,5):\n x=x+j\n print(\"j={0}, and x={1}\".format(j,x))",
"j=1, and x=11\nj=2, and x=13\nj=3, and x=16\nj=4, and x=20\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbc40fd688b16cfcdeaa2f90d0f0a7e4eab7202 | 816,643 | ipynb | Jupyter Notebook | notebooks/dynophore.ipynb | MQSchleich/dylightful | 6abbb690c8387c522c9bff21c72b5c66aab77ede | [
"MIT"
] | null | null | null | notebooks/dynophore.ipynb | MQSchleich/dylightful | 6abbb690c8387c522c9bff21c72b5c66aab77ede | [
"MIT"
] | 5 | 2022-02-05T12:47:42.000Z | 2022-03-16T11:42:20.000Z | notebooks/dynophore.ipynb | MQSchleich/dylightful | 6abbb690c8387c522c9bff21c72b5c66aab77ede | [
"MIT"
] | null | null | null | 414.539594 | 243,443 | 0.925775 | [
[
[
"# Dynophore notebook",
"_____no_output_____"
],
[
"## Introduction",
"_____no_output_____"
],
[
"### What is a dynophore?\n\n* A **dynophore** is a collection of so-called superfeatures. \n* A **superfeature** is defined as a pharmacophore feature on the ligand side — defined by a feature type and one or more ligand atoms — that occurs at least once during an MD simulation. Example: HBA[4618] (feature type, ligand atom numbers)\n* A superfeature has a **point cloud**, where each point corresponds to the centroid of feature during one frame of the trajectory.\n* A superfeature can have one or more interaction partner(s) on the macromolecule side. These interaction partners are called **environmental partners**. Example: GLN-131-A[2057] (residue name, residue number, chain, atom serial numbers).",
"_____no_output_____"
],
[
"### How to work with a dynophore?\n\n* **Dynophore raw data** can be analyzed conveniently right here in this notebook by working with the `Dynophore` class.\n* **Dynophore 2D view** shows all superfeatures on a 2D view of the structure-bound ligand using `rdkit`.\n* **Dynophore 3D view** maps each superfeature's point cloud in 3D using `nglview`, allowing for an easy visual inspection of the dynamic macromolecule-ligand interactions. Point clouds are rendered alongside the complex structure's topology and (optionally) the trajectory underlying the dynophore.\n* **Dynophore statistics** cover the occurrence of superfeatures and their environmental partners as well as distances between them.",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"%matplotlib widget",
"_____no_output_____"
],
[
"from pathlib import Path\n\nimport nglview as nv\n\n# Import Dynophore class - contains all dynophore data\nimport dynophores as dyno",
"_____no_output_____"
]
],
[
[
"## Set data paths",
"_____no_output_____"
]
],
[
[
"dyno_path = Path(\"dynophores-master/dynophores/tests/data/out\")\npdb_path = Path(\"dynophores-master/dynophores/tests/data/in/startframe.pdb\")\ndcd_path = Path(\"dynophores-master/dynophores/tests/data/in/trajectory.dcd\")",
"_____no_output_____"
]
],
[
[
"__Note__: You can set `dcd_path = None` in case you do not want to view the trajectory.",
"_____no_output_____"
],
[
"## Load data as `Dynophore` object\n\nYou can load the dynophore data as `Dynophore` object. We will need this object below for visualization purposes but you can also use the raw data for your own customized analyses.\n\n__Note__: Check out [this tutorial](https://dynophores.readthedocs.io/en/latest/tutorials/explore_data.html) on the dynophore's data structure.",
"_____no_output_____"
]
],
[
[
"dynophore = dyno.Dynophore.from_dir(dyno_path)",
"_____no_output_____"
]
],
[
[
"## 2D view\n\nInvestigate the dynophore's superfeatures in 2D; display the atom serial numbers (those will show up in the superfeatures' identifiers in the plots below).",
"_____no_output_____"
]
],
[
[
"dyno.view2d.interactive.show(dynophore)",
"_____no_output_____"
]
],
[
[
"## 3D view\n\nInvestigate the dynophore in 3D - you have different options that you can change in the method signature below:\n\n* `pdb_path` and `dcd_path` have been defined at the beginning of this notebook; these are the file paths to your complex structure's topology and trajectory (if you do not want to load the trajectory, set `dcd_path=None`).\n* `visualization_type`: `spheres` or `points`\n * [Default] Show each frames features as small spheres with `visualization_type=spheres`.\n * [Work-In-Progress] Render the dynophore cloud as more burred and connected points using `visualization_type=points` (still has some NGL rendering issues that we cannot fix on our end, see [NGL GitHub issue](https://github.com/nglviewer/ngl/issues/868))\n* `color_cloud_by_frame`: `False` or `True`\n * [Default] Color cloud by superfeature type. Example: The points belonging to a HBA-based superfeature will all be colored red.\n * Color cloud by superfeature type *and* frame index. Example: The points belonging to a HBA-based superfeauture will be colored from red (first frame) to light red (last frame).\n* `macromolecule_color`: Set a color for the macromolecule; defaults to blue.\n* `frame_range`: Show a selected frame range only, e.g. `frame_range=[100, 1000]`. By default, all frames are shown with `frame_range=None`.\n\nInteract directly with the 3D visualization using the NGL GUI:\n\n* Toogle on/off macromolecule > *cartoon*\n* Toogle on/off ligand > *hyperball*\n* Toogle on/off pocket residue side chains > *licorice*\n* Toogle on/off superfeatures > superfeature identifier e.g. *HBA[4618]*\n* Run trajectory if loaded",
"_____no_output_____"
]
],
[
[
"view = dyno.view3d.show(\n dynophore,\n pdb_path=pdb_path,\n dcd_path=dcd_path,\n visualization_type=\"spheres\",\n color_cloud_by_frame=False,\n macromolecule_color=\"#005780\",\n frame_range=None,\n)\nview.display(gui=True, style=\"ngl\")",
"_____no_output_____"
]
],
[
[
"In case a trajectory is loaded, use the `TrajectoryPlayer` for more visualization options:",
"_____no_output_____"
]
],
[
[
"nv.player.TrajectoryPlayer(view)",
"_____no_output_____"
]
],
[
[
"## Statistics",
"_____no_output_____"
],
[
"### Plot interactions overview (heatmap)\n\nCheck how often each superfeature interacts with which environmental partners throughout the MD simulation (in %).",
"_____no_output_____"
]
],
[
[
"dyno.plot.interactive.superfeatures_vs_envpartners(dynophore)",
"_____no_output_____"
]
],
[
[
"### Plot superfeature occurrences (time series)\n\nCheck when (barcode) and how often (in %) a superfeature $S$ occurs throughout the MD simulation.\n\n$S\\,\\text{occurrence [%]} = \\frac{\\text{Number of frames in which}\\,S\\,\\text{occurs}}{\\text{Number of frames}} \\times 100$",
"_____no_output_____"
]
],
[
[
"dyno.plot.interactive.superfeatures_occurrences(dynophore)",
"_____no_output_____"
]
],
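[
[
"# (added sketch) the raw numbers behind this plot: assuming dynophore.superfeatures_occurrences is the\n# 0/1 frame-by-superfeature table used later in this notebook, the occurrence percentage from the\n# formula above is simply the column mean times 100\nocc = dynophore.superfeatures_occurrences\noccurrence_pct = occ.mean(axis=0) * 100\noccurrence_pct.sort_values(ascending=False)",
"_____no_output_____"
]
],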
[
[
"### Plot interactions for example superfeature (time series)",
"_____no_output_____"
],
[
"#### Interaction occurrence\n\nCheck when (barcode) and how often (in %\\) each environmental partner $E$ interacts in context of a superfeature $S$ ($E_S$).\n\n$E_S\\,\\text{occurrence [%]} = \\frac{\\text{Number of frames where}\\,E\\,\\text{interacts in context of}\\,S}{\\text{Number of frames where}\\,S\\,\\text{occurs}} \\times 100$",
"_____no_output_____"
]
],
[
[
"dyno.plot.interactive.envpartners_occurrences(dynophore)",
"_____no_output_____"
]
],
[
[
"#### Interaction distances\n\nCheck for each superfeature, the distances to all environmental partners throughout the MD simulation. \n\n* **Time series**: Distances are shown for all frames regardless of whether that frame shows an interaction between the environmental partner and the superfeature's ligand atoms or not. Interactions are indicated with a dot in the plot.\n* **Histogram**: Only distances are shown that belong to frames in which an interaction between the environmental partner and the superfeature's ligand atoms ocurrs.",
"_____no_output_____"
]
],
[
[
"dyno.plot.interactive.envpartners_distances(dynophore)",
"_____no_output_____"
]
],
[
[
"#### Interaction profile (all-in-one)\n\nThis is a summary of the plots shown above. Note that in this case *all* distances throughout the MD simulation are shown (regardless of whether the frame shows an interaction or not).",
"_____no_output_____"
]
],
[
[
"dyno.plot.interactive.envpartners_all_in_one(dynophore)",
"_____no_output_____"
]
],
[
[
"# Hidden Markov Model\n\nThe aim is to derive time information from the dynophore in form of the transition matrix. Therefore, one wants to find the transition probability matrix $\\Gamma$. \n\n1.) Generate Time series of Superfeatures \n\n2.) Do modelling ",
"_____no_output_____"
],
[
"## Generate Time Series\n\nrecognize in the above code the sequence of occurences was generated, there is already code to generate the time series",
"_____no_output_____"
]
],
[
[
"print(\"There are\", dynophore.n_superfeatures, \"superfeatures present.\")",
"There are 10 superfeatures present.\n"
]
],
[
[
"As there are 10 superfeatures present, there could be $2^n = 2^{10}=1024$ observations present ",
"_____no_output_____"
]
],
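[
[
"# (added sketch) one way to see how many of the 2**10 possible observation symbols actually occur is\n# to encode each frame's on/off pattern as a single integer\n# (assumes the occurrence table is a 0/1 frame-by-superfeature DataFrame)\nimport numpy as np\n\nocc = dynophore.superfeatures_occurrences.astype(int)\nsymbols = occ.dot(2 ** np.arange(occ.shape[1]))\nprint(symbols.nunique(), 'distinct observation symbols out of', 2 ** occ.shape[1], 'possible')",
"_____no_output_____"
]
],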
[
[
"time_ser = dynophore.superfeatures_occurrences\ntime_ser",
"_____no_output_____"
]
],
[
[
"### Therefore, there are distinct observations present",
"_____no_output_____"
]
],
[
[
"cols = dynophore.superfeatures_occurrences.columns.to_list()\nprint(cols)\ndynophore.superfeatures_occurrences[cols[0]]",
"['H[4599,4602,4601,4608,4609,4600]', 'H[4615,4623,4622,4613,4621,4614]', 'HBA[4596]', 'HBA[4619]', 'HBD[4612]', 'AR[4622,4615,4623,4613,4614,4621]', 'HBA[4606]', 'HBD[4598]', 'HBA[4618]', 'AR[4605,4607,4603,4606,4604]']\n"
]
],
[
[
"# Training the HMM \n1.) Find emission matrix \n ii.) Find number of observations \n2.) Then train model \n3.) Add slider",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"time_ser = dynophore.superfeatures_occurrences\nobs = time_ser.drop_duplicates()\nnum_obs = len(obs)\nprint(\"There are actually \", num_obs, \" present.\")\nobs = obs.to_numpy()\ntime_ser = time_ser.to_numpy()\nprint(\"The length of the observation sequence is \", len(time_ser))",
"There are actually 28 present.\nThe length of the observation sequence is 1002\n"
],
[
"model = hmm.GaussianHMM(n_components=3, covariance_type=\"full\")\n\nmodel.startprob_ = np.array([0.6, 0.3, 0.1])\n\nmodel.transmat_ = np.array([[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.3, 0.3, 0.4]])\n\nmodel.means_ = np.array([[0.0, 0.0], [3.0, -3.0], [5.0, 10.0]])\n\nmodel.covars_ = np.tile(np.identity(2), (3, 1, 1))\n\nX, Z = model.sample(100)",
"_____no_output_____"
],
[
"model = hmm.GaussianHMM(n_components=5, n_iter=10000, params=\"st\", init_params=\"st\")\nmodel.fit(X)\nmodel.score_samples(X)",
"_____no_output_____"
],
[
"from hmmlearn import hmm\n\nmodel = hmm.GaussianHMM(n_components=5, n_iter=10000, params=\"st\", init_params=\"st\")\nmodel.fit(time_ser)\n\n\ndef calculate_probas(time_ser, model):\n probas = model.predict_proba(time_ser)\n states = model.predict(time_ser)\n prob_ser = np.zeros(probas.shape)\n for i in range(len(states)):\n prob_ser[i, states[i]] = probas[i, states[i]]\n return np.mean(prob_ser, axis=0)",
"_____no_output_____"
],
[
"model.score_samples(time_ser)",
"_____no_output_____"
],
[
"model.decode(time_ser)[1] == model.predict(time_ser)",
"_____no_output_____"
],
[
"model.monitor_.converged",
"_____no_output_____"
],
[
"model.transmat_",
"_____no_output_____"
],
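[
"# (added sketch) once a transition matrix has been estimated, a quantity of interest is the stationary\n# distribution of the hidden states, i.e. the left eigenvector of transmat_ for eigenvalue 1;\n# a small numpy-only illustration:\nevals, evecs = np.linalg.eig(model.transmat_.T)\nstationary = np.real(evecs[:, np.argmin(np.abs(evals - 1))])\nstationary = stationary / stationary.sum()\nprint('estimated stationary distribution of the hidden states:', stationary)",
"_____no_output_____"
],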
[
"from ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets\n\n\n@interact(num_hidden_states=(1, 28))\ndef markov_model(num_hidden_states):\n model = hmm.GMMHMM(\n n_components=num_hidden_states, n_iter=10000, params=\"st\", init_params=\"st\"\n )\n model.fit(time_ser)\n plt.cla()\n plt.clf()\n ser = model.predict(time_ser)\n plt.scatter(np.linspace(0, len(ser), len(ser)), ser)\n plt.show()",
"_____no_output_____"
],
[
"@interact(num_hidden_states=(1, 28))\ndef markov_model(num_hidden_states):\n model = hmm.GaussianHMM(\n n_components=num_hidden_states, n_iter=10000, params=\"st\", init_params=\"st\"\n )\n model.fit(time_ser)\n plt.cla()\n ser = model.predict(time_ser)\n plt.scatter(np.linspace(0, len(ser), len(ser)), ser)\n plt.show()",
"_____no_output_____"
]
],
[
[
"### There is no need for the Hidden Markov model to be a gaussian mixture model. We could just use the simple Gaussian HMM. However, numerically the Baum-Welch algorithm is not stable and a Monte Carlo based approach should be tested to see if the results are similar. ",
"_____no_output_____"
],
[
"# Finding the best hyperparameter ",
"_____no_output_____"
]
],
[
[
"def calculate_probas(time_ser, model):\n probas = model.predict_proba(time_ser)\n states = model.predict(time_ser)\n prob_ser = np.zeros(probas.shape)\n for i in range(len(states)):\n prob_ser[i, states[i]] = probas[i, states[i]]\n return np.amin(np.amin(prob_ser, axis=0))",
"_____no_output_____"
],
[
"from hmmlearn import hmm\n\nscores = np.zeros(15)\nprobas = np.zeros(15)\nfor i in range(1, 15):\n model = hmm.GaussianHMM(n_components=i, n_iter=10000)\n try:\n model.fit(time_ser)\n scores[i] = model.score(time_ser)\n probas[i] = calculate_probas(time_ser, model)\n except:\n scores[i] = 0\n probas[i] = 0",
"_____no_output_____"
],
[
"plt.cla()\nplt.clf()\nplt.plot(scores)\nplt.show()",
"_____no_output_____"
],
[
"plt.cla()\nplt.clf()\nplt.plot(np.arange(1, 16, 1), probas)\nplt.show()",
"_____no_output_____"
],
[
"print(probas)",
"_____no_output_____"
]
],
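[
[
"# (added sketch) the raw log-likelihood always favours more states; a common alternative is to penalize\n# model complexity, e.g. with an (approximate) BIC; the parameter count below assumes the default\n# diagonal-covariance GaussianHMM used elsewhere in this notebook\ndef approx_bic(model, X):\n    n_states = model.n_components\n    n_features = X.shape[1]\n    n_params = (n_states - 1) + n_states * (n_states - 1) + 2 * n_states * n_features\n    return -2 * model.score(X) + n_params * np.log(X.shape[0])\n\nbics = []\nfor n in range(1, 15):\n    m = hmm.GaussianHMM(n_components=n, n_iter=1000)\n    try:\n        m.fit(time_ser)\n        bics.append(approx_bic(m, time_ser))\n    except Exception:\n        bics.append(np.nan)\nplt.cla()\nplt.clf()\nplt.plot(range(1, 15), bics)\nplt.xlabel('Number of hidden states')\nplt.ylabel('approx. BIC (lower is better)')\nplt.show()",
"_____no_output_____"
]
],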
[
[
"# Example load only the Dynophore Trajectory",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ntime_ser = pd.read_json(\"Trajectories/ZIKV/ZIKV-Pro-427-1_dynophore_time_series.json\")\nobs = time_ser.drop_duplicates()\nnum_obs = len(obs)\nprint(\"There are actually \", num_obs, \" present.\")\nobs = obs.to_numpy()\ntime_ser = time_ser.to_numpy()\nprint(\"The length of the observation sequence is \", len(time_ser))",
"There are actually 77 present.\nThe length of the observation sequence is 5001\n"
],
[
"from hmmlearn import hmm\n\nnum_sim = 10\nscores = np.zeros(num_sim)\nprobas = np.zeros(num_sim)\nfor i in range(1, num_sim):\n model = hmm.GMMHMM(n_components=i, n_iter=10000, params=\"st\", init_params=\"st\")\n try:\n model.fit(time_ser)\n scores[i] = model.score(time_ser)\n probas[i] = calculate_probas(time_ser, model)\n except:\n scores[i] = 0\n probas[i] = 0\nplt.cla()\nplt.clf()\nplt.plot(scores)",
"_____no_output_____"
],
[
"plt.show()",
"_____no_output_____"
],
[
"probas",
"_____no_output_____"
],
[
"plt.cla()\nplt.clf()\nplt.title(\"ZIKV Inhibitor\")\nplt.xlabel(\"Number of states\")\nplt.ylabel(\"Probability\")\nplt.plot(probas)\nplt.show()",
"_____no_output_____"
],
[
"model = hmm.GMMHMM(n_components=2, n_iter=10000, params=\"st\", init_params=\"st\")\nmodel.fit(time_ser)",
"_____no_output_____"
],
[
"model.transmat_",
"_____no_output_____"
],
[
"# Different Probability function",
"_____no_output_____"
],
[
"from deeptime.decomposition import VAMP, TICA\nfrom deeptime.data import ellipsoids",
"_____no_output_____"
],
[
"data = ellipsoids(seed=17)\n\ndiscrete_trajectory = data.discrete_trajectory(n_steps=1000)\n\nfeature_trajectory = data.map_discrete_to_observations(discrete_trajectory)\nprint(discrete_trajectory)\nprint(feature_trajectory)",
"[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1\n 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0\n 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1\n 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0]\n[[-0.3648853 -0.95137019]\n [-1.66736297 -1.30518374]\n [-2.76912575 -2.17251814]\n ...\n [-1.04560003 -0.78904337]\n [-1.72331087 -1.7127156 ]\n [-0.70384717 -1.0944021 ]]\n"
],
[
"tica = TICA(dim=10, lagtime=1)\n\ntica = tica.fit(time_ser).fetch_model()\n\ntica_projection = tica.transform(time_ser)\n\ndxy_tica = tica.singular_vectors_left[:, 0]\nprint(tica_projection)\nprint(time_ser)",
"[[-1.56359433e+00 5.61524382e-01 4.39292650e+00 ... 1.27425906e-02\n -3.98442748e-02 8.08724223e-02]\n [-7.31048876e-01 4.28722813e-01 1.99045342e+00 ... -2.78988116e-02\n 1.75073968e-01 -3.74831836e-01]\n [-2.96558066e+00 4.52463919e-01 4.22002140e+00 ... 3.60931193e-03\n -2.03617008e-02 4.88848894e-02]\n ...\n [ 3.36603857e-01 4.91591219e-01 -4.25534573e-02 ... -1.77949946e-02\n -9.52754230e-03 1.06425357e-03]\n [ 3.36603857e-01 4.91591219e-01 -4.25534573e-02 ... -1.77949946e-02\n -9.52754230e-03 1.06425357e-03]\n [ 3.36603857e-01 4.91591219e-01 -4.25534573e-02 ... -1.77949946e-02\n -9.52754230e-03 1.06425357e-03]]\n[[ 1 1 1 ... 1 1 5001]\n [ 1 1 1 ... 1 1 5001]\n [ 1 1 1 ... 1 0 5001]\n ...\n [ 1 1 1 ... 1 1 5001]\n [ 1 1 1 ... 1 1 5001]\n [ 1 1 1 ... 1 1 5001]]\n"
],
[
"plt.plot(tica_projection)\nplt.show()",
"_____no_output_____"
],
[
"tica = TICA(dims=3, lagtime=1)\ndata = np.random.uniform(size=(1000, 6))\ntica = tica.fit(data).fetch_model()\nprint(data)\nprint(tica.singular_values)",
"_____no_output_____"
]
],
[
[
"# Monte Carlo Approach",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecbc43120a7649f4a070c72366e7bbe35d34efd5 | 315,243 | ipynb | Jupyter Notebook | Prediction models/SEIR_COVID19.ipynb | anuradhakar49/Coro-Lib | 49e42a48c433bd50267bc779f748fb6467d8b5fe | [
"MIT"
] | null | null | null | Prediction models/SEIR_COVID19.ipynb | anuradhakar49/Coro-Lib | 49e42a48c433bd50267bc779f748fb6467d8b5fe | [
"MIT"
] | null | null | null | Prediction models/SEIR_COVID19.ipynb | anuradhakar49/Coro-Lib | 49e42a48c433bd50267bc779f748fb6467d8b5fe | [
"MIT"
] | null | null | null | 349.493348 | 62,154 | 0.917613 | [
[
[
"Source: https://github.com/alsnhll",
"_____no_output_____"
],
[
"## Model\n\n### Equations\n\n\\begin{equation}\n\\begin{split}\n\\dot{S} &= -\\beta_1 I_1 S -\\beta_2 I_2 S - \\beta_3 I_3 S\\\\\n\\dot{E} &=\\beta_1 I_1 S +\\beta_2 I_2 S + \\beta_3 I_3 S - a E \\\\\n\\dot{I_1} &= a E - \\gamma_1 I_1 - p_1 I_1 \\\\\n\\dot{I_2} &= p_1 I_1 -\\gamma_2 I_2 - p_2 I_2 \\\\\n\\dot{I_3} & = p_2 I_2 -\\gamma_3 I_3 - \\mu I_3 \\\\\n\\dot{R} & = \\gamma_1 I_1 + \\gamma_2 I_2 + \\gamma_3 I_3 \\\\\n\\dot{D} & = \\mu I_3\n\\end{split}\n\\end{equation}\n\n### Variables\n* $S$: Susceptible individuals\n* $E$: Exposed individuals - infected but not yet infectious or symptomatic\n* $I_i$: Infected individuals in severity class $i$. Severity increaes with $i$ and we assume individuals must pass through all previous classes\n * $I_1$: Mild infection (hospitalization not required)\n * $I_2$: Severe infection (hospitalization required)\n * $I_3$: Critical infection (ICU required)\n* $R$: individuals who have recovered from disease and are now immune\n* $D$: Dead individuals\n* $N=S+E+I_1+I_2+I_3+R+D$ Total population size (constant)\n\n### Parameters\n* $\\beta_i$ rate at which infected individuals in class $I_i$ contact susceptibles and infect them\n* $a$ rate of progression from the exposed to infected class\n* $\\gamma_i$ rate at which infected individuals in class $I_i$ recover from disease and become immune\n* $p_i$ rate at which infected individuals in class $I_i$ progress to class $I_{I+1}$\n* $\\mu$ death rate for individuals in the most severe stage of disease\n\n### Basic reproductive ratio\n\nIdea: $R_0$ is the sum of \n1. the average number of secondary infections generated from an individual in stage $I_1$\n2. the probability that an infected individual progresses to $I_2$ multiplied by the average number of secondary infections generated from an individual in stage $I_2$\n3. the probability that an infected individual progresses to $I_3$ multiplied by the average number of secondary infections generated from an individual in stage $I_3$\n\n\\begin{equation}\n\\begin{split}\nR_0 & = N\\frac{\\beta_1}{p_1+\\gamma_1} + \\frac{p_1}{p_1 + \\gamma_1} \\left( \\frac{N \\beta_2}{p_2+\\gamma_2} + \\frac{p_2}{p_2 + \\gamma_2} \\frac{N \\beta_3}{\\mu+\\gamma_3}\\right)\\\\\n&= N\\frac{\\beta_1}{p_1+\\gamma_1} \\left(1 + \\frac{p_1}{p_2 + \\gamma_2}\\frac{\\beta_2}{\\beta_1} \\left( 1 + \\frac{p_2}{\\mu + \\gamma_3} \\frac{\\beta_3}{\\beta_2} \\right) \\right)\n\\end{split}\n\\end{equation}",
"_____no_output_____"
]
],
[
[
"import numpy as np, matplotlib.pyplot as plt\nfrom scipy.integrate import odeint",
"_____no_output_____"
],
[
"#Defining the differential equations\n\n#Don't track S because all variables must add up to 1 \n#include blank first entry in vector for beta, gamma, p so that indices align in equations and code. \n#In the future could include recovery or infection from the exposed class (asymptomatics)\n\ndef seir(y,t,b,a,g,p,u,N): \n dy=[0,0,0,0,0,0]\n S=N-sum(y);\n dy[0]=np.dot(b[1:3],y[1:3])*S-a*y[0] # E\n dy[1]= a*y[0]-(g[1]+p[1])*y[1] #I1\n dy[2]= p[1]*y[1] -(g[2]+p[2])*y[2] #I2\n dy[3]= p[2]*y[2] -(g[3]+u)*y[3] #I3\n dy[4]= np.dot(g[1:3],y[1:3]) #R\n dy[5]=u*y[3] #D\n\n return dy",
"_____no_output_____"
],
[
"# Define parameters based on clinical observations\n\n#I will add sources soon\n# https://github.com/midas-network/COVID-19/tree/master/parameter_estimates/2019_novel_coronavirus\n\nIncubPeriod=5 #Incubation period, days\nDurMildInf=10 #Duration of mild infections, days\nFracMild=0.8 #Fraction of infections that are mild\nFracSevere=0.15 #Fraction of infections that are severe\nFracCritical=0.05 #Fraction of infections that are critical\nCFR=0.02 #Case fatality rate (fraction of infections resulting in death)\nTimeICUDeath=7 #Time from ICU admission to death, days\nDurHosp=11 #Duration of hospitalization, days\n",
"_____no_output_____"
],
[
"# Define parameters and run ODE\n\nN=1000\nb=np.zeros(4) #beta\ng=np.zeros(4) #gamma\np=np.zeros(3)\n\na=1/IncubPeriod\n\nu=(1/TimeICUDeath)*(CFR/FracCritical)\ng[3]=(1/TimeICUDeath)-u\n\np[2]=(1/DurHosp)*(FracCritical/(FracCritical+FracSevere))\ng[2]=(1/DurHosp)-p[2]\n\ng[1]=(1/DurMildInf)*FracMild\np[1]=(1/DurMildInf)-g[1]\n\n#b=2e-4*np.ones(4) # all stages transmit equally\nb=2.5e-4*np.array([0,1,0,0]) # hospitalized cases don't transmit\n\n#Calculate basic reproductive ratio\nR0=N*((b[1]/(p[1]+g[1]))+(p[1]/(p[1]+g[1]))*(b[2]/(p[2]+g[2])+ (p[2]/(p[2]+g[2]))*(b[3]/(u+g[3]))))\nprint(\"R0 = {0:4.1f}\".format(R0))",
"R0 = 2.5\n"
],
[
"print(b)\nprint(a)\nprint(g)\nprint(p)\nprint(u)",
"[0. 0.00025 0. 0. ]\n0.2\n[0. 0.08 0.06818182 0.08571429]\n[0. 0.02 0.02272727]\n0.057142857142857134\n"
],
[
"tmax=365\ntvec=np.arange(0,tmax,0.1)\nic=np.zeros(6)\nic[0]=1\n\nsoln=odeint(seir,ic,tvec,args=(b,a,g,p,u,N))\nsoln=np.hstack((N-np.sum(soln,axis=1,keepdims=True),soln))\n\nplt.figure(figsize=(13,5))\nplt.subplot(1,2,1)\nplt.plot(tvec,soln)\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number per 1000 People\")\nplt.legend((\"S\",\"E\",\"I1\",\"I2\",\"I3\",\"R\",\"D\"))\nplt.ylim([0,1000])\n\n#Same plot but on log scale\nplt.subplot(1,2,2)\nplt.plot(tvec,soln)\nplt.semilogy()\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number per 1000 People\")\nplt.legend((\"S\",\"E\",\"I1\",\"I2\",\"I3\",\"R\",\"D\"))\nplt.ylim([1,1000])\n#plt.tight_layout()",
"_____no_output_____"
],
[
"# get observed growth rate r (and doubling time) for a particular variable between selected time points\n#(all infected classes eventually grow at same rate during early infection)\n\n#Don't have a simple analytic formula for r for this model due to the complexity of the stages\n\ndef growth_rate(tvec,soln,t1,t2,i):\n i1=np.where(tvec==t1)[0][0]\n i2=np.where(tvec==t2)[0][0]\n r=(np.log(soln[i2,1])-np.log(soln[i1,1]))/(t2-t1)\n DoublingTime=np.log(2)/r\n\n return r, DoublingTime",
"_____no_output_____"
],
[
"(r,DoublingTime)=growth_rate(tvec,soln,10,20,1)\nprint(\"The epidemic growth rate is = {0:4.2f} per day and the doubling time {1:4.1f} days \".format(r,DoublingTime))",
"The epidemic growth rate is = 0.08 per day and the doubling time 9.0 days \n"
]
],
[
[
"### Repeat but with a social distancing measure that reduces transmission rate",
"_____no_output_____"
]
],
[
[
"bSlow=0.6*b\nR0Slow=N*((bSlow[1]/(p[1]+g[1]))+(p[1]/(p[1]+g[1]))*(bSlow[2]/(p[2]+g[2])+ (p[2]/(p[2]+g[2]))*(bSlow[3]/(u+g[3]))))\n\nsolnSlow=odeint(seir,ic,tvec,args=(bSlow,a,g,p,u,N))\nsolnSlow=np.hstack((N-np.sum(solnSlow,axis=1,keepdims=True),solnSlow))\n\nplt.figure(figsize=(13,5))\nplt.subplot(1,2,1)\nplt.plot(tvec,solnSlow)\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number per 1000 People\")\nplt.legend((\"S\",\"E\",\"I1\",\"I2\",\"I3\",\"R\",\"D\"))\nplt.ylim([0,1000])\n\n#Same plot but on log scale\nplt.subplot(1,2,2)\nplt.plot(tvec,solnSlow)\nplt.semilogy()\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number per 1000 People\")\nplt.legend((\"S\",\"E\",\"I1\",\"I2\",\"I3\",\"R\",\"D\"))\nplt.ylim([1,1000])\n\n(rSlow,DoublingTimeSlow)=growth_rate(tvec,solnSlow,30,40,1)\n\nplt.show()\nprint(\"R0 under intervention = {0:4.1f}\".format(R0Slow))\nprint(\"The epidemic growth rate is = {0:4.2f} per day and the doubling time {1:4.1f} days \".format(rSlow,DoublingTimeSlow))",
"_____no_output_____"
]
],
[
[
"#### Compare epidemic growth with and without intervention",
"_____no_output_____"
]
],
[
[
"### All infectious cases (not exposed)\n\nplt.figure(figsize=(13,5))\nplt.subplot(1,2,1)\nplt.plot(tvec,np.sum(soln[:,2:5],axis=1,keepdims=True))\nplt.plot(tvec,np.sum(solnSlow[:,2:5],axis=1,keepdims=True))\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number per 1000 People\")\nplt.legend((\"No intervention\",\"Intervention\"))\nplt.ylim([0,1000])\nplt.title('All infectious cases')",
"_____no_output_____"
]
],
[
[
"## COVID19 Cases vs Hospital Capacity",
"_____no_output_____"
],
[
"Depending on the severity ($I_i$) stage of COVID-19 infection, patients need different level of medical care. \n\nIndividuals in $I_1$ have \"mild\" infection, meaning they have cough/fever/other flu-like symptoms and may also have mild pneumonia. Mild pneumonia does not require hospitalization, although in many outbreak locations like China and South Korea all symptomatic patients are being hospitalized. This is likely to reduce spread and to monitor these patients in case they rapidly progress to worse outcome. However, it is a huge burden on the health care system.\n\nIndividuals in $I_2$ have \"severe\" infection, which is categorized medically as having any of the following: \"dyspnea, respiratory frequency 30/min, blood oxygen saturation 93%, partial pressure of arterial oxygen to fraction of inspired oxygen ratio $<$300, lung infiltrates $>$50% within 24 to 48 hours\". These individuals require hospitalization but can be treated on regular wards. They may require supplemental oxygen. \n\nIndividuals in $I_3$ have \"critical\" infection, which is categorized as having any of the following: \"respiratory failure, septic shock, and/or multiple organ dysfunction or failure\".\nThey require ICU-level care, generally because they need mechanical ventilation. \n\nWe consider different scenarios for care requirements. One variation between scenarios is whether we include hospitalization for all individuals or only those with severe or critical infection. Another is the care of critical patients. If ICUs are full, hospitals have protocols developed for pandemic influenza to provide mechanical ventilation outside regular ICU facility and staffing requirements. Compared to \"conventional\" ventilation protocols, there are \"contingency\" and \"crisis\" protocols that can be adopted to increase patient loads. These protocols involve increasing patient:staff ratios, using non-ICU beds, and involving non-critical care specialists in patient care. \n\n",
"_____no_output_____"
]
],
[
[
"#Parameter sources: https://docs.google.com/spreadsheets/d/1zZKKnZ47lqfmUGYDQuWNnzKnh-IDMy15LBaRmrBcjqE\n\n# All values are adjusted for increased occupancy due to flu season\n\nAvailHospBeds=2.6*(1-0.66*1.1) #Available hospital beds per 1000 ppl in US based on total beds and occupancy\nAvailICUBeds=0.26*(1-0.68*1.07) #Available ICU beds per 1000 ppl in US, based on total beds and occupancy. Only counts adult not neonatal/pediatric beds\nConvVentCap=0.062 #Estimated excess # of patients who could be ventilated in US (per 1000 ppl) using conventional protocols\nContVentCap=0.15 #Estimated excess # of patients who could be ventilated in US (per 1000 ppl) using contingency protocols\nCrisisVentCap=0.42 #Estimated excess # of patients who could be ventilated in US (per 1000 ppl) using crisis protocols\n",
"_____no_output_____"
]
],
[
[
"### Assumptions 1\n* Only severe or critical cases go to the hospital\n* All critical cases require ICU care and mechanical ventilation\n",
"_____no_output_____"
]
],
[
[
"NumHosp=soln[:,3]+soln[:,4]\nNumICU=soln[:,4]\n\nplt.figure(figsize=(13,4.8))\nplt.subplot(1,2,1)\nplt.plot(tvec,NumHosp)\nplt.plot(np.array((0, tmax)),AvailHospBeds*np.ones(2),color='C0',linestyle=\":\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number Per 1000 People\")\nplt.legend((\"Cases Needing Hospitalization\",\"Available Hospital Beds\"))\nipeakHosp=np.argmax(NumHosp) #find peak\npeakHosp=10*np.ceil(NumHosp[ipeakHosp]/10)#find time at peak\nplt.ylim([0,peakHosp])\n\nplt.subplot(1,2,2)\nplt.plot(tvec,NumICU,color='C1')\nplt.plot(np.array((0, tmax)),AvailICUBeds*np.ones(2),color='C1',linestyle=\":\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number Per 1000 People\")\nplt.legend((\"Cases Needing ICU\",\"Available ICU Beds\"))\nipeakICU=np.argmax(NumICU) #find peak\npeakICU=10*np.ceil(NumICU[ipeakICU]/10)#find time at peak\nplt.ylim([0,peakICU])\nplt.ylim([0,10])\n\n#Find time when hospitalized cases = capacity\nicross=np.argmin(np.abs(NumHosp[0:ipeakHosp]-AvailHospBeds)) #find intersection before peak\nTimeFillBeds=tvec[icross]\n\n#Find time when ICU cases = capacity\nicross=np.argmin(np.abs(NumICU[0:ipeakICU]-AvailICUBeds)) #find intersection before peak\nTimeFillICU=tvec[icross]\n\nplt.show()\nprint(\"Hospital and ICU beds are filled by COVID19 patients after {0:4.1f} and {1:4.1f} days\".format(TimeFillBeds,TimeFillICU))",
"_____no_output_____"
]
],
[
[
"Note that we have not taken into account the limited capacity in the model itself. If hospitals are at capacity, then the death rate will increase, since individuals with severe and critical infection will often die without medical care. The transmission rate will probably also increase, since any informal home-care for these patients will likely not include the level of isolation/precautions used in a hospital.",
"_____no_output_____"
],
[
"#### Allow for mechanical ventilation outside of ICUs using contingency or crisis capacity",
"_____no_output_____"
]
],
[
[
"plt.plot(tvec,NumICU)\nplt.plot(np.array((0, tmax)),ConvVentCap*np.ones(2),linestyle=\":\")\nplt.plot(np.array((0, tmax)),ContVentCap*np.ones(2),linestyle=\":\")\nplt.plot(np.array((0, tmax)),CrisisVentCap*np.ones(2),linestyle=\":\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number Per 1000 People\")\nplt.legend((\"Cases Needing Mechanical Ventilation\",\"Conventional Capacity\",\"Contingency Capacity\",\"Crisis Capacity\"))\nplt.ylim([0,peakICU])\nplt.ylim([0,10])\n\n#Find time when ICU cases = conventional capacity\nicrossConv=np.argmin(np.abs(NumICU[0:ipeakICU]-ConvVentCap)) #find intersection before peak\nTimeConvCap=tvec[icrossConv]\nicrossCont=np.argmin(np.abs(NumICU[0:ipeakICU]-ContVentCap)) #find intersection before peak\nTimeContCap=tvec[icrossCont]\nicrossCrisis=np.argmin(np.abs(NumICU[0:ipeakICU]-CrisisVentCap)) #find intersection before peak\nTimeCrisisCap=tvec[icrossCrisis]\n\nplt.show()\nprint(\"Capacity for mechanical ventilation is filled by COVID19 patients after {0:4.1f} (conventional), {1:4.1f} (contingency) and {2:4.1f} (crisis) days\".format(TimeConvCap,TimeContCap,TimeCrisisCap))",
"_____no_output_____"
]
],
[
[
"Compare to the case with intervention",
"_____no_output_____"
]
],
[
[
"NumHospSlow=solnSlow[:,3]+solnSlow[:,4]\nNumICUSlow=solnSlow[:,4]\n\nplt.figure(figsize=(13,4.8))\nplt.subplot(1,2,1)\nplt.plot(tvec,NumHosp)\nplt.plot(tvec,NumHospSlow,color='C0',linestyle=\"--\")\nplt.plot(np.array((0, tmax)),AvailHospBeds*np.ones(2),color='C0',linestyle=\":\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number Per 1000 People\")\nplt.legend((\"Cases Needing Hospitalization\",\"Cases Needing Hospitalization (Intervetion)\",\"Available Hospital Beds\"))\nplt.ylim([0,peakHosp])\n\nplt.subplot(1,2,2)\nplt.plot(tvec,NumICU,color='C1')\nplt.plot(tvec,NumICUSlow,color='C1',linestyle=\"--\")\nplt.plot(np.array((0, tmax)),AvailICUBeds*np.ones(2),color='C1',linestyle=\":\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number Per 1000 People\")\nplt.legend((\"Cases Needing ICU\",\"Cases Needing ICU (Intervetion)\",\"Available ICU Beds\"))\nplt.ylim([0,peakICU])\n\n#Find time when hospitalized cases = capacity\nipeakHospSlow=np.argmax(NumHospSlow) #find peak\nicross=np.argmin(np.abs(NumHospSlow[0:ipeakHospSlow]-AvailHospBeds)) #find intersection before peak\nTimeFillBedsSlow=tvec[icross]\n\n#Find time when ICU cases = capacity\nipeakICUSlow=np.argmax(NumICUSlow) #find peak\nicross=np.argmin(np.abs(NumICUSlow[0:ipeakICU]-AvailICUBeds)) #find intersection before peak\nTimeFillICUSlow=tvec[icross]\n\nplt.show()\nprint(\"With intervention, hospital and ICU beds are filled by COVID19 patients after {0:4.1f} and {1:4.1f} days\".format(TimeFillBedsSlow,TimeFillICUSlow))",
"_____no_output_____"
]
],
[
[
"And for expanded mechanical ventilation capacity",
"_____no_output_____"
]
],
[
[
"plt.plot(tvec,NumICU)\nplt.plot(tvec,NumICUSlow)\nplt.plot(np.array((0, tmax)),ConvVentCap*np.ones(2),linestyle=\":\")\nplt.plot(np.array((0, tmax)),ContVentCap*np.ones(2),linestyle=\":\")\nplt.plot(np.array((0, tmax)),CrisisVentCap*np.ones(2),linestyle=\":\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"Number Per 1000 People\")\nplt.legend((\"Cases Needing Mechanical Ventilation\",\"Cases Needing Mechanical Ventilation (Intervention)\",\"Conventional Capacity\",\"Contingency Capacity\",\"Crisis Capacity\"))\nplt.ylim([0,peakICU])\n\n#Find time when ICU cases = conventional capacity (with intervention)\nicrossConvSlow=np.argmin(np.abs(NumICUSlow[0:ipeakICUSlow]-ConvVentCap)) #find intersection before peak\nTimeConvCapSlow=tvec[icrossConvSlow]\nicrossContSlow=np.argmin(np.abs(NumICUSlow[0:ipeakICUSlow]-ContVentCap)) #find intersection before peak\nTimeContCapSlow=tvec[icrossContSlow]\nicrossCrisisSlow=np.argmin(np.abs(NumICUSlow[0:ipeakICUSlow]-CrisisVentCap)) #find intersection before peak\nTimeCrisisCapSlow=tvec[icrossCrisisSlow]\n\nplt.show()\nprint(\"Capacity for mechanical ventilation is filled by COVID19 patients after {0:4.1f} (conventional), {1:4.1f} (contingency) and {2:4.1f} (crisis) days\".format(TimeConvCapSlow,TimeContCapSlow,TimeCrisisCapSlow))",
"_____no_output_____"
]
],
[
[
"Interpretation: While interventions that reduce infectiousness do \"flatten the curve\", cases are still WAY over hospital capacity. There is no way to get anywhere close to staying under hospital bed capacity or mechanical ventilation capacity without reducing $R_0<1$. ",
"_____no_output_____"
],
[
"### Assumptions 2\n* All cases go to the hospital\n* All critical cases require ICU care and mechanical ventilation\n\nNote: No point running this scenario because it would be even more extreme than Assumption 1 (mild cases stayed home) and Assumption 1 already lead to rapid overlow of hospital resources",
"_____no_output_____"
],
[
"\n### Assumptions 3\n* Only severe or critical cases go to the hospital\n* All critical cases require ICU care and mechanical ventilation\n* When hospital capacity is exceed, individual dies\n\nNote: Could be used to simulate expected increases in death if capacity exceeded\n",
"_____no_output_____"
],
[
"## Alternative Models\n\nTo be continued, including\n* Assuming ~30% of cases are asymptomatic (as seen on Diamond Princess) (This would lead to a re-interpretation of the reported rates of severe and critical infection, so the prevalence of these stages would decrease)\n* A parallel instead of series model of disease course (because it is unclear if it is realistic that individuals who pass through the mild stage on the way to a severe state spend as long in the mild stage as individuals who never progress)\n* Including pre-symptomatic transmission (for about last ~2 days of exposed period, as estimated in some studies)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecbc4eaaee6fd3358d0562b86d88ccaa3bc72c02 | 9,691 | ipynb | Jupyter Notebook | eig4Ali.ipynb | rheiland/gui4Ali | 88f1b0a000382cce5193ad71609aad22ee3676cf | [
"BSD-3-Clause"
] | null | null | null | eig4Ali.ipynb | rheiland/gui4Ali | 88f1b0a000382cce5193ad71609aad22ee3676cf | [
"BSD-3-Clause"
] | null | null | null | eig4Ali.ipynb | rheiland/gui4Ali | 88f1b0a000382cce5193ad71609aad22ee3676cf | [
"BSD-3-Clause"
] | null | null | null | 32.962585 | 140 | 0.400371 | [
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import sys, os\nsys.path.insert(0, os.path.abspath('bin'))\nimport eig4Ali",
"_____no_output_____"
],
[
"eig4Ali.gui",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ecbc76b529a8c5380b4545f90a4d8410f0c49a0d | 64,180 | ipynb | Jupyter Notebook | Notebook/Section 1 - Introduction to PyTorch-v2.ipynb | PacktPublishing/Practical-Deep-Learning-with-PyTorch | 5507decaf76a73b3e2465f0fb0818d0b9c8dd83b | [
"MIT"
] | 17 | 2019-04-11T14:22:57.000Z | 2021-07-24T13:32:00.000Z | Notebook/Section 1 - Introduction to PyTorch-v2.ipynb | PacktPublishing/Practical-Deep-Learning-with-PyTorch | 5507decaf76a73b3e2465f0fb0818d0b9c8dd83b | [
"MIT"
] | null | null | null | Notebook/Section 1 - Introduction to PyTorch-v2.ipynb | PacktPublishing/Practical-Deep-Learning-with-PyTorch | 5507decaf76a73b3e2465f0fb0818d0b9c8dd83b | [
"MIT"
] | 14 | 2019-04-11T15:31:29.000Z | 2021-04-18T21:19:50.000Z | 20.049984 | 250 | 0.479931 | [
[
[
"<div class=\"alert alert-block alert-info\">\n<font size=\"6\"><b><center> Section 1</font></center>\n<br>\n<font size=\"6\"><b><center> Introduction to PyTorch </font></center>\n</div>",
"_____no_output_____"
],
[
"# Fundamental Building Blocks",
"_____no_output_____"
],
[
"* Tensor and Tensor Operations\n\n* PyTorch’s Tensor Libraries\n\n* Computational Graph\n\n* Gradient Computation\n\n* Linear Mapping\n\n* PyTorch’s non-linear activation functions:\n * Sigmoid, tanh, ReLU, Leaky ReLU\n\n* Loss Function\n\n* Optimization algorithms used in training deep learning models",
"_____no_output_____"
],
[
"# A 2-Layer Feed-Forward Neural Network Architecture",
"_____no_output_____"
],
[
"## Some notations on a simple feed-forward network",
"_____no_output_____"
],
[
"$\\bar{\\mathbf{X}} = \\{X_1, X_2, \\dots, X_K, 1 \\}$ is a $n \\times K$ matrix of $K$ input features from $n$ training examples\n\n$X_k$ $ (k = 1,\\dots,K) $ is a $n \\times 1$ vector of n examples corresponding to feature $k$\n\n$\\bar{\\mathbf{W}}_{Xh} = \\{w_1, w_2 \\dots, w_p \\}$\n\n$\\bar{\\mathbf{W}}_{Xh}$ of size $PK$ \n\nwhere $P$ is the number of units in the hidden layer 1 \n\nand K is the number of input features\n\n$\\mathbf{b}$ bias\n\n",
"_____no_output_____"
],
[
"## A Simple Neural Network Architeture",
"_____no_output_____"
],
[
"The input layer contains $d$ nodes that transmit the $d$ features $\\mathbf{X} = \\{x_1, \\dots, x_d, 1 \\}$ with edges of weights $\\mathbf{W} = \\{w_1, \\dots, w_d, b \\}$ to an output node.\n\nLinear function (or linear mapping of data): $\\mathbf{W} \\cdot \\mathbf{X} + b = b + \\sum_{i=1}^d w_i x_i $\n\n$ y = b + \\sum_{i=1}^d w_i x_i $ where $w$'s and $b$ are parameters to be learned",
"_____no_output_____"
],
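 [
  "As a small added sketch (not part of the original notes), the linear mapping $ y = b + \\sum_{i=1}^d w_i x_i $ can be written directly with tensor operations; the tensor names and sizes below are illustrative assumptions.\n\n```python\nimport torch\n\nd = 4                     # number of input features (illustrative)\nx = torch.rand(d)         # one example with d features\nw = torch.rand(d)         # one weight per feature\nb = torch.rand(1)         # bias\n\ny = b + torch.dot(w, x)   # linear mapping: b + sum_i w_i * x_i\nprint(y)\n```",
  "_____no_output_____"
 ],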
[
"# Tensor and Tensor Operations",
"_____no_output_____"
],
[
"There are many types of tensor operations, and we will not cover all of them in this introduction. We will focus on operations that can help us start developing deep learning models immediately.\n\nThe official documentation provides a comprehensive list: [pytorch.org](https://pytorch.org/docs/stable/torch.html#tensors)\n\n\n * Creation ops: functions for constructing a tensor, like ones and from_numpy \n \n * Indexing, slicing, joining, mutating ops: functions for changing the shape, stride or content a tensor, like transpose\n\n * Math ops: functions for manipulating the content of the tensor through computations\n\n * Pointwise ops: functions for obtaining a new tensor by applying a function to each element independently, like abs and cos\n\n * Reduction ops: functions for computing aggregate values by iterating through tensors, like mean, std and norm\n\n * Comparison ops: functions for evaluating numerical predicates over tensors, like equal and max\n\n * Spectral ops: functions for transforming in and operating in the frequency domain, like stft and hamming_window\n\n * Other operations: special functions operating on vectors, like cross, or matrices, like trace \n \n * BLAS and LAPACK operations: functions following the BLAS (Basic Linear Algebra Subprograms) specification for scalar, vector-vector, matrix-vector and matrix-matrix operations \n \n * Random sampling: functions for generating values by drawing randomly from probability distributions, like randn and normal\n\n * Serialization: functions for saving and loading tensors, like load and save\n\n * Parallelism: functions for controlling the number of threads for parallel CPU execution, like set_num_threads\n\n",
"_____no_output_____"
]
],
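[
 [
  "The sketch below (added for illustration) touches a few of these categories with one-line calls; it is not exhaustive, and the file name used for serialization is just an example.\n\n```python\nimport torch\n\nt = torch.tensor([[-1.0, 2.0], [3.0, -4.0]])\n\nt.abs()                       # pointwise op: absolute value of every element\nt.mean(), t.std()             # reduction ops: aggregate over all elements\nt.t()                         # shape op: transpose of a 2-D tensor\ntorch.eq(t, t)                # comparison op: elementwise equality\ntorch.save(t, 'tensor.pt')    # serialization: save the tensor to disk ...\nt2 = torch.load('tensor.pt')  # ... and load it back\n```",
  "_____no_output_____"
 ]
],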
[
[
"# Import torch module\nimport torch\ntorch.version.__version__",
"_____no_output_____"
]
],
[
[
"## Creating Tensors and Examining tensors",
"_____no_output_____"
],
[
"* `rand()`\n\n* `randn()`\n\n* `zeros()`\n\n* `ones()`\n\n* using a `Python list`",
"_____no_output_____"
],
[
"### Create a 1-D Tensor",
"_____no_output_____"
],
[
" - PyTorch provides methods to create random or zero-filled tensors\n - Use case: to initialize weights and bias for a NN model",
"_____no_output_____"
]
],
[
[
"import torch",
"_____no_output_____"
]
],
[
[
"`torch.rand()` returns a tensor of random numbers from a uniform [0,1) distribution\n \n[Source: Torch's random sampling](https://pytorch.org/docs/stable/torch.html#random-sampling)",
"_____no_output_____"
],
[
"Draw a sequence of 10 random numbers",
"_____no_output_____"
]
],
[
[
"x = torch.rand(10)",
"_____no_output_____"
],
[
"type(x)",
"_____no_output_____"
],
[
"x.size()",
"_____no_output_____"
],
[
"print(x.min(), x.max())",
"tensor(0.0871) tensor(0.9805)\n"
]
],
[
[
"Draw a matrix of size (10,3) random numbers",
"_____no_output_____"
]
],
[
[
"W = torch.rand(10,3)",
"_____no_output_____"
],
[
"type(W)",
"_____no_output_____"
],
[
"W.size()",
"_____no_output_____"
],
[
"W",
"_____no_output_____"
]
],
[
[
"Another common random sampling is to generate random number from the standard normal distribution",
"_____no_output_____"
],
[
"`torch.randn()` returns a tensor of random numbers from a standard normal distribution (i.e. a normal distribution with mean 0 and variance 1)\n\n[Source: Torch's random sampling](https://pytorch.org/docs/stable/torch.html#random-sampling)",
"_____no_output_____"
]
],
[
[
"W2 = torch.randn(10,3)",
"_____no_output_____"
],
[
"type(W2)",
"_____no_output_____"
],
[
"W2.dtype",
"_____no_output_____"
],
[
"W2.shape",
"_____no_output_____"
],
[
"W2",
"_____no_output_____"
]
],
[
[
"**Note: Though it looks like it is similar to a list of number objects, it is not. A tensor stores its data as unboxed numeric values, so they are not Python objects but C numeric types - 32-bit (4 bytes) float**",
"_____no_output_____"
],
[
"`torch.zeros()` can be used to initialize the `bias`",
"_____no_output_____"
]
],
[
[
"b = torch.zeros(10)",
"_____no_output_____"
],
[
"type(b)",
"_____no_output_____"
],
[
"b.shape",
"_____no_output_____"
],
[
"b",
"_____no_output_____"
]
],
[
[
"Likewise, `torch.ones()` can be used to create a tensor filled with 1",
"_____no_output_____"
]
],
[
[
"a = torch.ones(3)",
"_____no_output_____"
],
[
"type(a)",
"_____no_output_____"
],
[
"a.shape",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"A = torch.ones((3,3,3))",
"_____no_output_____"
],
[
"A",
"_____no_output_____"
]
],
[
[
"Convert a Python list to a tensor",
"_____no_output_____"
]
],
[
[
"A.shape",
"_____no_output_____"
],
[
"l = [1.0, 4.0, 2.0, 1.0, 3.0, 5.0]\ntorch.tensor(l)",
"_____no_output_____"
]
],
[
[
"Subsetting a tensor: extract the first 2 elements of a 1-D tensor",
"_____no_output_____"
]
],
[
[
"torch.tensor([1.0, 4.0, 2.0, 1.0, 3.0, 5.0])[:2]",
"_____no_output_____"
]
],
[
[
"### Create a 2-D Tensor",
"_____no_output_____"
]
],
[
[
"a = torch.ones(3,3)",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"a.size()",
"_____no_output_____"
],
[
"b = torch.ones(3,3)",
"_____no_output_____"
],
[
"type(b)",
"_____no_output_____"
]
],
[
[
"Simple addition",
"_____no_output_____"
]
],
[
[
"c = a + b",
"_____no_output_____"
],
[
"type(c)",
"_____no_output_____"
],
[
"c.type()",
"_____no_output_____"
],
[
"c.size()",
"_____no_output_____"
]
],
[
[
"Create a 2-D tensor by passing a list of lists to the constructor",
"_____no_output_____"
]
],
[
[
"d = torch.tensor([[1.0, 4.0], [2.0, 1.0], [3.0, 5.0]])",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"d.size()",
"_____no_output_____"
],
[
"# We will obtain the same result by using `shape`\nd.shape",
"_____no_output_____"
]
],
[
[
"$[3,2]$ indicates the size of the tensor along each of its 2 dimensions",
"_____no_output_____"
]
],
[
[
"# Using the 0th-dimension index to get the 1st dimension of the 2-D tensor. \n# Note that this is not a new tensor; this is just a different (partial) view of the original tensor\nd[0]",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"d.storage()",
"_____no_output_____"
],
[
"e = torch.tensor([[[1.0, 3.0],\n [5.0, 7.0]],\n [[2.0, 4.0],\n [6.0, 8.0]],\n ])",
"_____no_output_____"
],
[
"e.storage()",
"_____no_output_____"
],
[
"e.shape",
"_____no_output_____"
],
[
"e.storage_offset()",
"_____no_output_____"
],
[
"e.stride()",
"_____no_output_____"
],
[
"e.size()",
"_____no_output_____"
],
[
"inputs = torch.tensor([[1.0, 4.0], [2.0, 1.0], [3.0, 5.0]])",
"_____no_output_____"
],
[
"inputs",
"_____no_output_____"
],
[
"inputs.size()",
"_____no_output_____"
],
[
"inputs.stride()",
"_____no_output_____"
],
[
"inputs.storage()",
"_____no_output_____"
]
],
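[
 [
  "A short added note on what these calls report (a general sketch, not an exhaustive description): `storage()` shows the flat, contiguous block of numbers that backs the tensor; `storage_offset()` is the index in that block where the tensor's first element lives; and `stride()` tells how many storage elements must be skipped to move by one step along each dimension. For example, the freshly created `inputs` tensor of shape `(3, 2)` is stored row by row, so its stride is `(2, 1)`: moving down one row skips 2 elements in storage, while moving across one column skips 1.",
  "_____no_output_____"
 ]
],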
[
[
"## Subset a Tensor",
"_____no_output_____"
]
],
[
[
"inputs[2]",
"_____no_output_____"
],
[
"inputs[:2]",
"_____no_output_____"
],
[
"inputs[1:] # all rows but first, implicitly all columns",
"_____no_output_____"
],
[
"inputs[1:, :] # all rows but first, explicitly all columns",
"_____no_output_____"
],
[
"inputs[0,0]",
"_____no_output_____"
],
[
"inputs[0,1]",
"_____no_output_____"
],
[
"inputs[1,0]",
"_____no_output_____"
],
[
"inputs[0]",
"_____no_output_____"
]
],
[
[
"**Note the changing the `sub-tensor` extracted (instead of cloned) from the original will change the original tensor**",
"_____no_output_____"
]
],
[
[
"second_inputs = inputs[0]",
"_____no_output_____"
],
[
"second_inputs",
"_____no_output_____"
],
[
"second_inputs[0] = 100.0",
"_____no_output_____"
],
[
"inputs",
"_____no_output_____"
],
[
"inputs[0,0]",
"_____no_output_____"
]
],
[
[
"**If we don't want to change the original tensure when changing the `sub-tensor`, we will need to clone the sub-tensor from the original**",
"_____no_output_____"
]
],
[
[
"a = torch.tensor([[1.0, 4.0], [2.0, 1.0], [3.0, 5.0]])",
"_____no_output_____"
],
[
"b = a[0].clone()",
"_____no_output_____"
],
[
"b[0] = 100.0",
"_____no_output_____"
]
],
[
[
"## Transpose a Tensor",
"_____no_output_____"
],
[
"### Transposing a matrix",
"_____no_output_____"
]
],
[
[
"a",
"_____no_output_____"
],
[
"a_t = a.t()",
"_____no_output_____"
],
[
"a_t",
"_____no_output_____"
],
[
"a.storage()",
"_____no_output_____"
],
[
"a_t.storage()",
"_____no_output_____"
]
],
[
[
"**Transposing a tensor does not change its storage**",
"_____no_output_____"
]
],
[
[
"id(a.storage()) == id(a_t.storage())",
"_____no_output_____"
]
],
[
[
"### Transposing a Multi-Dimensional Array",
"_____no_output_____"
]
],
[
[
"A = torch.ones(3, 4, 5)",
"_____no_output_____"
],
[
"A",
"_____no_output_____"
]
],
[
[
"To transpose a multi-dimensional array, the dimension along which the tanspose is performed needs to be specified",
"_____no_output_____"
]
],
[
[
"A_t = A.transpose(0,2)",
"_____no_output_____"
],
[
"A.size()",
"_____no_output_____"
],
[
"A_t.size()",
"_____no_output_____"
],
[
"A.stride()",
"_____no_output_____"
],
[
"A_t.stride()",
"_____no_output_____"
]
],
[
[
"### NumPy Interoperability",
"_____no_output_____"
]
],
[
[
"x = torch.ones(3,3)",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"x_np = x.numpy()",
"_____no_output_____"
],
[
"x.dtype",
"_____no_output_____"
],
[
"x_np.dtype",
"_____no_output_____"
],
[
"x2 = torch.from_numpy(x_np)\nx2.dtype",
"_____no_output_____"
]
],
[
[
"### Tensors on GPU",
"_____no_output_____"
],
[
"We will discuss more about this in the last section of the course",
"_____no_output_____"
],
[
"```python\n matrix_gpu = torch.tensor([[1.0, 4.0], [2.0, 1.0], [3.0, 4.0]], device='cuda')\n # transfer a tensor created on the CPU onto GPU using the to method \n x2_gpu = x2.to(device='cuda') \n \n points_gpu = points.to(device='cuda:0') \n```\n\nCPU vs. GPU Performance Comparison \n```python\na = torch.rand(10000,10000)\nb = torch.rand(10000,10000)\na.matmul(b)\n\n#Move the tensors to GPU\na = a.cuda()\nb = b.cuda()\na.matmul(b)\n\n```",
"_____no_output_____"
],
[
"# Gradient Computation",
"_____no_output_____"
],
[
"Partial derivative of a function of several variables:\n\n$$ \\frac{\\partial f(x_1, x_2, \\dots, x_p)}{\\partial x_i} |_{\\text{other variables constant}}$$",
"_____no_output_____"
],
[
"* `torch.Tensor`\n\n* `torch.autograd` is an engine for computing vector-Jacobian product\n\n* `.requires_grad`\n\n* `.backward()`\n\n* `.grad`\n\n* `.detach()`\n\n* `with torch.no_grad()`\n\n* `Function`\n\n* `Tensor` and `Function` are connected and build up an acyclic graph, that encodes a complete history of computation.",
"_____no_output_____"
],
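 [
  "The examples that follow use `.requires_grad`, `.backward()` and `.grad`. As a small added sketch (not part of the original examples), `.detach()` and `torch.no_grad()` are the two common ways to stop autograd from tracking operations:\n\n```python\nimport torch\n\nx = torch.ones(3, requires_grad=True)\n\ny = x.detach()           # same values as x, but cut off from the autograd graph\nprint(y.requires_grad)   # False\n\nwith torch.no_grad():    # operations inside this block are not tracked\n    z = x * 2\nprint(z.requires_grad)   # False\n```",
  "_____no_output_____"
 ],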
[
"Let's look at a couple of examples:",
"_____no_output_____"
],
[
"Example 1",
"_____no_output_____"
],
[
"1. Create a variable and set `.requires_grad` to True",
"_____no_output_____"
]
],
[
[
"import torch\nx = torch.ones(5,requires_grad=True)",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"x.type",
"_____no_output_____"
],
[
"x.grad",
"_____no_output_____"
]
],
[
[
"Note that at this point, `x.grad` does not output anything because there is no operation performed on the tensor `x` yet. However, let's create another tensor `y` by performing a few operations (i.e. taking the mean) on the original tensor `x`.",
"_____no_output_____"
]
],
[
[
"y = x + 2\nz = y.mean()",
"_____no_output_____"
],
[
"z.type",
"_____no_output_____"
],
[
"z",
"_____no_output_____"
],
[
"z.backward()\nx.grad",
"_____no_output_____"
],
[
"x.grad_fn",
"_____no_output_____"
],
[
"x.data",
"_____no_output_____"
],
[
"y.grad_fn",
"_____no_output_____"
],
[
"z.grad_fn",
"_____no_output_____"
]
],
[
[
"Example 2",
"_____no_output_____"
]
],
[
[
"x = torch.ones(2, 2, requires_grad=True)\ny = x + 5\nz = 2 * y * y # 2*(x+5)^2\nh = z.mean()",
"_____no_output_____"
],
[
"z",
"_____no_output_____"
],
[
"z.shape",
"_____no_output_____"
],
[
"h.shape",
"_____no_output_____"
],
[
"h",
"_____no_output_____"
],
[
"h.backward()",
"_____no_output_____"
],
[
"print(x.grad)",
"tensor([[6., 6.],\n [6., 6.]])\n"
]
],
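[
 [
  "As a quick added check, the gradient of 6 in Example 2 can be derived by hand: $h = \\frac{1}{4}\\sum_{i,j} 2(x_{ij}+5)^2$, so\n\n$$\\frac{\\partial h}{\\partial x_{ij}} = \\frac{1}{4}\\cdot 4(x_{ij}+5) = x_{ij}+5 = 6 \\quad \\text{when } x_{ij}=1,$$\n\nwhich matches the `x.grad` output above.",
  "_____no_output_____"
 ]
],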
[
[
"# Lab 1",
"_____no_output_____"
]
],
[
[
"# Create a tensor of 20 random numbers from the uniform [0,1) distribution\n# YOUR CODE HERE (1 line)\nimport torch\nz = torch.rand(20)",
"_____no_output_____"
],
[
"# What is the mean of these numbers?\nimport numpy as np\n# YOUR CODE HERE (1 line)\nnp.mean(x.numpy())",
"_____no_output_____"
],
[
"# Create a tensor of 5 zeros\n# YOUR CODE HERE (1 line)\nb = torch.zeros(5)",
"_____no_output_____"
],
[
"# Create a tensor of 5 ones\n# YOUR CODE HERE (1 line)\na = torch.ones(5)",
"_____no_output_____"
],
[
"# Given the follow tensor, subset the first 2 rows and first 2 columns of this tensor.\nA = torch.rand(4,4)\n# YOUR CODE HERE (1 line)\nA[:2,:2]",
"_____no_output_____"
],
[
"# What is the shape of the following tensor?\nX = torch.randint(0, 10, (2, 5, 5))\n# YOUR CODE HERE (1 line)\nX.shape",
"_____no_output_____"
],
[
"# Consider the following tensor.\n# What are the gradients after the operation?\n\np = torch.ones(10, requires_grad=True) \nq = p + 2\nr = q.mean()\n\n# YOUR CODE HERE (2 lines)",
"_____no_output_____"
],
[
"r.backward()\np.grad",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbc82bc84dfb7791d8e090ff4937d090dd94977 | 69,366 | ipynb | Jupyter Notebook | _notebooks/2020-01-06-Movies.ipynb | ashudva/fastpage_testing | 8294305fa9dfa60ac8d4eea3acdcf0936a6ef690 | [
"Apache-2.0"
] | null | null | null | _notebooks/2020-01-06-Movies.ipynb | ashudva/fastpage_testing | 8294305fa9dfa60ac8d4eea3acdcf0936a6ef690 | [
"Apache-2.0"
] | 2 | 2021-01-10T09:49:19.000Z | 2021-09-28T05:42:53.000Z | _notebooks/2020-01-06-Movies.ipynb | ashudva/fastpage_testing | 8294305fa9dfa60ac8d4eea3acdcf0936a6ef690 | [
"Apache-2.0"
] | null | null | null | 76.310231 | 36,516 | 0.715206 | [
[
[
"# \"Using Regression for Revenue Prediction of movies\"\n> \"Using TMDb dataset we'll try to predict the revenue of a movie based on the characteristics of the movie, and predict whether a movie's revenue will exceed its budget or not\"\n\n- toc: false\n- branch: master\n- badges: true\n- comments: true\n- categories: [fastpages, jupyter]\n- image: images/vignette/movies.png\n- hide: false\n- search_exclude: true",
"_____no_output_____"
],
[
"Throughout the case study/analysis we'll be using the following libraries:\n\n| Library | Purpose |\n| ----------- | ----------- |\n| `sklearn` | Modelling |\n| `matplotlib`, `bokeh` | Visualization |\n| `numpy`, `pandas` | Data Manipulation |",
"_____no_output_____"
],
[
"In this case study I am going to do several things first, I want to **predict the revenue** of a movie based on the characteristics of the movie, second I want to **predict whether a movie's revenue will exceed its budget or not**.",
"_____no_output_____"
],
[
"# Drudgery: import and take a look at the data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\nfrom sklearn.model_selection import cross_val_predict\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import r2_score\n\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.style.use(\"ggplot\")\n%matplotlib inline\n\ndf = pd.read_csv(\"data/processed_data.csv\", index_col=0)\n\ndf.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 4803 entries, 0 to 4802\nData columns (total 22 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 budget 4803 non-null int64 \n 1 genres 4775 non-null object \n 2 homepage 1712 non-null object \n 3 id 4803 non-null int64 \n 4 keywords 4391 non-null object \n 5 original_language 4803 non-null object \n 6 original_title 4803 non-null object \n 7 overview 4800 non-null object \n 8 popularity 4803 non-null float64\n 9 production_companies 4452 non-null object \n 10 production_countries 4629 non-null object \n 11 release_date 4802 non-null object \n 12 revenue 4803 non-null int64 \n 13 runtime 4801 non-null float64\n 14 spoken_languages 4716 non-null object \n 15 status 4803 non-null object \n 16 tagline 3959 non-null object \n 17 title 4803 non-null object \n 18 vote_average 4803 non-null float64\n 19 vote_count 4803 non-null int64 \n 20 movie_id 4803 non-null int64 \n 21 cast 4760 non-null object \ndtypes: float64(3), int64(5), object(14)\nmemory usage: 863.0+ KB\n"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"# Data Preprocessing\nOur Second step would be to clean and transform the data so that we could apply Regression or Classification algorithm on the data.",
"_____no_output_____"
],
[
"## Defining Regression and Classification Outcomes\nFor regression we'll be using `revenue` as the target for outcomes, and for classification we'll construct an indicator of profitability for each movie. Let's define new column `profitable` such that:\n$$\nprofitable = 1\\ \\ if\\ revenue > budget,\\ 0\\ \\ otherwise\n$$",
"_____no_output_____"
]
],
[
[
"df['profitable'] = df.revenue > df.budget\ndf['profitable'] = df['profitable'].astype(int)\n\nregression_target = 'revenue'\nclassification_target = 'profitable'\n\ndf['profitable'].value_counts()",
"_____no_output_____"
]
],
[
[
"2585 out of all movies in the dataset were profitable\n## Handling missing and infinite values\nLooking at the data we can easily guess that many of the columns are non-numeric and using a technique other than ommiting the columns might be a bit overhead. So I'm going to stick with plane and simple technique of ommiting the column with missing or infinite values.",
"_____no_output_____"
],
[
"1. Replace any `np.inf` or `-np.inf` occuring in the dataset with np.nan",
"_____no_output_____"
]
],
[
[
"df = df.replace([np.inf, -np.inf], np.nan)\nprint(df.shape)\ndf.info()",
"(4803, 23)\n<class 'pandas.core.frame.DataFrame'>\nInt64Index: 4803 entries, 0 to 4802\nData columns (total 23 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 budget 4803 non-null int64 \n 1 genres 4775 non-null object \n 2 homepage 1712 non-null object \n 3 id 4803 non-null int64 \n 4 keywords 4391 non-null object \n 5 original_language 4803 non-null object \n 6 original_title 4803 non-null object \n 7 overview 4800 non-null object \n 8 popularity 4803 non-null float64\n 9 production_companies 4452 non-null object \n 10 production_countries 4629 non-null object \n 11 release_date 4802 non-null object \n 12 revenue 4803 non-null int64 \n 13 runtime 4801 non-null float64\n 14 spoken_languages 4716 non-null object \n 15 status 4803 non-null object \n 16 tagline 3959 non-null object \n 17 title 4803 non-null object \n 18 vote_average 4803 non-null float64\n 19 vote_count 4803 non-null int64 \n 20 movie_id 4803 non-null int64 \n 21 cast 4760 non-null object \n 22 profitable 4803 non-null int32 \ndtypes: float64(3), int32(1), int64(5), object(14)\nmemory usage: 881.8+ KB\n"
]
],
[
[
"Notice that `homepage` column accounts for maximun `null` or minimun `non-null` values in the dataset, and we can discard it as a feature for more data.\n\n2. Drop any column with `na` and drop `homepage` column",
"_____no_output_____"
]
],
[
[
"df.drop('homepage', axis=1, inplace=True)\ndf = df.dropna(how=\"any\")\ndf.shape",
"_____no_output_____"
]
],
[
[
"## Transform `genre` column using `OneHotEncoding`",
"_____no_output_____"
],
[
"Since `genres` column consists of strings with comma separated genres e.g. `\"Action, Adventure, Fantasy\"` as a value for a particular movie, I'll convert string to list, then extract all unique genres in the list and finally add a column for each unique genre. Value of a specific genre will be `0` if it is present in `genres` otherwise `0`. ",
"_____no_output_____"
]
],
[
[
"list_genres = df.genres.apply(lambda x: x.split(\",\"))\ngenres = []\nfor row in list_genres:\n row = [genre.strip() for genre in row]\n for genre in row:\n if genre not in genres:\n genres.append(genre)\n\nfor genre in genres:\n df[genre] = df['genres'].str.contains(genre).astype(int)\n\ndf[genres].head()",
"_____no_output_____"
]
],
[
[
"## Extract numerical variables\nMany of the variables in the dataset are already numerical which will be useful in regression, we'll be extracting these variables and we'll also calculate `skew` of the continuous variables, and `plot` these variables.",
"_____no_output_____"
]
],
[
[
"continuous_covariates = ['budget', 'popularity',\n 'runtime', 'vote_count', 'vote_average']\noutcomes_and_continuous_covariates = continuous_covariates + \\\n [regression_target, classification_target]\nplotting_variables = ['budget', 'popularity', regression_target]\n\naxes = pd.plotting.scatter_matrix(df[plotting_variables], alpha=0.15,\n color=(0, 0, 0), hist_kwds={\"color\": (0, 0, 0)}, facecolor=(1, 0, 0))\nplt.show()",
"_____no_output_____"
],
[
"df[outcomes_and_continuous_covariates].skew()",
"_____no_output_____"
]
],
[
[
"Since **\"Linear algorithms love normally distributed data\"**, and several of the variables `budget, popularity, runtime, vote_count, revenue` are right skewed. So now we'll remove skewness from these variables using `np.log10` to make it symmetric. But first we need to add very small positive number to all the columns as some of values are `0` and `log10(0) = -inf`.\n{% include info.html text=\"We are not actually removing skewness from the data instead we're only appliying a non-linear transformation on the variables to make it symmetric. If you transform skewed data to make it symmetric, and then fit it to a symmetric distribution (e.g., the normal distribution) that is implicitly the same as just fitting the raw data to a skewed distribution in the first place.\" %}",
"_____no_output_____"
]
],
[
[
"for covariate in ['budget', 'popularity', 'runtime', 'vote_count', 'revenue']:\n df[covariate] = df[covariate].apply(lambda x: np.log10(1+x))\n \ndf[outcomes_and_continuous_covariates].skew()",
"_____no_output_____"
]
],
[
[
"### Save this dataframe separately for modelling",
"_____no_output_____"
]
],
[
[
"df.to_csv(\"data/movies_clean.csv\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbc9a2f0fa3af0b16827223528cc52041136ca1 | 134,063 | ipynb | Jupyter Notebook | Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/Introduction-to-data-visualization-and-graphs-with-matplotlib.ipynb | minhtumn/DigitalHistory | ac2dfa95a2eb294fcd2be0140343e70d3ad632a4 | [
"MIT"
] | null | null | null | Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/Introduction-to-data-visualization-and-graphs-with-matplotlib.ipynb | minhtumn/DigitalHistory | ac2dfa95a2eb294fcd2be0140343e70d3ad632a4 | [
"MIT"
] | null | null | null | Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/Introduction-to-data-visualization-and-graphs-with-matplotlib.ipynb | minhtumn/DigitalHistory | ac2dfa95a2eb294fcd2be0140343e70d3ad632a4 | [
"MIT"
] | null | null | null | 45.108681 | 36,574 | 0.631591 | [
[
[
"<a href=\"https://colab.research.google.com/github/bitprj/DigitalHistory/blob/master/Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/Introduction-to-data-visualization-and-graphs-with-matplotlib.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"<img src=\"https://raw.githubusercontent.com/bitprj/DigitalHistory/master/Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/assets/icons/bitproject.png?raw=1\" width=\"200\" align=\"left\"> \n\n<img src=\"https://raw.githubusercontent.com/bitprj/DigitalHistory/master/Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/assets/icons/data-science.jpg?raw=1\" width=\"300\" align=\"right\">",
"_____no_output_____"
],
[
"# <div align=\"center\">Introduction to Data Visualization </div>",
"_____no_output_____"
],
[
"# Table of Contents\n\n",
"_____no_output_____"
],
[
"\n- What is Data Visualization.\n- Why should we use data visualization\n- About the datasets\n - Stock Market Index\n - California Housing \n- Goals\n - Matplotlib\n - Functional Method\n - Object Oriented Method\n - Pandas\n - DataFrame\n - Series\n- Let's get started\n - Loading Files\n - Working with Time-Series\n - Figures and Subplots\n - The ```plt.plot``` function\n - **1.0 - Now Try This**\n - Colors, Markers, and Line Styles\n - **2.0 - Now Try This**\n - Ticks, Labels and Legends\n - **3.0 - Now Try This**\n - Adding Legends\n - **4.0 - Now Try This**\n - Saving plots to file.\n- Types of Graphs and Charts\n - Bar Charts\n - **5.0 - Now Try This**\n - Histogram\n - **6.0 - Now Try This**\n - Line Charts\n - Scatter plots\n - **7.0 - Now Try This**\n- Tutorial - Analyzing the California Housing Dataset\n - About the Dataset\n - Import Libraries and unpack file\n - Visualizing the DataFrames\n - Mapping Geographical Data\n- References\n- Appendix",
"_____no_output_____"
],
[
"## Recap:\n\nBy this point through our journey you have learned the following:\n- Loading and editing google co lab notebooks.\n- Learning and using basic ```Python``` and it's *mathy* library ```NumPy```\n- Use the ```Pandas``` library to load simple ```.csv``` datasets and run simple tasks such as finding values using ```dataFrame.loc``` or ```dataFrame.iloc```, cleaning our dataset by removing null values using ```df.dropna()```\n \nAll three are crucial to the foundation we are building. You have probably seen data visualizations all around you and have probably used it for work or presentation. Now, you will be able to visualize datasets that have more information and columns.",
"_____no_output_____"
],
[
"<img src=\"https://raw.githubusercontent.com/bitprj/DigitalHistory/master/Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/assets/data-visualization.png?raw=1\" width=\"1500\" height = \"500\" align=\"center\">",
"_____no_output_____"
],
[
"# Data Visualization",
"_____no_output_____"
],
[
"## What is Data Visualization \n\nData visualization is where a given dataset is presented in a graphical format. It helps us in detecting patterns, trends and correlations that usually go undetected in text-based data. In simple terms, We primarily use data visualization:\n- To *explore* data\n- To *communicate* data",
"_____no_output_____"
],
[
"## Why should we use data Visualization\nAs the world becomes more connected due to an increasing number of electronic devices, the volume of data will also continue to grow at an unprecedented rate. Data visualizations make big and small data easier for the human brain to understand, and visualization also makes it easier to detect patterns, trends, and outliers in groups of data.\n\nData visualization is truly important for any career; from teachers trying to make sense of student test results to computer scientists trying to develop the next big thing in artificial intelligence, it’s hard to imagine a field where people don’t need to better understand data.",
"_____no_output_____"
],
[
"## How should we use Data Visualization\n\nGood data visualizations should place meaning into complicated datasets so that their message is clear and concise.\n\nWhether you use a basic bar graph or an intricate infographic, data visualization makes large amounts of numbers and statistics accessible to both business holders and their audience.\n\nIt can (and should) be used to influence your decision making, and it’s especially important when trying to analyze and implement strategies to improve. For example, a websit's data can be easily and efficiently visualized using charts and graphs instead of skimming through the cluttered data. Visualization makes it easy to ensure your website is as optimized as possible.\n\nVisualization can help you begin to ask the right questions, and it makes the data more memorable for stakeholders, researchers etc.\n",
"_____no_output_____"
],
[
"# About the Datasets\n",
"_____no_output_____"
],
[
"### Stock Market Index\n<img src=\"https://raw.githubusercontent.com/bitprj/DigitalHistory/master/Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/assets/wallstreet.png?raw=1\" width=\"300\" align=\"right\">\n\n\nThis is a simple introductory dataset. The dataset contains a shape of 5473 rows and 10 columns, the first row is the header row, therefore the shape of our dataset is (5472,10). The first column is the datetime ranging from 1990 to 2012. The remaining 9 columns are numerical values that indicate the stock prices of companies such as:\n- AA \n- AAPL \n- GE \n- IBM \n- JNJ \n- MSFT \n- PEP \n- SPX \n- XOM\n",
"_____no_output_____"
],
[
"### California Housing Information <a name=\"CaliforniaHousing\"></a>\n<img src=\"https://raw.githubusercontent.com/bitprj/DigitalHistory/master/Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/assets/california.png?raw=1\" width=\"300\" align=\"right\">\nThis dataset serves as an excellent introduction to visualizing simple numerical data. The data contains information from the 1990 California census.\n\nThe following is the data methodology described in the paper where this dataset was published.\n\n**Content**\nThe data pertains to the houses found in a given California district and some summary stats about them based on the 1990 census data. The columns are as follows, their names are self explanatory:\n\n- longitude\n- latitude\n- housing median age\n- total_rooms\n- total_bedrooms\n- population\n- households\n- median_income\n- median house value\n- ocean_proximity",
"_____no_output_____"
],
[
"# Goals:\n\n",
"_____no_output_____"
],
[
"- Building a plot step by step using matplotlib.\n- Loading datasets using pandas and visualizing selected columns.\n- Using the stock-market dataset to visualize trends in data from 1992-2016 for 6 listed companies.\n- Breaking down the ```matplotlib``` function ```pyplot.plot```\n- Introduction to Pandas\n - Figures and subplots\n - 1.0 - **Now Try This**\n - Colors, Markers, and Line Styles\n - 2.0 - **Now Try This**\n - Ticks, Labels and Legends\n - 3.0 - **Now Try This**\n - 4.0 - **Now Try This**\n- Plotting simple relational graphs such as:\n - Bar plots\n - 5.0 - **Now Try This**\n - Histograms\n - 6.0 - **Now Try This**\n - Line plots\n - Scatter plots\n - 7.0 - **Now Try This**\n- Using our tutorial to map longitude and latitude data.",
"_____no_output_____"
],
[
"# Grading\n\nIn order to work on the NTT sections and submit them for grading, you'll need to run the code block below. It will ask for your student ID number and then create a folder that will house your answers for each question. At the very end of the notebook, there is a code section that will download this folder as a zip file to your computer. This zip file will be your final submission.",
"_____no_output_____"
]
],
[
[
"import os\nimport shutil\n\n!rm -rf sample_data\n\nstudent_id = input('Please Enter your Student ID: ') # Enter Student ID.\n\nwhile len(student_id) != 9:\n student_id = int('Please Enter your Student ID: ') \n \nfolder_location = f'{student_id}/Week_Four/Now_Try_This'\nif not os.path.exists(folder_location):\n os.makedirs(folder_location)\n print('Successfully Created Directory, Lets get started')\nelse:\n print('Directory Already Exists')",
"_____no_output_____"
]
],
[
[
"<img src=\"ttps://raw.githubusercontent.com/bitprj/DigitalHistory/master/Week4-Introduction-to-data-visualization-and-graphs-with-matplotlib/assets/matplotlib-logo.png?raw=1\" width=\"400\" align=\"right\">\n\n# Matplotlib \n \n\n",
"_____no_output_____"
],
[
"The matplotlib library is a Python 2-Dimensional plotting (x and y) library which allows us to generate:\n- Line plots\n- Scatter plots\n- Histograms\n- Barplots\n\nIn our case, we will be using a specific set of functions in ```matplotlib.pyplot```.\n\nyou can find more in depth information about matplotlib [here.](https://matplotlib.org/)\n\nNote: Matplotlib like pandas is not a core part of the Python Library, therefore we have to download it by using:\n\n```python -m pip install matplotlib```\n\nLuckily, in google colab most of the python libraries are added already, so you won't have to worry about it.\n\nWe will be using ```matplotlib.pyplot``` class. ```pyplot``` maintains an internal state in which we can build visualization step by step.",
"_____no_output_____"
]
],
[
[
"from matplotlib import pyplot as plt",
"_____no_output_____"
]
],
[
[
"is the same as",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"With matplotlib there are two methods of plotting:\n\n### Functional Method \nUsing the basic matplotlib command, we can easily create a plot. Remember, if no plot was displayed or if you’re using Matplotlib from within a python script, don’t forget to add plt.show() at the last line to display your plot.\n\n**Steps In**\n1. **plt.plot(x,y,style)** Plot y versus x as lines and/or markers.\n2. **plt.xlabel(“Your Text”)** Set the x-axis label of the current axes.\n3. **plt.ylabel(“Your Text”)** Set the y-axis label of the current axes.\n4. **plt.set_title(“Your Title”)** Set a title of the current axes.\n5. **plt.show()** Display a figure.\n\n### Object-Oriented Method\nThe object-oriented method offers another way to create a plot. The idea here is to create figure objects and call methods of it. To create a figure, we use the .figure() method. Once you created a figure, you need to add a set of axes to it using the ```.add_axes()``` method.\n\n**Steps In**\n1. **fig = plt.figure()** Creates a new figure.\n2. **axes = fig.add_axes([left,bottom,width,height])** Adds an axes at position [left, bottom, width, height] where all quantities are in fractions of figure width and height\n3. **axes.plot(x,y)** Plot x versus y as lines and/or markers.\n4. **axes.set_xlabel(“Your Text”)** Set the label for the x-axis.\n5. **axes.set_ylabel(“Your Text”)** Set the label for the x-axis.\n6. **axes.set_title(“Your Title”)** Set the title of the current axes.\n\n\nThe code might not make sense right now, but throughout this notebook, we will be going through these methods step by step. For now, The good thing about these two methods is that the output is the same, these are just two different implementations. Therefore, we can use them side by side with a program.",
"_____no_output_____"
],
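 [
  "As a minimal added sketch (illustrative data, not from the original lesson), both methods produce the same simple line plot:\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 5, 11)\ny = x ** 2\n\n# Functional method\nplt.plot(x, y)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.title('Functional Method')\nplt.show()\n\n# Object-oriented method\nfig = plt.figure()\naxes = fig.add_axes([0.1, 0.1, 0.8, 0.8])  # [left, bottom, width, height]\naxes.plot(x, y)\naxes.set_xlabel('x')\naxes.set_ylabel('y')\naxes.set_title('Object-Oriented Method')\nplt.show()\n```",
  "_____no_output_____"
 ],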
[
"# Pandas",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"Matplotlib is fairly a basic tool with limited features. This is because its main purpose is to draw simple plots, involving numerical data, e.g:\n- Type of plot\n - line\n - bar\n - box\n - scatter\n - contour etc,\n- legend\n- title\n- tick labels\n\n\nBy now you must have noticed that we've repeated the names of the types of plots at least three times. This is because these are the most fundamental concepts to understand and most complex graphs such as 3D are built on top of these.\n\nWith pandas, we have multiple columns of data with row and column labels; pandas has built-in methods that simplify creating visualizations from DataFrames and Series Objects. It does this by using features from matplotlib and adding new methods on top of them.\n\nAnother library is ```seaborn``` which is a statistical graphics library.\nWe will be using seaborn in the latter half of our course.\n\nOne of the biggest benefits of using pandas is the method of loading data files (in our case ```pd.read_csv```) as a dataframe. This makes our work efficient and saves us more time to play with the dataset.",
"_____no_output_____"
],
[
"### DataFrame",
"_____no_output_____"
],
[
" Data frame is a two-dimensional data structure, i.e., data is aligned in a tabular fashion in rows and columns.\nFeatures of DataFrame\n\n- Potentially columns are of different types\n- Size – Mutable\n- Labeled axes (rows and columns)\n- Can Perform Arithmetic operations on rows and columns\n",
"_____no_output_____"
],
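[
"For example, a tiny DataFrame can be built by hand from a dictionary (the names and numbers here are invented purely for illustration):\n\n```\nimport pandas as pd\n\nscores = pd.DataFrame({'student': ['Ana', 'Ben', 'Cal'],\n                       'grade': [88, 92, 79]})\nprint(scores)\n```\n\nIn practice, though, we rarely type data in by hand; we load it from a file, as shown next.",
"_____no_output_____"
],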
[
"### Load File",
"_____no_output_____"
]
],
[
[
"url = 'https://tinyurl.com/y64sugg4'\n\ndf = pd.read_csv(url)\ndf.plot()",
"_____no_output_____"
]
],
[
[
"- The first line of code is a defined variable 'url' which is the link to our dataset.\n- The Second line of code loads our dataset as a csv file:\n- ```df``` is what we have defined our data frame as (short for DataFrame). ```pd.read_csv``` is the built in function in pandas which helps us load csv files specifically. Inside the bracket is the path to our .csv file. We have simply mentioned the variable url in the brackets.\n- On the Third line, by calling ```df.plot``` pandas immediately plots all the columns in the dataset. \n\n*Note: This graph is not correct, although it gives us a complete plot, the ```x-axis``` doesn't make sense*.",
"_____no_output_____"
],
[
"### Series",
"_____no_output_____"
],
[
"A Series has the same structure as a DataFrame, the main difference is that instead of multiple there is only a single column. So if we were to plot columns individually, we would be plotting series.",
"_____no_output_____"
]
],
[
[
"s = pd.Series(df['AA'])\ns.plot()",
"_____no_output_____"
]
],
[
[
"In this plot above, here are the important points we should observe:\n- The Series object's index is passed to matplotlib for plotting on the x-axis. In our case our object is ```df['AA']``` which is the column name ```AA``` in our dataframe ```df```\n- The x-axis and y-axis properties can be modified by using ```xticks```, ```xlim```, ```yticks``` and ```ylim```\n\n*Note: Same as the previous plot, the ```x-axis``` plot is not correct.*",
"_____no_output_____"
],
[
"## Working with Time-Series",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
]
],
[
[
"This is the dataset we loaded previously but there is something strange about it. We have an ```index``` column at the far left of the table, however, right next to it we have a date and time column which hasn't been named and has the column value 0.\n\nWhat this means is that the column doesn't have a name assigned to it. What happened is the function ```read_csv``` automatically assigned the column value 0 in order to mark it. The column with index value 0 can be the index column *(The possible reason why it wasn't given a name)*. Therefore we can get rid of the left most header-less column, which is the current index.\n\nLuckily, this can be easily fixed when we load our dataset using ```pd.read_csv```. This time we will be making two changes:\n1. Add a parameter ```index_col``` and equate it to ```0```. This signifies that we want to assign the index to the column with id 0. \n2. Set ```parse_dates``` = ```True```. \n\nNote: Sometimes you might want to set a different column as the index and it could be a string instead of an integer. In a case like that you can simply do that by setting the ```index_col``` parameter to ```\"Date\"```\n\nThis is important because the dataframe doesn't know that the index column is a time series column. \nCurrently it's just a string without any datetime properties. Fortunately, this function converts the strings to a datetime category.\n\n\n",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(url,index_col = 0, parse_dates= True)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"Observing the new dataframe, we can see the changes in the left-most column. The datetime has become our index column. \n\n*Remember, an index column doesn't necessarily have to be the date, but it is always better since datetime datasets usually point towards predicting and finding underlying patterns with respect to the change in time periods.*",
"_____no_output_____"
]
],
[
[
"df.dtypes",
"_____no_output_____"
]
],
[
[
"as expected all the values are ```floats``` i.e decimals.",
"_____no_output_____"
],
[
"## Figures and Subplots\n\n",
"_____no_output_____"
],
[
"Now we know how to make a simple plot for a DataFrame and a Series (a column from our dataset). However, in most cases, it is more useful to plot multiple plots side by side. This is done by using the ```plt.figure()``` function. \n\n**One a side note:** *Remember ```plt``` is actually ```matplotlib.pyplot```, so we are calling the ```figure()``` method, which is located in the ```pyplot``` class in the ```matplotlib``` package.\nIn short this order is helpful to understand ```package.class.method(paramters)```.*\n\n\n\nSince plt is a class, we can simply add the methods to the object by using the ```class.method()``` notation.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure()",
"_____no_output_____"
]
],
[
[
"\nLet's dive into ```plt.figure```, it has several parameters, in our use cases```figsize()``` is important. This will guarantee the figure has a certain size and aspect ratio if saved to disk.\n\n**Note**: In Co-lab / Jupyter, nothing will be shown until a few more commands are entered. We cannot make a plot with a blank figure. Therefore we will create one or more subplots using ```add_subplot```.",
"_____no_output_____"
]
],
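[
[
"As a quick aside, ```figsize``` takes a ```(width, height)``` pair in inches. A minimal sketch (the numbers are arbitrary):\n\n```\nfig = plt.figure(figsize=(8, 4))   # 8 inches wide, 4 inches tall\n```\n\nWe will rely on this parameter repeatedly later in the notebook, usually as ```figsize=(20,10)```. Now, back to adding subplots to our empty figure.",
"_____no_output_____"
]
],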
[
[
"ax1 = fig.add_subplot(2,2,1)\nfig",
"_____no_output_____"
]
],
[
[
"The above code means that we are transforming our original ```fig``` to a ```2 row``` and ```2 column``` figure. Next, we are placing the plot ```ax``` to the first row, the first column. In total, this means there will be 4 plots, of which we have selected the first.\n\nIf we create the next two subplots, we'll end up with a visualization that looks exactly like ",
"_____no_output_____"
]
],
[
[
"ax2 = fig.add_subplot(2,2,2)\nax3 = fig.add_subplot(2,2,3)\nfig",
"_____no_output_____"
]
],
[
[
"# The ```plt.plot``` function",
"_____no_output_____"
]
],
[
[
"plt.plot(df['AA'])",
"_____no_output_____"
]
],
[
[
"Above the ```plot``` is the function/method and inside it we define the parameter to be ```df['AA']``` which means the column *AA* from the dataframe ```df```. Once we run this command it plots the column with respect to the index (which is the date-time).\n\n**Next**, we will run all the commands together. One slight change will be filling a subplot. ",
"_____no_output_____"
]
],
[
[
"fig = plt.figure()\nax1 = fig.add_subplot(2,2,1)\nax2 = fig.add_subplot(2,2,2)\nax3 = fig.add_subplot(2,2,3)\n\nplt.plot(df['AA'])",
"_____no_output_____"
]
],
[
[
"**Notice Something?**\n\n\nWe did not ascribe a subplot figure to the plot, it automatically took the last plot.\n\n",
"_____no_output_____"
],
[
"## 1.0 - Now Try This:\n- ```ax2.plot``` to plot the dataset for ```df['AAPL']```.\n- plot the dataframe for ```GE``` on the ```ax3``` axis. \n\n**Which spot will it take?**\n\n**Answer**:\n*Hint: Use ```fig``` to check.*",
"_____no_output_____"
]
],
[
[
"# This code is for the following Now Try This question, please DO NOT MODIFY IT.\n# Run it before attempting the question.\nfig = plt.figure() \nax1 = fig.add_subplot(2,2,1) \nax2 = fig.add_subplot(2,2,2) \nax3 = fig.add_subplot(2,2,3) \n\nplt.plot(df['AA'])",
"_____no_output_____"
],
[
"#Once your have verified your answer please uncomment the line below and run it, this will save your code \n#%%writefile -a {folder_location}/1.py\n#Please note that if you uncomment and run multiple times, the program will keep appending to the file.\n\n# INSERT CODE HERE\n# INSERT CODE HERE\n# INSERT CODE HERE",
"_____no_output_____"
]
],
[
[
"### Hint:",
"_____no_output_____"
],
[
"This is what the output should look like:\n\n",
"_____no_output_____"
],
[
"# Colors, Markers, and Line Styles\n",
"_____no_output_____"
],
[
"\nBy now, you probably noticed that the main function you are using in Matplotlib is the ```plot``` function.\nThis function accepts arrays of x and y coordinates and optional arguments such as color and line style and figure size.\n\n```\nax.plot(x,y,'g--')\n```\n\nWe can show the same plot more explicitly by adding a linestyle:\n\n```\nax.plot(x,y,linestyle='--',color='g')\n```\n\nLine plots can also have *markers* in order to highlight the actual data points. \n\nWhen matplotlib creates plots, they are a continuous line plot (interpolating between points), it can occasionally be unclear where the points are. Markers help us observe the *interpolation* in a clearer manner. \n\n*I used interpolation here because it's a more mathematical term when it comes to plot points. with respect to a given axis (in our case data). What it really means in our case is plotting y with respect to x and joining those points.*",
"_____no_output_____"
]
],
[
[
"fig = plt.figure() ## figsize= (20,10)\nax1 = fig.add_subplot(2,1,1)\nax2 = fig.add_subplot(2,1,2)\n\nplt.plot(df['AA'],'bo--') # Plot 2\n# Which is the same as:\nax1.plot(df['AAPL'],color = 'b',linestyle = '--',marker = 'o') # Plot 1\n\n",
"_____no_output_____"
]
],
[
[
"For line plots, we can notice that the points are interpolated linearly by default.\n\nWe can change this by using the ```drawstyle``` option. The drawstyle determines how the points are connected\nhere are some of the avaiable options for this parameter:\n- ```default```\n- ```steps```\n- ```steps-pre```\n- ```steps-mid'```\n- ```steps-post```",
"_____no_output_____"
]
],
[
[
"plt.plot(df['IBM'],'k--',label = 'Default')\nplt.plot(df['GE'],'k-',drawstyle = 'steps-post',label = 'steps-post')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## 2.0 - Now Try This:\n1. Create a new ```plt.figure()``` with and added parameter```figsize = (20,10)```\n2. Create two separate ```plt.plots``` with ```df['AAPL']``` and ```df['MSFT']``` respectively.\n3. for the first plot:\n - set linestyle as ```'-'``` \n - set color as ```'b'```\n - add a label and name it after the specific stock\n4. for the second plot:\n - set linestyle as ```'--'```\n - set color as ```g````\n - add a ```drawstyle```\n5. add ```plt.legend()```\n\n\n**Answer**:",
"_____no_output_____"
]
],
[
[
"#Once your have verified your answer please uncomment the line below and run it, this will save your code \n#%%writefile -a {folder_location}/2.py\n#Please note that if you uncomment and run multiple times, the program will keep appending to the file.\n\n# INSERT CODE HERE\n# INSERT CODE HERE\n# INSERT CODE HERE\n# INSERT CODE HERE",
"_____no_output_____"
]
],
[
[
"### Hint:",
"_____no_output_____"
],
[
"This is what the output should look like:\n\n",
"_____no_output_____"
],
[
"# Ticks, Labels and Legends\n\n",
"_____no_output_____"
],
[
"The ```pyplot``` interface designed for interactive use, consists of methods like:\n- ```xticks```\n- ```xticklabels```\n\nThese methods control the plot range, tick locations, and tick labels.\nThere are two ways to apply such parameters:\n1. Called with no arguments returns the current parameter value (e.g, ```plt.xlim()``` returns the current x-axis plotting range.\n2. Called with parameters sets the parameter value(e.g, plt.xlim([0,10]),sets the x-axis range from 0 to 10)\n\n",
"_____no_output_____"
]
],
[
[
"x = [1,2,3,4,5,6]\ny = [2,4,9,15,25,36]\n\nplt.plot(x,y)\nplt.xlim(3,6)",
"_____no_output_____"
]
],
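[
[
"To illustrate the first way of calling these methods, ```plt.xlim()``` with no arguments simply reads back the current range. A small sketch (it would need to run in the same cell as the plot above in order to query that figure):\n\n```\nplt.plot(x, y)\nplt.xlim(3, 6)\nprint(plt.xlim())   # returns a (left, right) tuple, here (3.0, 6.0)\n```",
"_____no_output_____"
]
],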
[
[
"As you can see above the ```xlim``` is very easy to set up using the ```plt.xlim()```. However, when working with a Time-Series data set such as the stock market one simply stating ```plt.x_lim(2005,2008)```. This won't work nor will ```plt.xlim(2005-05-13,2007-05-16)```.\n\nLuckily we can use ```datetime``` indexes by simply including the built-in library ```datetime```. Lets look at a quick example on how to use it.",
"_____no_output_____"
]
],
[
[
"import datetime # importing the datetime library\n\nplt.plot(df['SPX'])\nplt.xlim(datetime.date(2005,1,1),datetime.date(2009,1,1))\n",
"_____no_output_____"
]
],
[
[
"Above we are importing datetime in the first line. Next we plot our dataframe column. In the third line the first thing to notice is the ```datetime.date()``` method. This takes in three parameters ```Year, Month, Date```. and looks for the range on the x-axis. In our case from 2005-01-01 to 2009-01-01",
"_____no_output_____"
],
[
"**Setting the title, axis labels, ticks, and ticklabels**\n\nLet's look at the plot below:",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize= (20,10))\nax = fig.add_subplot(1,1,1)\nax.plot(df['AA'])\n",
"_____no_output_____"
]
],
[
[
"**Next**, we are going to change the x-axis ticks, it's easier to use ```set_xticks``` and ```set_xticklabels```. The former instructs matplotlib where to place the ticks along with the date range. However, we can set any other values as the labels using set_xticklabels: ",
"_____no_output_____"
]
],
[
[
"labels = ax.set_xticklabels(['one','two','three','four','five','six','seven'],\n rotation = 30,\n fontsize = 'small')",
"_____no_output_____"
]
],
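[
[
"A note on ```set_xticks```: the cell above only changes the tick *labels*. To control where the ticks themselves sit, ```set_xticks``` takes the positions; for a date-indexed plot like ours those positions would be dates (a hedged sketch with arbitrary example dates, not taken from the lesson):\n\n```\nimport datetime\nticks = ax.set_xticks([datetime.date(y, 1, 1) for y in range(2004, 2011)])\n```\n\nWith seven tick positions, the seven labels ('one' through 'seven') line up with them.",
"_____no_output_____"
]
],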
[
[
"The ```rotation``` option sets the x-tick labels at a 30-degree rotation.\nLastly, ```set_xlabel``` gives a name to the x-axis and ```set_title``` adds to the subplot title.",
"_____no_output_____"
]
],
[
[
"ax.set_title('My first matplotlib plot')\nfig",
"_____no_output_____"
],
[
"ax.set_xlabel('Stages')\nfig",
"_____no_output_____"
],
[
"ax.plot(df['GE'])\nfig",
"_____no_output_____"
]
],
[
[
"Looking at the 3 code cells above:\n- The first one adds a title.\n- The second one adds an label on the x-axis called Stages.\n- The last one Simply just adds another column plot to the axes.",
"_____no_output_____"
]
],
[
[
"# Bonus: we can also write this as:\nprops = {\n 'title': 'This is a trend for stock market prices',\n 'xlabel': 'All stages'\n}\nax.set(**props)\nfig",
"_____no_output_____"
]
],
[
[
"## 3.0 - Now Try This:\n- Use the functional method of programming to build two subplot figures ```ax``` and ```ax1``` respectively.\n- First create a fig by calling the ```plt.figure``` function, set the ```figsize``` to 20,10.\n- Create the subplots.\n**Hint**: ```fig.add_subplot(something,something,something)```.\n- Plot ```df['AAPL']``` for ```ax```.\n- Plot ```df['SPX']``` for ```ax1```.\n- Add tick labels for both ax and ax1\n - Name them in the sequence ```['Phase I','Phase II',...,'PHASE VII']```\n - Both should have a ```rotation = 45```.\n- The title for ```ax``` and ```ax1``` by using ```set_title```. The title should be the name of your stock index.\n- Set xlabel for both charts by using ```set_xlabel``` to 'Stages'\n\n\n**Answer**",
"_____no_output_____"
]
],
[
[
"#Once your have verified your answer please uncomment the line below and run it, this will save your code \n#%%writefile -a {folder_location}/3.py\n#Please note that if you uncomment and run multiple times, the program will keep appending to the file.\n\n# INSERT CODE HERE\n# INSERT CODE HERE\n# INSERT CODE HERE\n# INSERT CODE HERE\n# INSERT CODE HERE\n\n# INSERT CODE HERE\n\n# INSERT CODE HERE\n# INSERT CODE HERE\n \n# INSERT CODE HERE\n# INSERT CODE HERE",
"_____no_output_____"
]
],
[
[
"### Hint:",
"_____no_output_____"
],
[
"This is what the output should look like:\n\n",
"_____no_output_____"
],
[
"### Adding Legends\n\nLegends are an important element in order to identify our elements. The easiset way to add one is to pass the label argument when adding each piece",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize = (20,10))\n\nax=fig.add_subplot(1,1,1)\nax.plot(df['MSFT'],\n 'b', # Color blue\n label ='MSFT')\nax.plot(df['AA'],\n 'r--', # Color red, with dashed lines\n label = 'AA')\nax.plot(df['GE'], # Green with dots as indicators\n 'g.',\n label = 'GE')",
"_____no_output_____"
],
[
"ax.legend(loc = 'best') # The loc method tells matplotlib where to place the plot. \n # if you're not picky 'best' works.\nfig",
"_____no_output_____"
]
],
[
[
"## Saving plots to File\nWe can save the active figure to file using plt.savefig.\n\nExamples:\n\n```\nplt.savefig('figpath.jpeg')\n```\n\n*Note: The file type is the file extension our file will be saved as. So for example if we used .pdf instead, we would get a pdf.*\n\n\n\n\n\nWe can save the previous graph by simply running the following command:",
"_____no_output_____"
]
],
[
[
"plt.savefig('microsoft-apple-ge-stocks.jpeg')",
"_____no_output_____"
]
],
[
[
"## 4.0 - Now Try This:\n\nFor this exercise you will be plotting a single graph with 3 line plots (dataframe columns) and adding a legend to it. Here are some important notes:\n- figsize = (20,10)\n- The data should be distinguishble between the different stocks\n- Save the image as ```NAMESTOCK1-NAMESTOCK2-NAMESTOCK3.jpeg```\n\n*Note: the line of code to save the image should be before the ```plt.show()``` method (which is always the last)\n\n**Answer**",
"_____no_output_____"
]
],
[
[
"#Once your have verified your answer please uncomment the line below and run it, this will save your code \n#%%writefile -a {folder_location}/4.py\n#Please note that if you uncomment and run multiple times, the program will keep appending to the file.\n\n\nplt.figure( #INSERT CODE HERE)\n\nplt.plot( #INSERT CODE HERE)\nplt.plot( #INSERT CODE HERE)\nplt.plot( #INSERT CODE HERE )\n\nplt.title(#INSERT CODE HERE )\n#INSERT CODE HERE\n#INSERT CODE HERE\n#INSERT CODE HERE",
"_____no_output_____"
]
],
[
[
"### Hint:",
"_____no_output_____"
],
[
"This is what the output should look like:\n\n*Note : Don't worry about the colors*\n\n",
"_____no_output_____"
],
[
"# Types of Graphs and Charts",
"_____no_output_____"
],
[
"# Bar Graphs\n\n",
"_____no_output_____"
],
[
"- This type of graph is a good choice when we want to show that *some quantity varies with respect to some set of items (are usually ```strings```)*. \n\n### Example :\n#### How many academy awards were won by each movie",
"_____no_output_____"
]
],
[
[
"movies = [\"Annie Hall\",\"Ben-Hur\",\"Casablanca\",\"Gandhi\",\"West Side Story\"]\nnum_of_oscars = [5,11,3,8,10]\n\n# plot bars with \n# x-cordinates [movies]\n# y-cordinates [num_of_oscars]\nplt.bar(movies,num_of_oscars)\n\n# add title\nplt.title(\"My favourite Movies\")\n\n# label the y-axis\nplt.ylabel(\"# of Academy Awards\")\n# Label x-axis with movie titles\nplt.xlabel('Movies')\nplt.show()",
"_____no_output_____"
]
],
[
[
"In the code above movies is our ```x-axis```, so we are measuring the quantities with respect to the movie names. This makes our ```y-axis```, ```num_of_oscars```.\n\n- To plot the bar plot we use the ```plt.bar``` and declare the x and y inside.\n",
"_____no_output_____"
],
[
"## 5.0 Now Try This:\nIn the next cell, I have added two list types by the name of ```cal_state``` and ```enrollment```. Your task is to do the following\n- Check if the two lists are equal.\n**Hint**: Use one ```if``` and one ```else``` statmenent to check if the two lists are equal using ```len()``` to comapare the two.\n- If lists are not equal, find the error.\n- Create a ```plt.figure``` with a size of (20,10).\n- Plot the bar plot of ```cal_states``` vs ```enrollemnts```. \n- Add a ```title```, ```ylabel``` and ```xticks```.\n- Set the ```rotation``` for the ```x-ticks``` to 90.\n\n**Answer**:",
"_____no_output_____"
]
],
[
[
"#Once your have verified your answer please uncomment the line below and run it, this will save your code \n#%%writefile -a {folder_location}/5.py\n#Please note that if you uncomment and run multiple times, the program will keep appending to the file.\n\ncal_states= ['Bakersfield','Channel Islands','Chico','Dominguez Hills','East Bay','Fresno','Fullerton''Humboldt','Long Beach','Los Angeles','Maritime Academy','Monterey Bay','Northridge','Pomona','Sacramento','San Bernardino','San Diego','San Francisco','San José','San Luis Obispo','San Marcos','Sonoma','Stanislaus','International Programs','CalState TEACH']\nenrollment = [11199,7093,17019,17027,14705,24139,39868,6983,38074,26361,911,7123,38391,27914,31156,20311,35081,28880,33282,21242,14519,8649,10614,455,933]\n\n## INSERT CODE HERE\n\nif ## INSERT THE CODE HERE \n print('These are not equal')\n## INSERT CODE HERE\n print('There are equal, you may proceed')\n \n## INSERT CODE HERE\n\n# add title\n# INSERT CODE HERE\n\n# label the y-axis\n# INSERT CODE HERE\n\n# Label x-axis with movie titles\n#INSERT CODE HERE\n\n# INSERT CODE HERE\n",
"_____no_output_____"
]
],
[
[
"### Hint:",
"_____no_output_____"
],
[
"This is what the output should look like:\n\n",
"_____no_output_____"
],
[
"**Extra**:\n Convert the two columns into a single dataFrame",
"_____no_output_____"
],
[
"Another good use of a bar chart can be for plotting histograms of numeric values. This can help us visualize distributions.\n\n",
"_____no_output_____"
],
[
"# Histograms",
"_____no_output_____"
],
[
"Luckily, you won't have to manually set up plots when you're working with ```DataFrames```.\n\nWe can just declare the column we are using inside the ```plt.hist()``` method and define the column inside as ```df['AAPL']```. We will also make another parameter call which is bins. ```bins``` means the number of columns we want our x-axis to be divided into.\n\nExample: if your x-axis is from 0 to 100 and you declare bins as 10, 0 to 10, would be enclosed in 1 bin, 11-20 in the second all the way to the 10th bin.",
"_____no_output_____"
]
],
[
[
"plt.hist(df['AAPL'],bins = 20)",
"_____no_output_____"
]
],
[
[
"## 6.0 Now Try this\nI want to be able to see the frequency of a value in the SPX Index. I can do this by plotting a histogram of the respective index.\n\n- Plot a ```hist``` for the ```SPX``` column.\n- Set the ```bins``` to 30.\n\n**Answer**:",
"_____no_output_____"
]
],
[
[
"#Once your have verified your answer please uncomment the line below and run it, this will save your code \n#%%writefile -a {folder_location}/6.py\n#Please note that if you uncomment and run multiple times, the program will keep appending to the file.\n\n\n# INSERT CODE HERE",
"_____no_output_____"
]
],
[
[
"### Hint:",
"_____no_output_____"
],
[
"This is what the output should look like:\n\n",
"_____no_output_____"
],
[
"# Line Charts\n\nThe good thing is it is easy to make line charts simple using ```plt.plot``` these are good for showing trends.\n",
"_____no_output_____"
]
],
[
[
"years = [1950,1960,1970,1980,1990,2000,2010]\ngdp = [300.2,543.2,1075.9,2862.5,5979.6,10289.7,14958.3]\n\n# Create a line chart, \n# x-axis : years\n# y-axis : gdp\n\nplt.plot(years,gdp,color = 'red',marker = 'x',linestyle = 'solid')\n\n# add a title \nplt.title(\"Nominal GDP\")\n\n# Add a label to the y-axis\nplt.ylabel(\"Billions of $\")\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nx = [0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5] # Just a simple list from 0 to 5, with increments of 1/2\ny = [i**4for i in x] # for every value in list x, multiply it by 4 and save it as a list to x\nz = [i//3 for i in y][::-1] # for every value in list y, divide by 3 and save it as a list to z.\n # [::-1] reverses the list after it has been made\n\nfig = plt.figure(figsize=(20,10))\n\nplt.plot(x, x, color=\"red\", linewidth=1.00,marker = 'x') \nplt.plot(x, y, color=\"blue\", linewidth=2.00, marker = 'o')\nplt.plot(x, z, color=\"green\", linewidth=3, linestyle='--')\n\n",
"_____no_output_____"
]
],
[
[
"# Scatterplots\n\nA scatter plot is the right choice for visualizing the relationship between two paired sets of data. \n",
"_____no_output_____"
]
],
[
[
"friends = [70,65,72,63,71,64,60,64,67]\nminutes = [175,170,205,120,220,130,105,145,190]\n\nplt.scatter(friends,minutes)\n\n \nplt.title(\"Daily minutes vs Number of Friends\")\nplt.xlabel(\"# of friends\")\nplt.ylabel(\"Daily minutes spent on the site\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 7.0 Now Try This:\n\n- Use the ```plt.figure()``` method to plot a ```scatter``` plot for the ```SPX``` column.\n- For the ```x-axis``` use ```df.index.values``` for the first parameter and the column for the second.\n- The figure size of this plot should be (20,10).\n\n**Answer**:",
"_____no_output_____"
]
],
[
[
"#Once your have verified your answer please uncomment the line below and run it, this will save your code \n#%%writefile -a {folder_location}/7.py\n#Please note that if you uncomment and run multiple times, the program will keep appending to the file.\n\n\n# INSERT CODE HERE\n# INSERT CODE HERE",
"_____no_output_____"
]
],
[
[
"### Hint:\n",
"_____no_output_____"
],
[
"This is what the output should look like:\n\n",
"_____no_output_____"
],
[
"# Tutorial \n\n\n",
"_____no_output_____"
],
[
"## Mapping the California Housing",
"_____no_output_____"
],
[
"Before we get started, I would recommend you read the information we have added about the dataset [here](#CaliforniaHousing). \n\nBy this point, we have learned how to use Matplotlib and basic pandas plotting. In this tutorial, we will be focusing on analyzing the housing dataset by simply mapping it. The perk of this exercise is that the whole visualizing can be done using a single method ```df.plot```. \n\nIt is also important to note that our limitation is that we are restricted to work with numerical data since Matplotlib cannot plot categorical data with numerical. \n\n*Note: There is a method to convert categorical into numerical. However, that is not a part of this course.*",
"_____no_output_____"
],
[
"## Import Libraries and unpack file\n\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"### Load File",
"_____no_output_____"
]
],
[
[
"url = 'https://tinyurl.com/yxre8r5n'\nhousing = pd.read_csv(url)\nhousing.head()",
"_____no_output_____"
]
],
[
[
"Each row represents one district. There are 10 attributes: ```longitude```, ```latitude```, ```housing_median_age```,\n```total_rooms```, ```total_bedrooms```, ```population```, ```households```, ```median_income```,\n```median_house_value```, and ```ocean_proximity```",
"_____no_output_____"
]
],
[
[
"\n# info functions helps us to understand the data type of all the columns\nhousing.info()",
"_____no_output_____"
]
],
[
[
"All attributes are numerical, except the ```ocean_proximity``` field. Its type is\n```object```, what this means is that it can hold any kind of Python object. But, since you loaded this\ndata from a ```CSV``` file, you know that it must be a text value. When you\nlooked at the top five rows, you probably noticed that the values in the\n```ocean_proximity``` column was repetitive, which means that it is probably a\ncategorical value. ",
"_____no_output_____"
]
],
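[
[
"One quick way to check that hunch is ```value_counts()```, which lists each distinct value and how many districts fall into it (a suggested extra step, not part of the original walkthrough):\n\n```\nhousing['ocean_proximity'].value_counts()\n```",
"_____no_output_____"
]
],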
[
[
"# describe function gives a summary like mean, quartiles, median, std, count, etc for the numeric columns\nhousing.describe()",
"_____no_output_____"
]
],
[
[
"The count, mean, min, and max rows are self-explanatory. \n\nNote that the null\nvalues are ignored (so, for example, the count of total_bedrooms is 20,433,\nnot 20,640). The std row shows the standard deviation, which measures how\ndispersed the values are. The 25%, 50%, and 75% rows show the\ncorresponding percentiles: a percentile indicates the value below which a\ngiven percentage of observations in a group of observations fall. \n\nFor example,\n25% of the districts have a housing_median_age lower than 18, while 50%\nare lower than 29 and 75% are lower than 37. These are often called the 25th\npercentile (or first quartile), the median, and the 75th percentile (or third\nquartile).\n\n\nAnother quick way to get a feel of the type of data you are dealing with is to plot a histogram for each numerical column. A histogram shows the number\nof instances (on the vertical axis) that have a given value range (on the\nhorizontal axis). You can plot this one value at a time or you can call the hist() method on the whole dataset (as shown in the following code\nexample), and it will plot a histogram for each numerical attribute.",
"_____no_output_____"
],
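[
"If you want those percentiles for a single column, pandas exposes them directly through ```quantile()``` (a small optional check, not part of the original notebook):\n\n```\nhousing['housing_median_age'].quantile([0.25, 0.5, 0.75])\n```",
"_____no_output_____"
],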
[
"## Visualizing the DataFrame",
"_____no_output_____"
],
[
"To visualize our data frame we will execute the following code:\n- We import ```matplotlib.pyplot``` as ```plt```. Note: We already imported this initially at the start of this notebook, therefore it is just a precaution.\n- We make a histogram of the whole housing dataframe by calling ```housing.hist``` and setting the ```bin``` and ```figize``` parameters inside.\n\n- The last call ```plt.show()``` is something you have frequently used by now and is simply used to display the plot.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nhousing.hist(bins=50, figsize=(20,15))\nplt.show()",
"_____no_output_____"
]
],
[
[
"\nThere are a few things you might notice in these histograms:\n1. First, the median income attribute does not look like it is expressed\nin US dollars (USD). The data is scaled and capped at 15\n(actually, 15.0001) for higher median incomes, and at 0.5 (actually, 0.4999) for lower median incomes. The numbers represent roughly\ntens of thousands of dollars (e.g., 3 actually means about $30,000). This kind of data is common but we should try to understand how the data was computed.\n\n2. The housing median age and the median house value were also\ncapped. If we want to find patterns for beyond $500,000, then you have two options:\n\n a. Collect more data for the districts whose labels were\ncapped.\n\n b. Remove those districts from the training set.\n\n3. These attributes have very different scales. This means that one might be scaled to 1, while another one might be scaled to 10. \n\n(If you don't know how scaling works let me give you a quick example. Think of numbers from 1 to 10 so 1,2,3,4,5,6,7,8,9,10. now what will happen if you dividen all these numbers by 10. The result would be. .1,.2,.3,.4,.5,.6,.7,.8,.9,1. So now all these numbers are within the range of 0-1).\n\n4. Finally, many histograms are tail-heavy: they extend much farther to\nthe right of the median than to the left. Although this is not our concern right now it's a tricky situation as it is difficult to detect patterns through visualization.",
"_____no_output_____"
]
],
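[
[
"The same divide-by-the-maximum idea, written out as a small sketch (this is only an illustration of scaling; we do not apply it to the housing data in this lesson):\n\n```\nvalues = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\nscaled = [v / max(values) for v in values]   # every entry now lies between 0 and 1\nprint(scaled)\n```",
"_____no_output_____"
]
],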
[
[
"housing[\"median_income\"].hist()",
"_____no_output_____"
],
[
"housing['ocean_proximity'].hist(color = 'red')",
"_____no_output_____"
],
[
"housing.plot(x = 'median_income',\n y = 'median_house_value',\n kind = 'scatter',\n alpha = 0.1)\nplt.show()",
"_____no_output_____"
]
],
[
[
"This plot reveals a few things. First, the correlation is indeed very strong; we\ncan see the upward trend, and the points are not too dispersed. Second,\nthe price cap that we noticed earlier is visible as a horizontal line at\n$500,000. \n\nBut this plot reveals other less obvious straight lines: a horizontal\nline around 450,000, another around \n350,000, perhaps one around $280,000,\nand a few more below that.",
"_____no_output_____"
],
[
"## Mapping Geographical Data",
"_____no_output_____"
],
[
"### Step 1\n",
"_____no_output_____"
]
],
[
[
"housing.plot(kind=\"scatter\",\n x=\"longitude\",\n y=\"latitude\")",
"_____no_output_____"
]
],
[
[
"### Observation:\nBy plotting the longitude vs the latitude, we can see that it's California. However, note that it's almost impossible to see any particular pattern. \nThe next step is to be able to separate the high-density data points from the lower ones. \n\nFor this, we will use the ```alpha``` option in the plot function. We will set alpha to ```0.1```. \n\n#### What is Alpha?\n*Matplotlib* allows you to adjust the transparency of a graph plot using the alpha attribute. If you want to make the graph plot more transparent, then you can make ```alpha``` less than ```1```, such as ```0.5``` or ```0.25```. If you want to make the graph plot less transparent, then you can make alpha close to ```1```.",
"_____no_output_____"
],
[
"### Step 2\n",
"_____no_output_____"
]
],
[
[
"housing.plot(kind=\"scatter\",\n x=\"longitude\",\n y=\"latitude\",\n alpha=0.1)\n",
"_____no_output_____"
]
],
[
[
"### Observation:\nYou can see the difference between the high-density areas, for example the Bay Area, Los Angeles, San Diego and a little in the Central Valley.\n\nNow we have a pattern, but it's not something very useful to us. So, let's play with the visualization a little.\n",
"_____no_output_____"
],
[
"### Step 3",
"_____no_output_____"
]
],
[
[
"housing.plot(kind=\"scatter\",\n x=\"longitude\",\n y=\"latitude\",\n alpha=0.4,\n figsize=(10,7),\n c=\"median_house_value\", \n cmap=plt.get_cmap(\"jet\"),\n colorbar=True)\n",
"_____no_output_____"
]
],
[
[
"**Note**: Notice the fact that we are only using pandas's ```df.plot``` function. This is because ```matplotlib.pylot``` features are built-in pandas therefore when we call ```plt.plot()```, it is very similar to the pandas function. The main difference is we directly identify the ```x``` and ```y``` axis. and instead of ```plt``` we connect our dataframe.\n\n### Observation\nIn order to gather more information from the data plot, we added four features:\n- ```figsize = (10,7)```, this is not a necessary parameter but it doesn't hurt add it, since compact plots are difficult to extract data from.\n- ```c='median_house_value'```, this parameter defines the column we are going color indicate through color based off of value.\n- ```cmap= plt.get_cmap('jet')```, this is simple ascribing a color palette to ```c``` in this case I have chosen the color palette available in ```plt``` which is the jet palette.\n- ```colorbar = True```, This one makes the color bar visible.\n\n\n\n\n\n",
"_____no_output_____"
],
[
"### Step 4",
"_____no_output_____"
],
[
"Great, now we have a general idea of where median incomes are high: the coastal areas. But could there also be another factor? Let's add population to our graph and see what we can find.",
"_____no_output_____"
]
],
[
[
"housing.plot(kind=\"scatter\",\n x=\"longitude\",\n y=\"latitude\",\n alpha=0.4,\n s =housing[\"population\"]/100,\n label=\"population\",\n figsize=(20,10),\n c=\"median_house_value\", \n cmap=plt.get_cmap(\"jet\"),\n colorbar=True)\nplt.legend()\n\n",
"_____no_output_____"
]
],
[
[
"### Observation:\nLet's see what we added in code to do that. In our ```housing.plot()``` function, we added two more parameters:\n- ```s = housing['population']/100```, this parameter plots the given column as a radius on the map. The reason why we chose to divide it by 100 is in order to make the radius smaller.\n- ```label = 'population'```, a simple label to mention what the radius is.\n\nWe also changed the ```figsize``` to make the map larger and wrote ```plt.legend``` outside the function.",
"_____no_output_____"
],
[
"## Closing Notes",
"_____no_output_____"
],
[
"By the end of this lesson you should have a clearer idea of:\n- How to plot ```DataFrames``` using matplotlib and pandas\n- How to make subplots and set axis.\n- How documentation such as labels, legends, colors, markers are added it. \n- How to plot bar, histogram, scatter, and line plots using lists.\n- How to plot the longitude and latitude.\n- How to add a third data parameter with ```df.plot``` by using ```c```.\n- How to add a fourth parameter ```s``` to our plots.\n\n\nIn the next lesson, we will use the basics of Python, Pandas, and Matplotlib library to clean and visualize a dataset.\n",
"_____no_output_____"
],
[
"\n## Submission\nRun this code block to download your answers.",
"_____no_output_____"
]
],
[
[
"from google.colab import files\n!zip -r \"{student_id}.zip\" \"{student_id}\"\nfiles.download(f\"{student_id}.zip\")",
"_____no_output_____"
]
],
[
[
"## References\n\n- [What is Data Visualization](https://www.import.io/post/what-is-data-visualization/)\n- [What Makes A Data Visualisation Elegant?](https://medium.com/nightingale/what-makes-a-data-visualisation-elegant-fb032c3a259e)\n- [Data Visualization: What It Is, Why It’s Important](https://www.searchenginejournal.com/what-is-data-visualization-why-important-seo/288127/#:~:text=Data%20visualization%20is%20the%20act,outliers%20in%20groups%20of%20data.)\n- [California Housing Dataset](https://www.kaggle.com/camnugent/california-housing-prices)",
"_____no_output_____"
],
[
"## Appendix",
"_____no_output_____"
],
[
"### Plotting Shapes",
"_____no_output_____"
]
],
[
[
"fig = plt.figure()\nax = fig.add_subplot(1,1,1)\n\nrect = plt.Rectangle((0.2,0.75),0.4,0.15,color = 'k',\n alpha = 0.3)\ncirc = plt.Circle((0.7,0.2),0.15,color = 'b',alpha = 0.3)\npgon = plt.Polygon([[0.15,0.15],[0.35,.4],[0.2,0.6]],color = 'g',alpha = 0.3)\n\nax.add_patch(rect)\nax.add_patch(circ)\nax.add_patch(pgon)",
"_____no_output_____"
]
],
[
[
"### Annotations and Drawing on a Subplot\n\nIn addition to standard plot types, we may wish to draw our own plot annotations. These can consist of text, arrows or other shapes. We can add annotations and text using the ```text```,```arrow``` and ```annotate``` functions. ```text``` draws test at given coordinates (x,y) on the plot with optional custom syling:\n\n```ax.text(x,y,'Hello world!',family='monospace',fontsize=10)```\n",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\n\n\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\n\nspx = df['SPX']\n",
"_____no_output_____"
],
[
"spx.plot(ax=ax, style='g-')\nfig",
"_____no_output_____"
],
[
"\n\ncrisis_data = [\n (datetime(2007, 10, 11), 'Peak of bull market'),\n (datetime(2008, 3, 12), 'Bear Stearns Fails'),\n (datetime(2008, 9, 15), 'Lehman Bankruptcy')\n]\n\nfor date, label in crisis_data:\n ax.annotate(label, xy=(date, spx.asof(date) + 75),\n xytext=(date, spx.asof(date) + 225),\n arrowprops=dict(facecolor='black', headwidth=4, width=2,\n headlength=4),\n horizontalalignment='left', verticalalignment='top')\nfig",
"_____no_output_____"
],
[
"\n# Zoom in on 2007-2010\nax.set_xlim(['1/1/2007', '1/1/2011'])\nax.set_ylim([600, 1800])\n\nax.set_title('Important dates in the 2008-2009 financial crisis')\nfig",
"_____no_output_____"
]
],
[
[
"The ```plot``` attribute contains a \"family\" of methods for different plot types",
"_____no_output_____"
],
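[
"For instance, instead of passing ```kind=...```, the same plots can be reached as methods on ```df.plot``` (a brief illustration using the stock dataframe from earlier; each line on its own produces a figure):\n\n```\ndf.plot.line()                      # same as df.plot()\ndf['AAPL'].plot.hist(bins=30)       # histogram of one column\ndf.plot.scatter(x='AAPL', y='MSFT') # scatter of one column against another\n```",
"_____no_output_____"
],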
[
"## Connecting to Your Google Drive\n",
"_____no_output_____"
]
],
[
[
"# Start by connecting google drive into google colab\n\nfrom google.colab import drive\n\ndrive.mount('/content/gdrive')",
"_____no_output_____"
],
[
"!ls \"/content/gdrive/My Drive/DigitalHistory\"",
"_____no_output_____"
],
[
"cd \"/content/gdrive/My Drive/DigitalHistory/Week_3\"\n",
"_____no_output_____"
],
[
"ls",
"_____no_output_____"
]
],
[
[
"### Saving Figures:\nSome important options for publishing graphics are:\n- ```\ndpi\n``` : controls the dots-per-inch resolution.\n- ```\nbbox_inches\n``` : Trims the whitespace around the actual figure.The options in bbox_inches are 'tight' or 'None'.\n\n- ```\n plt.savefig('figpath.avg',dpi = 400,bbox_inches = 'tight')\n```",
"_____no_output_____"
],
[
"### An Alternative method of loading the California Housing Dataset",
"_____no_output_____"
]
],
[
[
"import os\nimport tarfile\nimport urllib\n\nDOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml/master/\"\nHOUSING_PATH = os.path.join(\"datasets\", \"housing\")\nHOUSING_URL = DOWNLOAD_ROOT + \"datasets/housing/housing.tgz\"\n\ndef fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):\n os.makedirs(housing_path, exist_ok=True)\n tgz_path = os.path.join(housing_path, \"housing.tgz\")\n urllib.request.urlretrieve(housing_url, tgz_path)\n housing_tgz = tarfile.open(tgz_path)\n housing_tgz.extractall(path=housing_path)\n housing_tgz.close()",
"_____no_output_____"
],
[
"fetch_housing_data()",
"_____no_output_____"
],
[
"import pandas as pd\n\ndef load_housing_data(housing_path=HOUSING_PATH):\n csv_path = os.path.join(housing_path, \"housing.csv\")\n return pd.read_csv(csv_path)",
"_____no_output_____"
]
],
[
[
"## Step 4 [OPTIONAL]\n\nNow, we know that housing prices are directly affected by the population density and the median house income. We can at least visualize our data. For the next section, we are going to use an image of the California map and directly imprint it on our image.\n\nNote: We will not be changing the original ```housing.plot()``` function we wrote. Instead, we will just save it as an ```object```",
"_____no_output_____"
]
],
[
[
"image_url = 'https://github.com/bitprj/DigitalHistory/blob/master/Week4-Introduction-to-Data-Visualization-Graphs-Charts-and-Tables/images/california.png?raw=true'\nimport matplotlib.image as mpimg\ncalifornia_img=mpimg.imread(image_url)\nax = housing.plot(kind=\"scatter\",\n x=\"longitude\",\n y=\"latitude\",\n figsize=(10,7),\n s=housing['population']/100,\n label=\"Population\",\n c=\"median_house_value\",\n cmap=plt.get_cmap(\"jet\"),\n colorbar=False,\n alpha=0.4,\n )\nplt.imshow(california_img, extent=[-124.55, -113.80, 32.45, 42.05], alpha=0.5,\n cmap=plt.get_cmap(\"jet\"))\n\n\nplt.ylabel(\"Latitude\", fontsize=14)\nplt.xlabel(\"Longitude\", fontsize=14)\n\nprices = housing[\"median_house_value\"]\ntick_values = np.linspace(prices.min(), prices.max(), 11)\ncbar = plt.colorbar()\ncbar.ax.set_yticklabels([\"$%dk\"%(round(v/1000)) for v in tick_values], fontsize=14)\ncbar.set_label('Median House Value', fontsize=16)\n\nplt.legend(fontsize=16)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbc9b59c77fbe9422bb040e12e722e5f9d20114 | 8,689 | ipynb | Jupyter Notebook | Python/add_cogs_as_asset_BNL_UAS_example.ipynb | TESTgroup-BNL/exploringCOGs | 1c9403354b34615628a03a37cc79597b0c5aac78 | [
"MIT"
] | 1 | 2021-05-20T20:25:40.000Z | 2021-05-20T20:25:40.000Z | Python/add_cogs_as_asset_BNL_UAS_example.ipynb | TESTgroup-BNL/exploringCOGs | 1c9403354b34615628a03a37cc79597b0c5aac78 | [
"MIT"
] | null | null | null | Python/add_cogs_as_asset_BNL_UAS_example.ipynb | TESTgroup-BNL/exploringCOGs | 1c9403354b34615628a03a37cc79597b0c5aac78 | [
"MIT"
] | null | null | null | 37.291845 | 287 | 0.528599 | [
[
[
"# Example source: https://developers.google.com/earth-engine/Earth_Engine_asset_from_cloud_geotiff\n\n# This has details about the Earth Engine Python Authenticator client.\nfrom ee import oauth\nfrom google_auth_oauthlib.flow import Flow\nimport json",
"_____no_output_____"
],
[
"# Build the `client_secrets.json` file by borrowing the\n# Earth Engine python authenticator.\nclient_secrets = {\n 'web': {\n 'client_id': oauth.CLIENT_ID,\n 'client_secret': oauth.CLIENT_SECRET,\n 'redirect_uris': [oauth.REDIRECT_URI],\n 'auth_uri': 'https://accounts.google.com/o/oauth2/auth',\n 'token_uri': 'https://accounts.google.com/o/oauth2/token'\n }\n}",
"_____no_output_____"
],
[
"# Write to a json file.\nclient_secrets_file = 'client_secrets.json'\nwith open(client_secrets_file, 'w') as f:\n json.dump(client_secrets, f, indent=2)",
"_____no_output_____"
],
[
"# Start the flow using the client_secrets.json file.\nflow = Flow.from_client_secrets_file(client_secrets_file,\n scopes=oauth.SCOPES,\n redirect_uri=oauth.REDIRECT_URI)",
"_____no_output_____"
],
[
"# Get the authorization URL from the flow.\nauth_url, _ = flow.authorization_url(prompt='consent')",
"_____no_output_____"
]
],
[
[
"# Print instructions to go to the authorization URL.\noauth._display_auth_instructions_with_print(auth_url)\nprint('\\n')\nprint('\\n')\nprint(\"after entering key hit enter to store\")\n\n# The user will get an authorization code.\n# This code is used to get the access token.\ncode = input('Enter the authorization code: \\n')\nflow.fetch_token(code=code)\n\n# Get an authorized session from the flow.\nsession = flow.authorized_session()\n",
"_____no_output_____"
]
],
[
[
"#Request body\n#The request body is an instance of an EarthEngineAsset. This is where the path to the COG is specified, along with other useful properties. Note that the image is a small area exported from the composite made in this example script. See this doc for details on exporting a COG.\n\n#Earth Engine will determine the bands, geometry, and other relevant information from the metadata of the TIFF. The only other fields that are accepted when creating a COG-backed asset are properties, start_time, and end_time.\n\n\n# Request body as a dictionary.\nrequest = {\n 'type': 'IMAGE',\n 'gcs_location': {\n 'uris': ['gs://bnl_uas_data/NGEEArctic_UAS_Kougarok_20180725_Flight6_RGB_cog.tif']\n },\n 'properties': {\n 'source': 'https://osf.io/erv4m/download'\n },\n 'startTime': '2018-07-25T00:00:00.000000000Z',\n 'endTime': '2018-07-26T00:00:00.000000000Z',\n}\n\nfrom pprint import pprint\npprint(json.dumps(request))",
"('{\"type\": \"IMAGE\", \"gcs_location\": {\"uris\": '\n '[\"gs://bnl_uas_data/NGEEArctic_UAS_Kougarok_20180725_Flight6_RGB_cog.tif\"]}, '\n '\"properties\": {\"source\": \"https://osf.io/erv4m/download\"}, \"startTime\": '\n '\"2018-07-25T00:00:00.000000000Z\", \"endTime\": '\n '\"2018-07-26T00:00:00.000000000Z\"}')\n"
],
[
"#Send the request\n#Make the POST request to the Earth Engine CreateAsset endpoint.\n\n# Where Earth Engine assets are kept.\nproject_folder = 'earthengine-legacy'\n# Your user folder name and new asset name.\nasset_id = 'users/serbinsh/uas_data/NGEEArctic_UAS_Kougarok_20180725_Flight6_RGB_cog'\n\nurl = 'https://earthengine.googleapis.com/v1alpha/projects/{}/assets?assetId={}'\n\nresponse = session.post(\n url = url.format(project_folder, asset_id),\n data = json.dumps(request)\n)\n\npprint(json.loads(response.content))",
"{'bands': [{'dataType': {'precision': 'FLOAT'},\n 'grid': {'affineTransform': {'scaleX': 0.01,\n 'scaleY': -0.01,\n 'translateX': 508169.23,\n 'translateY': 7226835.705},\n 'crsCode': 'EPSG:32603',\n 'dimensions': {'height': 14148, 'width': 22850}},\n 'id': 'B0',\n 'pyramidingPolicy': 'MEAN'},\n {'dataType': {'precision': 'FLOAT'},\n 'grid': {'affineTransform': {'scaleX': 0.01,\n 'scaleY': -0.01,\n 'translateX': 508169.23,\n 'translateY': 7226835.705},\n 'crsCode': 'EPSG:32603',\n 'dimensions': {'height': 14148, 'width': 22850}},\n 'id': 'B1',\n 'pyramidingPolicy': 'MEAN'},\n {'dataType': {'precision': 'FLOAT'},\n 'grid': {'affineTransform': {'scaleX': 0.01,\n 'scaleY': -0.01,\n 'translateX': 508169.23,\n 'translateY': 7226835.705},\n 'crsCode': 'EPSG:32603',\n 'dimensions': {'height': 14148, 'width': 22850}},\n 'id': 'B2',\n 'pyramidingPolicy': 'MEAN'}],\n 'cloudStorageLocation': {'uris': ['gs://bnl_uas_data/NGEEArctic_UAS_Kougarok_20180725_Flight6_RGB_cog.tif#1626117340536036']},\n 'endTime': '2018-07-26T00:00:00Z',\n 'geometry': {'coordinates': [[-164.82568886491507, 65.164827027763],\n [-164.82569739856973, 65.16355748353689],\n [-164.82082213773242, 65.16355175522504],\n [-164.82081353430473, 65.16482131284346],\n [-164.82568886491507, 65.164827027763]],\n 'type': 'LineString'},\n 'id': 'users/serbinsh/uas_data/NGEEArctic_UAS_Kougarok_20180725_Flight6_RGB_cog',\n 'name': 'projects/earthengine-legacy/assets/users/serbinsh/uas_data/NGEEArctic_UAS_Kougarok_20180725_Flight6_RGB_cog',\n 'properties': {'source': 'https://osf.io/erv4m/download'},\n 'startTime': '2018-07-25T00:00:00Z',\n 'type': 'IMAGE',\n 'updateTime': '2021-07-12T19:56:09.061480Z'}\n"
]
]
] | [
"code",
"raw",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code"
]
] |
ecbc9b831fe0af64cf1df65c361f61dbf360e1e3 | 14,115 | ipynb | Jupyter Notebook | Deep_learning/exercise-transfer-learning.ipynb | olonok69/kaggle | bf61ba510c83fd55262939ac6c5a62b7c855ba53 | [
"MIT"
] | null | null | null | Deep_learning/exercise-transfer-learning.ipynb | olonok69/kaggle | bf61ba510c83fd55262939ac6c5a62b7c855ba53 | [
"MIT"
] | null | null | null | Deep_learning/exercise-transfer-learning.ipynb | olonok69/kaggle | bf61ba510c83fd55262939ac6c5a62b7c855ba53 | [
"MIT"
] | null | null | null | 36.950262 | 376 | 0.596599 | [
[
[
"**[Deep Learning Course Home Page](https://www.kaggle.com/learn/deep-learning)**\n\n---\n",
"_____no_output_____"
],
[
"# Exercise Introduction\n\nThe cameraman who shot our deep learning videos mentioned a problem that we can solve with deep learning. \n\nHe offers a service that scans photographs to store them digitally. He uses a machine that quickly scans many photos. But depending on the orientation of the original photo, many images are digitized sideways. He fixes these manually, looking at each photo to determine which ones to rotate.\n\nIn this exercise, you will build a model that distinguishes which photos are sideways and which are upright, so an app could automatically rotate each image if necessary.\n\nIf you were going to sell this service commercially, you might use a large dataset to train the model. But you'll have great success with even a small dataset. You'll work with a small dataset of dog pictures, half of which are rotated sideways.\n\nSpecifying and compiling the model look the same as in the example you've seen. But you'll need to make some changes to fit the model.\n\n**Run the following cell to set up automatic feedback.**",
"_____no_output_____"
]
],
[
[
"# Set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.deep_learning.exercise_4 import *\nprint(\"Setup Complete\")",
"Setup Complete\n"
]
],
[
[
"# 1) Specify the Model\n\nSince this is your first time, we'll provide some starter code for you to modify. You will probably copy and modify code the first few times you work on your own projects.\n\nThere are some important parts left blank in the following code.\n\nFill in the blanks (marked with `____`) and run the cell\n",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.applications import ResNet50\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Flatten, GlobalAveragePooling2D\n\nnum_classes = 2\nresnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'\n\nmy_new_model = Sequential()\nmy_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))\nmy_new_model.add(Dense(num_classes, activation='softmax'))\n\n# Indicate whether the first layer should be trained/changed or not.\nmy_new_model.layers[0].trainable = False\n\nstep_1.check()",
"_____no_output_____"
],
[
"# step_1.hint()\n# step_1.solution()",
"_____no_output_____"
]
],
[
[
"# 2) Compile the Model\n\nYou now compile the model with the following line. Run this cell.",
"_____no_output_____"
]
],
[
[
"my_new_model.compile(optimizer='sgd', \n loss='categorical_crossentropy', \n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"That ran nearly instantaneously. Deep learning models have a reputation for being computationally demanding. Why did that run so quickly?\n\nAfter thinking about this, check your answer by uncommenting the cell below.",
"_____no_output_____"
]
],
[
[
"# step_2.solution()",
"_____no_output_____"
]
],
[
[
"# 3) Review the Compile Step\nYou provided three arguments in the compile step. \n- optimizer\n- loss\n- metrics\n\nWhich arguments could affect the accuracy of the predictions that come out of the model? After you have your answer, run the cell below to see the solution.",
"_____no_output_____"
]
],
[
[
" step_3.solution()",
"_____no_output_____"
]
],
[
[
"# 4) Fit Model\n\n**Your training data is in the directory `../input/dogs-gone-sideways/images/train`. The validation data is in `../input/dogs-gone-sideways/images/val`**. Use that information when setting up `train_generator` and `validation_generator`.\n\nYou have 220 images of training data and 217 of validation data. For the training generator, we set a batch size of 10. Figure out the appropriate value of `steps_per_epoch` in your `fit_generator` call.\n\nFill in all the blanks (again marked as `____`). Then run the cell of code. Watch as your model trains the weights and the accuracy improves.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.applications.resnet50 import preprocess_input\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\nimage_size = 224\ndata_generator = ImageDataGenerator(preprocess_input)\n\ntrain_generator = data_generator.flow_from_directory(\n directory= \"../input/dogs-gone-sideways/images/train\",\n target_size=(image_size, image_size),\n batch_size=10,\n class_mode='categorical')\n\nvalidation_generator = data_generator.flow_from_directory(\n directory=\"../input/dogs-gone-sideways/images/val\",\n target_size=(image_size, image_size),\n class_mode='categorical')\n\n# fit_stats below saves some statistics describing how model fitting went\n# the key role of the following line is how it changes my_new_model by fitting to data\nfit_stats = my_new_model.fit_generator(train_generator,\n steps_per_epoch=22,\n validation_data=validation_generator,\n validation_steps=1)\n\nstep_4.check()",
"Found 220 images belonging to 2 classes.\nFound 217 images belonging to 2 classes.\n"
],
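[
"# fit_stats is a Keras History object; fit_stats.history maps each tracked metric to its per-epoch values.\n# The validation-accuracy key is 'val_acc' in older Keras/TensorFlow releases and 'val_accuracy' in newer ones,\n# so print everything that was tracked rather than guessing the exact key.\nfor metric, values in fit_stats.history.items():\n    print(metric, values)",
"_____no_output_____"
],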
[
"# step_4.solution()",
"_____no_output_____"
]
],
[
[
"\nCan you tell from the results what fraction of the time your model was correct in the validation data? \n\nIn the next step, we'll see if we can improve on that.\n\n# Keep Going\nMove on to learn about **[data augmentation](https://www.kaggle.com/dansbecker/data-augmentation/)**. It is a clever and easy way to improve your models. Then you'll apply data augmentation to this automatic image rotation problem.\n",
"_____no_output_____"
],
[
"---\n**[Deep Learning Course Home Page](https://www.kaggle.com/learn/deep-learning)**\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ecbc9d4776714bdec17fab4fda66254409e23708 | 113,037 | ipynb | Jupyter Notebook | machine-learning-ex1/ex1.ipynb | Ji-Ping-Dai/ML_exercise_Andrew_Ng_Python | 3ac35c90170985ec15c39541885951902104ac59 | [
"MIT"
] | null | null | null | machine-learning-ex1/ex1.ipynb | Ji-Ping-Dai/ML_exercise_Andrew_Ng_Python | 3ac35c90170985ec15c39541885951902104ac59 | [
"MIT"
] | null | null | null | machine-learning-ex1/ex1.ipynb | Ji-Ping-Dai/ML_exercise_Andrew_Ng_Python | 3ac35c90170985ec15c39541885951902104ac59 | [
"MIT"
] | null | null | null | 339.45045 | 38,940 | 0.932642 | [
[
[
"This is the firsr exercise of Andrew Ng's [Machine Learning](https://www.coursera.org/learn/machine-learning/home/welcome) written with Python3",
"_____no_output_____"
],
[
"## 1. Linear regression",
"_____no_output_____"
]
],
[
[
"#import modules\nimport numpy as np\nimport matplotlib.pyplot as plt \nimport func\n%matplotlib inline\nplt.rc('text',usetex=True)\nplt.rc('font',family='Times New Roman')\n#read the data\ndata = np.loadtxt('data/ex1data1.txt',delimiter=',')\nX = data[:,0].reshape([-1,1]) #input #reshape!!!!!\ny = data[:,1].reshape([-1,1]) #output\n#plot the data\nfig = plt.figure(figsize=[7,5])\nax = plt.axes()\nfunc.plotdata(X,y,ax)",
"_____no_output_____"
],
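[
"# NOTE: `func` is a local helper module (func.py) that accompanies this exercise and is not shown in the notebook.\n# The definitions below are only an assumed sketch of the two most-used helpers, written from the standard\n# course formulas; the actual module (which also provides plotdata and featureNormalize) may differ in detail.\nimport numpy as np\n\ndef costfunc_sketch(X, y, theta):\n    # squared-error cost: J(theta) = 1/(2m) * sum((X @ theta - y)**2)\n    m = len(y)\n    return np.sum((X @ theta - y) ** 2) / (2 * m)\n\ndef gradientDescent_sketch(X, y, theta, iterations, alpha):\n    # batch gradient descent; returns the fitted theta and the cost recorded at each iteration\n    m = len(y)\n    costs = np.zeros(iterations)\n    for i in range(iterations):\n        theta = theta - (alpha / m) * (X.T @ (X @ theta - y))\n        costs[i] = costfunc_sketch(X, y, theta)\n    return theta, costs",
"_____no_output_____"
],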
[
"# Add a column of ones to X (bias unit)\nXb = np.concatenate([np.ones([len(X),1]),X],axis=1)\n# initialize fitting parameters\ntheta = np.zeros([X.shape[1]+1,1])\n# Some gradient descent settings\niterations = 1500\nalpha = 0.01\n# Test cost function\nJ = func.costfunc(Xb,y,theta)\nprint('With theta = [0 ; 0],Cost computed = {0}'.format(J))\nprint('Expected cost value (approx) 32.07')",
"With theta = [0 ; 0],Cost computed = 32.072733877455676\nExpected cost value (approx) 32.07\n"
],
[
"# run gradient descent\ntheta_result, cost_iter = func.gradientDescent(Xb,y,theta,iterations,alpha)\nprint('Theta found by gradient descent is {0}'.format(theta_result.reshape(1,-1)))\nprint('Expected theta values (approx): [-3.6303, 1.1664]')\n#plot the cost function with iterations\nplt.plot(np.arange(1,iterations+1),cost_iter,color='g',ls='--')\nplt.xticks(fontsize=15)\nplt.yticks(fontsize=15)\nplt.ylabel('Cost Function',size=15)\nplt.xlabel('Iterations',size=15);",
"Theta found by gradient descent is [[-3.63029144 1.16636235]]\nExpected theta values (approx): [-3.6303, 1.1664]\n"
],
[
"#plot the linear fit\nfig = plt.figure(figsize=[7,5])\nax = plt.axes()\nfunc.plotdata(X,y,ax)\nax.plot(Xb[:,1],theta_result[0]+theta_result[1]*Xb[:,1],color='k',label='Linear regression')\nax.legend(fontsize=15);",
"_____no_output_____"
],
[
"# Predict valuse\npredict1 = np.array([1,7])@theta_result\nprint('For population = 35,000, we predict a profit of ${0:.2f}'.format(predict1[0]*10000))",
"For population = 35,000, we predict a profit of $45342.45\n"
],
[
"# Visualizing J(theta_0, theta_1)\ntheta_0 = np.linspace(-10,10,100)\ntheta_1 = np.linspace(-1,4,100)\nT1, T2 = np.meshgrid(theta_0, theta_1)\ncost2D = np.zeros([100,100])\nfor i in range(100):\n for j in range(100):\n Ta = np.array([[theta_0[i]],[theta_1[j]]])\n cost2D[i,j] = func.costfunc(Xb, y, Ta)\ncon=plt.contour(T1, T2, cost2D.T, np.logspace(-1,3,10)) #Because of the way meshgrids work\n#an expample theta_0*theta1 T1*T2 \nplt.clabel(con,inline=True)\nplt.scatter(theta_0[31],theta_1[43],marker='*',c='r', s=100)\nplt.xlabel(r'$\\theta_0$',size=15)\nplt.ylabel(r'$\\theta_1$',size=15);",
"_____no_output_____"
]
],
[
[
"## 2. Linear regression with multiple variables",
"_____no_output_____"
]
],
[
[
"#Load Data\ndata = np.loadtxt('data/ex1data2.txt', delimiter=',')\nX = data[:,:2]\ny = data[:,2].reshape([-1,1])\n# Scale features and set them to zero mean\nX, mu, sigma = func.featureNormalize(X)\n# Add a column of ones to X (bias unit)\nXb = np.concatenate([np.ones([len(y),1]),X],axis=1)",
"_____no_output_____"
],
[
"# same as one variable, because we use vectorization calculation\nalpha = 0.1\niterations = 50\ntheta = np.zeros([X.shape[1]+1,1])\ntheta_result, cost_iter = func.gradientDescent(Xb,y,theta,iterations,alpha)\nprint('Theta found by gradient descent is {0}'.format(theta_result.reshape(1,-1)))\n#plot the cost function with iterations\nplt.plot(np.arange(1,iterations+1),cost_iter,color='g',ls='--')\nplt.xticks(fontsize=15)\nplt.yticks(fontsize=15)\nplt.ylabel('Cost Function',size=15)\nplt.xlabel('Iterations',size=15);",
"Theta found by gradient descent is [[338658.2492493 103322.82942954 -474.74249522]]\n"
],
[
"# Estimate the price of a 1650 sq-ft, 3 br house\nprice=np.array([1, (1650-mu[0])/sigma[0], (3-mu[1])/sigma[1]])@theta_result\nprint('Predicted price of a 1650 sq-ft, 3 br house (using gradient descent) is ${0:.2f}'.format(price[0]))",
"Predicted price of a 1650 sq-ft, 3 br house (using gradient descent) is $292679.07\n"
],
[
"#Normal Equations\ndata = np.loadtxt('data/ex1data2.txt', delimiter=',')\nX = data[:,:2]\ny = data[:,2].reshape([-1,1])\nXb = np.concatenate([np.ones([len(y),1]),X],axis=1)\ntheta_result = np.linalg.inv(Xb.T@Xb)@Xb.T@y\nprint('Theta found by gradient descent is {0}'.format(theta_result.reshape(1,-1)))\nprice=np.array([1, 1650, 3])@theta_result\nprint('Predicted price of a 1650 sq-ft, 3 br house (using normal equation) is ${0:.2f}'.format(price[0]))",
"Theta found by gradient descent is [[89597.9095428 139.21067402 -8738.01911233]]\nPredicted price of a 1650 sq-ft, 3 br house (using normal equation) is $293081.46\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecbc9f8b873d87ab8bf06174c53a0ece0a0e6adb | 6,681 | ipynb | Jupyter Notebook | test_scrapper/brasil_finep.ipynb | DubanTorres/Analisis-Scrapping-Convocatorias-Clacso | 0a4f397a3e5275973bb627f2f85eac76fb53030a | [
"BSD-3-Clause"
] | null | null | null | test_scrapper/brasil_finep.ipynb | DubanTorres/Analisis-Scrapping-Convocatorias-Clacso | 0a4f397a3e5275973bb627f2f85eac76fb53030a | [
"BSD-3-Clause"
] | null | null | null | test_scrapper/brasil_finep.ipynb | DubanTorres/Analisis-Scrapping-Convocatorias-Clacso | 0a4f397a3e5275973bb627f2f85eac76fb53030a | [
"BSD-3-Clause"
] | 1 | 2021-10-04T14:28:40.000Z | 2021-10-04T14:28:40.000Z | 31.219626 | 134 | 0.504116 | [
[
[
"import pandas as pd\nimport numpy as np\nimport requests\nfrom bs4 import BeautifulSoup\nfrom lxml import html\nimport scrapy\nfrom time import sleep\nimport urllib3\nimport json\nfrom selenium import webdriver\nimport random",
"_____no_output_____"
],
[
"def parser(link):\n\n encabezados = {\n 'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36'\n }\n\n resp = requests.get(link, headers=encabezados, verify=False)\n resp = resp.text\n\n #soup = get_Soup('https://minciencias.gov.co/convocatorias/todas')\n parser = html.fromstring(resp)\n urllib3.disable_warnings()\n\n return parser",
"_____no_output_____"
],
[
"def finep():\n\n finep = pd.DataFrame()\n\n titulos_proyectos = []\n fechas_publicación_proyectos = []\n fechas_cierre_proyectos = []\n fuentes_recurso_proyectos = []\n publicos_objetivo_proyecto = []\n temas_proyectos = []\n links_proyectos = []\n descripciones = []\n publicos = []\n pdfs_proyectos = []\n\n ###########\n base = 'http://www.finep.gov.br/chamadas-publicas?situacao=aberta'\n\n parser_pagina = parser(base)\n pag_max = parser_pagina.xpath('//p[@class=\"counter pull-right\"]/text()')[0].strip()[-1]\n pag_max\n\n #########\n\n n = 0\n\n while n <= int(pag_max)-1:\n\n if n == 0:\n page = 'http://www.finep.gov.br/chamadas-publicas?situacao=aberta'\n else:\n page = 'http://www.finep.gov.br/chamadas-publicas?situacao=aberta&start=' + str(n) + '0'\n\n\n parser_pagina = parser(page)\n\n titulos = parser_pagina.xpath('//div[@id=\"conteudoChamada\"]//a/text()')\n\n links = parser_pagina.xpath('//div[@id=\"conteudoChamada\"]//a/@href')\n links = ['http://www.finep.gov.br' + i for i in links]\n\n fechas_publicacion = parser_pagina.xpath('//div[@id=\"conteudoChamada\"]//div[@class=\"data_pub div\"]/span/text()')\n \n fechas_cierre = parser_pagina.xpath('//div[@id=\"conteudoChamada\"]//div[@class=\"prazo div\"]/span/text()')\n\n fuentes_recurso = parser_pagina.xpath('//div[@id=\"conteudoChamada\"]//div[@class=\"fonte div\"]/span/text()')\n fuentes_recurso = [i.strip() for i in fuentes_recurso]\n\n\n temas = parser_pagina.xpath('//div[@id=\"conteudoChamada\"]//div[@class=\"tema div\"]/span/text()')\n temas = [i.strip() for i in temas]\n\n titulos_proyectos += titulos\n fechas_publicación_proyectos+=fechas_publicacion\n fechas_cierre_proyectos+=fechas_cierre\n fuentes_recurso_proyectos+=fuentes_recurso\n temas_proyectos+=temas\n links_proyectos+=links\n\n n+=1\n\n ########################\n\n for link in links_proyectos:\n\n parser_proyecto = parser(link)\n texts = parser_proyecto.xpath('//div[@class=\"group desc\"]/div[@class=\"text\"]//text()')\n\n # Descripción\n texts = parser_proyecto.xpath('//div[@class=\"group desc\"]/div[@class=\"text\"]//text()')\n\n descripcion = ''\n for text in texts:\n text = text.strip()\n descripcion = descripcion + text + ' '\n descripciones.append(descripcion)\n # Público objetivo\n \n publico = ''\n try:\n publico = parser_proyecto.xpath('//span[@class=\"tag\"]/text()')\n publicos.append(publico[0])\n\n except:\n publico = ''\n publicos.append(publico)\n\n # Estado\n pdfs = parser_proyecto.xpath('//tbody//a//@href')\n pdfs = ['http://www.finep.gov.br' + i for i in pdfs]\n\n pdfs_proyecto = ''\n for pdf in pdfs:\n pdfs_proyecto = pdfs_proyecto + pdf + ', '\n\n pdfs_proyecto.strip(', ')\n\n pdfs_proyectos.append(pdfs_proyecto)\n\n finep['Título'] = titulos_proyectos\n finep['Descripción'] = descripciones\n finep['Fecha públicación'] = fechas_publicación_proyectos\n finep['Fecha cierre'] = fechas_cierre_proyectos\n finep['Público Objetivo'] = publicos\n finep['Fuente Recurso'] = fuentes_recurso_proyectos\n finep['Tema'] = temas_proyectos\n finep['Link'] = links_proyectos\n finep['Pdfs'] = pdfs_proyectos\n \n\n return finep",
"_____no_output_____"
],
[
"finepT = finep()\nfinepT.to_excel('finep_brasil.xlsx')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ecbca0948fb20887ede116b41ba9f84610874db3 | 198,297 | ipynb | Jupyter Notebook | 2.6 Data Cleaning Project Walkthrough/Star Wars Survey.ipynb | orspain/Dataquest | 8d6817e89afd7e88397a76c2a638aa5aca77d99f | [
"Apache-2.0"
] | 9 | 2020-04-16T17:23:53.000Z | 2021-09-25T13:59:10.000Z | 2.6 Data Cleaning Project Walkthrough/Star Wars Survey.ipynb | orspain/Dataquest | 8d6817e89afd7e88397a76c2a638aa5aca77d99f | [
"Apache-2.0"
] | null | null | null | 2.6 Data Cleaning Project Walkthrough/Star Wars Survey.ipynb | orspain/Dataquest | 8d6817e89afd7e88397a76c2a638aa5aca77d99f | [
"Apache-2.0"
] | 16 | 2019-12-15T19:52:27.000Z | 2022-02-28T19:22:15.000Z | 78.100433 | 18,652 | 0.676294 | [
[
[
"# Introduction\n\nWhile waiting for Star Wars: The Force Awakens to come out, the team at FiveThirtyEight became interested in answering some questions about Star Wars fans. In particular, they wondered: does the rest of America realize that “The Empire Strikes Back” is clearly the best of the bunch?\n\nThe team needed to collect data addressing this question. To do this, they surveyed Star Wars fans using the online tool SurveyMonkey. They received 835 total responses, which you download from their GitHub repository.\n\nFor this project, we'll be cleaning and exploring the data set in Jupyter notebook. ",
"_____no_output_____"
],
[
"## The Data \n\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# import seaborn as sns\n# import cartopy.crs as ccrs\n# import cartopy.feature as cf \n# import re\n\n# This will need to be done in the future so \n# get accustomed to using now\nfrom pandas.plotting import register_matplotlib_converters\nregister_matplotlib_converters()\n\n%matplotlib inline",
"_____no_output_____"
],
[
"star_wars = pd.read_csv(\"star_wars.csv\", encoding=\"ISO-8859-1\") # Need to specify encoding",
"_____no_output_____"
],
[
"star_wars.head(10)",
"_____no_output_____"
]
],
[
[
"### Observations\n\nThe data has several columns, including:\n\n - `RespondentID` - An anonymized ID for the respondent (person taking the survey)\n - `Gender` - The respondent's gender\n - `Age` - The respondent's age\n - `Household Income` - The respondent's income\n - `Education` - The respondent's education level\n - `Location (Census Region)` - The respondent's location\n - `Have you seen any of the 6 films in the Star Wars franchise?` - Has a Yes or No response\n - `Do you consider yourself to be a fan of the Star Wars film franchise?` - Has a Yes or No response\n\nThere are several other columns containing answers to questions about the Star Wars movies. For some questions, the respondent had to check one or more boxes. This type of data is difficult to represent in columnar format. As a result, this data set needs a lot of cleaning.\n\nFirst, we'll need to remove the invalid rows. For example, `RespondentID` is supposed to be a unique ID for each respondent, but it's blank in some rows. We'll remove any rows with an invalid `RespondentID`.",
"_____no_output_____"
]
],
[
[
"star_wars = star_wars[pd.notnull(star_wars[\"RespondentID\"])]",
"_____no_output_____"
]
],
[
[
"## Cleaning and Mapping Yes/No Columns\n\nWe'll now handle the next two columns, which are:\n\n- Have you seen any of the 6 films in the Star Wars franchise?\n- Do you consider yourself to be a fan of the Star Wars film franchise?\n\nBoth represent Yes/No questions. They can also be `NaN` where a respondent chooses not to answer a question. We can use the `pandas.Series.value_counts()` method on a series to see all of the unique values in a column, along with the total number of times each value appears.\n\nBoth columns are currently string types, because the main values they contain are Yes and No. We can make the data a bit easier to analyze down the road by converting each column to a Boolean having only the values `True`, `False`, and `NaN`. Booleans are easier to work with because we can select the rows that are True or False without having to do a string comparison.",
"_____no_output_____"
]
],
[
[
"star_wars.columns",
"_____no_output_____"
],
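[
"# Peek at the raw answer distribution before mapping to booleans;\n# value_counts(dropna=False) also shows how many respondents skipped the question.\nstar_wars[\"Have you seen any of the 6 films in the Star Wars franchise?\"].value_counts(dropna=False)",
"_____no_output_____"
],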
[
"yes_no = {\"Yes\": True, \"No\": False}\n\nfor col in [\n \"Have you seen any of the 6 films in the Star Wars franchise?\",\n \"Do you consider yourself to be a fan of the Star Wars film franchise?\"\n ]:\n star_wars[col] = star_wars[col].map(yes_no)\n\nstar_wars.head()",
"_____no_output_____"
]
],
[
[
"### Rename the Unnamed Columns",
"_____no_output_____"
]
],
[
[
"star_wars = star_wars.rename(columns={\n \"Which of the following Star Wars films have you seen? Please select all that apply.\": \"seen_1\",\n \"Unnamed: 4\": \"seen_2\",\n \"Unnamed: 5\": \"seen_3\",\n \"Unnamed: 6\": \"seen_4\",\n \"Unnamed: 7\": \"seen_5\",\n \"Unnamed: 8\": \"seen_6\"\n })\n\nstar_wars.head()",
"_____no_output_____"
]
],
[
[
"### Cleaning and Mapping Checkbox Columns\n\nThe next six columns represent a single checkbox question. The respondent checked off a series of boxes in response to the question, `Which of the following Star Wars films have you seen? Please select all that apply`.\n\nThe columns for this question are:\n\n - `Which of the following Star Wars films have you seen? Please select all that apply`. - Whether or not the respondent saw `Star Wars: Episode I The Phantom Menace`.\n - `Unnamed: 4` - Whether or not the respondent saw `Star Wars: Episode II Attack of the Clones`.\n - `Unnamed: 5` - Whether or not the respondent saw `Star Wars: Episode III Revenge of the Sith`.\n - `Unnamed: 6` - Whether or not the respondent saw `Star Wars: Episode IV A New Hope`.\n - `Unnamed: 7` - Whether or not the respondent saw `Star Wars: Episode V The Empire Strikes Back`.\n - `Unnamed: 8` - Whether or not the respondent saw `Star Wars: Episode VI Return of the Jedi`.\n\nFor each of these columns, if the value in a cell is the name of the movie, that means the respondent saw the movie. If the value is `NaN`, the respondent either didn't answer or didn't see the movie. We'll assume that they didn't see the movie.\n\nWe'll need to convert each of these columns to a Boolean, then rename the column something more intuitive. We can convert the values the same way we did earlier, except that we'll need to include the movie title and `NaN` in the mapping dictionary.",
"_____no_output_____"
]
],
[
[
"movie_mapping = {\n \"Star Wars: Episode I The Phantom Menace\": True,\n np.nan: False,\n \"Star Wars: Episode II Attack of the Clones\": True,\n \"Star Wars: Episode III Revenge of the Sith\": True,\n \"Star Wars: Episode IV A New Hope\": True,\n \"Star Wars: Episode V The Empire Strikes Back\": True,\n \"Star Wars: Episode VI Return of the Jedi\": True\n}\n\nfor col in star_wars.columns[3:9]:\n star_wars[col] = star_wars[col].map(movie_mapping)",
"_____no_output_____"
]
],
[
[
"### Cleaning the Ranking Columns\n\nThe next six columns ask the respondent to rank the Star Wars movies in order of least favorite to most favorite. `1` means the film was the most favorite, and `6` means it was the least favorite. Each of the following columns can contain the value `1`, `2`, `3`, `4`, `5`, `6`, or `NaN`:\n\n - `Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film`. - How much the respondent liked `Star Wars: Episode I The Phantom Menace`\n - `Unnamed: 10` - How much the respondent liked `Star Wars: Episode II Attack of the Clones`\n - `Unnamed: 11` - How much the respondent liked `Star Wars: Episode III Revenge of the Sith`\n - `Unnamed: 12` - How much the respondent liked `Star Wars: Episode IV A New Hope`\n - `Unnamed: 13` - How much the respondent liked `Star Wars: Episode V The Empire Strikes Back`\n - `Unnamed: 14` - How much the respondent liked `Star Wars: Episode VI Return of the Jedi`\n\nFortunately, these columns don't require a lot of cleanup. We'll need to convert each column to a numeric type, though, then rename the columns so that we can tell what they represent more easily.\n\nWe can do the numeric conversion with the `pandas.DataFrame.astype()` method.",
"_____no_output_____"
]
],
[
[
"star_wars[star_wars.columns[9:15]] = star_wars[star_wars.columns[9:15]].astype(float)",
"_____no_output_____"
],
[
"star_wars = star_wars.rename(columns={\n \"Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film.\": \"ranking_1\",\n \"Unnamed: 10\": \"ranking_2\",\n \"Unnamed: 11\": \"ranking_3\",\n \"Unnamed: 12\": \"ranking_4\",\n \"Unnamed: 13\": \"ranking_5\",\n \"Unnamed: 14\": \"ranking_6\"\n })\n\nstar_wars.head()",
"_____no_output_____"
]
],
[
[
"## Finding the highest-ranked Movie\n\nNow that we've cleaned up the ranking columns, we can find the highest-ranked movie more quickly. To do this, take the mean of each of the ranking columns using the `pandas.DataFrame.mean()` method.\n",
"_____no_output_____"
]
],
[
[
"star_wars[star_wars.columns[9:15]].mean()\n",
"_____no_output_____"
],
[
"plt.bar(range(6), star_wars[star_wars.columns[9:15]].mean())",
"_____no_output_____"
]
],
[
[
"### Findings\nSo far, we've cleaned up the data, renamed several columns, and computed the average ranking of each movie. As I suspected, it looks like the \"original\" movies are rated much more highly than the newer ones.",
"_____no_output_____"
],
[
"## View Counts\n\nLet's see how many people in our survey saw each movie:",
"_____no_output_____"
]
],
[
[
"star_wars[star_wars.columns[3:9]].sum()",
"_____no_output_____"
],
[
"plt.bar(range(6), star_wars[star_wars.columns[3:9]].sum())\n",
"_____no_output_____"
]
],
[
[
"### Findings\n\nIt appears that the original movies were seen by more respondents than the newer movies. This reinforces what we saw in the rankings, where the earlier movies seem to be more popular.",
"_____no_output_____"
],
[
"## Gender differences\n\nWe know which movies the survey population as a whole has ranked the highest. Now let's examine how certain segments of the survey population responded. There are several columns that segment our data into `Male` or `Female`.\n\nWe'll repeat the previous analysis but based on gender below:",
"_____no_output_____"
]
],
[
[
"males = star_wars[star_wars[\"Gender\"] == \"Male\"]\nfemales = star_wars[star_wars[\"Gender\"] == \"Female\"]",
"_____no_output_____"
],
[
"plt.bar(range(6), males[males.columns[9:15]].mean())\nplt.show()\n\nplt.bar(range(6), females[females.columns[9:15]].mean())\nplt.show()",
"_____no_output_____"
],
[
"plt.bar(range(6), males[males.columns[3:9]].sum())\nplt.show()\n\nplt.bar(range(6), females[females.columns[3:9]].sum())\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Findings\n\nInterestingly, more males watches episodes 1-3, but males liked them far less than females did. For the total number of responses we see a high count corresponding to `The Empire Strikes Back`.",
"_____no_output_____"
],
[
"## Education Data\n\nWe'll now take a look at the education column to see if we can determine any trends:\n",
"_____no_output_____"
]
],
[
[
"edu_seen = star_wars.groupby(\"Education\").sum()\nedu_seen[edu_seen.columns[3:8]]",
"_____no_output_____"
],
[
"edu_seen[edu_seen.columns[3:8]].plot(kind='bar')",
"_____no_output_____"
]
],
[
[
"### Findings:\n\nAmong the survey respondents, the highest totals are in the group with `Some college or Associate degree`, followed closely by `Bachelor degree` and `Graduate degree`. We see a very strong response through the majority of survey responses for `seen_5` which corresponds to `The Empire Strikes Back`, regardless of education level. ",
"_____no_output_____"
],
[
"## Location Data\n\nWe'll now do the same thing for `Location (Census Region)`, to analyze the trends there.",
"_____no_output_____"
]
],
[
[
"loc = star_wars.groupby(\"Location (Census Region)\").sum()\nloc[loc.columns[3:8]]",
"_____no_output_____"
],
[
"loc[loc.columns[3:8]].plot(kind = 'bar')",
"_____no_output_____"
]
],
[
[
"### Findings:\n\nThe location with the least viewers in our survey pool is `East South Central`. `Pacific` has the highest location. We can also see a very strong response across locations for `seen_5` which corresponds to `The Empire Strikes Back`. ",
"_____no_output_____"
],
[
"## Han Shot First\n\nWe'll continue our analysis with the `Which character shot first?` column to observe the trends there:",
"_____no_output_____"
]
],
[
[
"which = star_wars.groupby(\"Which character shot first?\").sum()\nwhich[which.columns[3:8]]",
"_____no_output_____"
],
[
"which[which.columns[3:8]].plot(kind = 'bar')",
"_____no_output_____"
]
],
[
[
"### Findings\n\nAmong our survey respondents, the character `Han` was selected as `Which character shot first?` by the majority. There are also a large number who responded, `I don't understand this question`. ",
"_____no_output_____"
],
[
"## Who is The Most Liked and Most Disliked Character? (The Answer will Shock You!)\n\nWe'll now clean up columns 15 to 29, which contain data on the characters respondents view favorably and unfavorably. We'll rename the columns so that they make more sense.\n\nThe survey has a list of characters, and for each of those the responses may range from `Very favorably`, `Somewhat favorably`, `Neither favorably nor unfavorably (neutral)`, `Somewhat unfavorably`, `Very unfavorably` or `Unfamiliar (N\\A)`. ",
"_____no_output_____"
]
],
[
[
"cols = star_wars[star_wars.columns[15:29]].columns\n\nstar_wars = star_wars.rename(columns={\n cols[0]:\"Han Solo\",\n cols[1]:\"Luke Skywalker\",\n cols[2]:\"Princess Leia\",\n cols[3]:\"Anakin Skywalker\",\n cols[4]:\"Obi Wan Kenobi\",\n cols[5]:\"Emperor Palpatine\",\n cols[6]:\"Darth Vader\",\n cols[7]:\"Lando Calrissian\",\n cols[8]:\"Boba Fett\",\n cols[9]:\"C-3P0\",\n cols[10]:\"R2D2\",\n cols[11]:\"Jar Jar Binks\",\n cols[12]:\"Padme Amidala\",\n cols[13]:\"Yoda\"\n})\n",
"_____no_output_____"
]
],
[
[
"#### `Who is The Most Liked and Most Disliked Character?` We'll try to find out the answer:",
"_____no_output_____"
]
],
[
[
"def fav_char(data):\n results = data.values.tolist().count(\"Very favorably\")\n return results\n\n\npopularity = star_wars[star_wars.columns[15:29]].apply(fav_char)\nprint(popularity)\npopularity.sort_values().plot.bar()\nplt.show()",
"Han Solo 610\nLuke Skywalker 552\nPrincess Leia 547\nAnakin Skywalker 245\nObi Wan Kenobi 591\nEmperor Palpatine 110\nDarth Vader 310\nLando Calrissian 142\nBoba Fett 138\nC-3P0 474\nR2D2 562\nJar Jar Binks 112\nPadme Amidala 168\nYoda 605\ndtype: int64\n"
],
[
"def sw_fav_char(data):\n results = data.values.tolist().count(\"Somewhat favorably\")\n return results\n\n\nsw_popularity = star_wars[star_wars.columns[15:29]].apply(sw_fav_char)\nprint(sw_popularity)",
"Han Solo 151\nLuke Skywalker 219\nPrincess Leia 210\nAnakin Skywalker 269\nObi Wan Kenobi 159\nEmperor Palpatine 143\nDarth Vader 171\nLando Calrissian 223\nBoba Fett 153\nC-3P0 229\nR2D2 185\nJar Jar Binks 130\nPadme Amidala 183\nYoda 144\ndtype: int64\n"
]
],
[
[
"### Findings: Han Solo 1st, Emperor Palatine last.\n\nIn terms of popularity, Han Solo edged out Yoda by only 5 votes as the most liked character. Obi Wan Kenobi follows in 3rd place, with R2D2 in 4th. \n\nThe character with the least `Very favorably` votes is Emperor Palatine - however Jar Jar Binks is only 2 votes behind. \n\nAlso it's interesting to see Darth Vader right in the middle.\n\nLet's see if the same holds out in the lower end:",
"_____no_output_____"
]
],
[
[
"def least_char(data):\n results = data.values.tolist().count(\"Very unfavorably\")\n return results\n\n\nleast_popular = star_wars[star_wars.columns[15:29]].apply(least_char)\nprint(least_popular)",
"Han Solo 1\nLuke Skywalker 3\nPrincess Leia 6\nAnakin Skywalker 39\nObi Wan Kenobi 7\nEmperor Palpatine 124\nDarth Vader 149\nLando Calrissian 8\nBoba Fett 45\nC-3P0 7\nR2D2 6\nJar Jar Binks 204\nPadme Amidala 34\nYoda 8\ndtype: int64\n"
],
[
"def not_so_fav_char(data):\n results = data.values.tolist().count(\"Somewhat unfavorably\")\n return results\n\n\nunpopularity = star_wars[star_wars.columns[15:29]].apply(not_so_fav_char)\nprint(unpopularity)",
"Han Solo 8\nLuke Skywalker 13\nPrincess Leia 12\nAnakin Skywalker 83\nObi Wan Kenobi 8\nEmperor Palpatine 68\nDarth Vader 102\nLando Calrissian 63\nBoba Fett 96\nC-3P0 23\nR2D2 10\nJar Jar Binks 102\nPadme Amidala 58\nYoda 8\ndtype: int64\n"
]
],
[
[
"### Findings:\n\nNomination for the least popular character, Jar Jar Binks (306 aggregate votes in `Somewhat unfavorably` and `Very unfavorably`). \n\nDarth Vader ties with Jar Jar Binks in the `Somewhat unfavorably` category, strengthening our hypothesis that he is the most controversial character in terms of popularity (split between likes and dislikes).\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecbcb3ce9f6f579b9ef1a3d863b7d6c3b61eda94 | 137,633 | ipynb | Jupyter Notebook | nb/fred-oil-brent-wti.ipynb | maidenlane/five | bf14dd37b0f14d6998893c2b0478275a0fc55a82 | [
"BSD-3-Clause"
] | 1 | 2020-04-24T05:29:26.000Z | 2020-04-24T05:29:26.000Z | fecon235-master/nb/fred-oil-brent-wti.ipynb | maidenlane/five | bf14dd37b0f14d6998893c2b0478275a0fc55a82 | [
"BSD-3-Clause"
] | null | null | null | fecon235-master/nb/fred-oil-brent-wti.ipynb | maidenlane/five | bf14dd37b0f14d6998893c2b0478275a0fc55a82 | [
"BSD-3-Clause"
] | 1 | 2020-04-24T05:34:06.000Z | 2020-04-24T05:34:06.000Z | 168.255501 | 25,972 | 0.874441 | [
[
[
"## Crude Oil :: Brent vs. West Texas Intermediate (WTI)\n\nWe examine the history of crude oil prices, and their spreads.\nA Boltzmann portfolio is computed for *optimal* financial positions.\n\nDeflated prices give additional insight, along with some of the\nstatistical tools useful in financial economics.\n\n***Although WTI is more desirable than Brent from a petrochemical\nperspective, that preference is reversed when the metrics\nare financial.***\n\nUnless otherwise noted, the price per barrel is in U.S. dollars.\nThe source of the data is the U.S. Department of Energy via FRED.\nWe will use Federal Reserve USD index units to compute oil prices in non-dollar terms.",
"_____no_output_____"
],
[
"## Petrochemical background\n\nBefore exploring our financial metrics, let's get familiar\nwith the physical aspects.\n\n> **Crude oil is arguably the most crucial commodity in the world today.**\nAn important issue is the different benchmarks for crude oil prices. \n\n> The American Petroleum Institute ***API gravity*** is a measure that is\nused to compare a petroleum liquid's density to water.\nThis scale generally falls between 10 and 70, with \"light\" crude oil\ntypically having an API gravity on the higher side of the range, while\n\"heavy\" oil has a reading that falls on the lower end of the range.\n\n> The sulfur content of petroleum must also be considered.\n***Sulfur content of 0.50% is a key benchmark.***\nWhen oil has a total sulfur level greater than the benchmark, it is considered \"sour.\"\nSulfur content less than the benchmark indicates that an oil is \"sweet.\"\nSour crude oil is more prevalent than its sweet counterpart and comes from\noil sands in Canada, the Gulf of Mexico, some South American nations,\nas well as most of the Middle East.\nSweet crude is typically produced in the central United States,\nthe North Sea region of Europe, and much of Africa and the Asia Pacific region.\n*End users generally prefer sweet crude as it requires less processing\nto remove impurities than its sour counterpart.*\n\n> [Edited source: Daniela Pylypczak-Wasylyszyn](http://commodityhq.com/education/crude-oil-guide-brent-vs-wti-whats-the-difference)\n\nLight and sweet forms of crude oil are generally valued higher while\nheavy and sour types often trade at a discount.\nThese two key factors distinguish the two major benchmarks for\nworld oil prices: West Texas Intermediate (WTI) and Brent crude oil.\n\n***WTI is generally lighter and sweeter than Brent, but the supply of each\ncan differ considerably over time, thus their price difference will vary.***\n\nWTI is refined mostly in the Midwest and Gulf Coast regions of the United States,\nwhile Brent oil is typically refined in Northwest Europe.\n\n#### Approximate characteristics:\n- **Brent**: API gravity 38.06, sulfur 0.37%\n- **WTI**: API gravity 39.6, sulfur 0.24%\n\n---",
"_____no_output_____"
],
[
"*Dependencies:*\n\n- Repository: https://github.com/rsvp/fecon235\n\n*CHANGE LOG*\n\n 2017-08-08 Fix #2 and introduce Boltzmann portfolio of oils.\n 2015-05-26 Code review and revision.\n 2014-10-09 First version for oil, using 2014-09-28 Template.",
"_____no_output_____"
]
],
[
[
"from fecon235.fecon235 import *",
"_____no_output_____"
],
[
"# PREAMBLE-p6.15.1223d :: Settings and system details\nfrom __future__ import absolute_import, print_function, division\nsystem.specs()\npwd = system.getpwd() # present working directory as variable.\nprint(\" :: $pwd:\", pwd)\n# If a module is modified, automatically reload it:\n%load_ext autoreload\n%autoreload 2\n# Use 0 to disable this feature.\n\n# Notebook DISPLAY options:\n# Represent pandas DataFrames as text; not HTML representation:\nimport pandas as pd\npd.set_option( 'display.notebook_repr_html', False )\nfrom IPython.display import HTML # useful for snippets\n# e.g. HTML('<iframe src=http://en.mobile.wikipedia.org/?useformat=mobile width=700 height=350></iframe>')\nfrom IPython.display import Image \n# e.g. Image(filename='holt-winters-equations.png', embed=True) # url= also works\nfrom IPython.display import YouTubeVideo\n# e.g. YouTubeVideo('1j_HxD4iLn8', start='43', width=600, height=400)\nfrom IPython.core import page\nget_ipython().set_hook('show_in_pager', page.as_hook(page.display_page), 0)\n# Or equivalently in config file: \"InteractiveShell.display_page = True\", \n# which will display results in secondary notebook pager frame in a cell.\n\n# Generate PLOTS inside notebook, \"inline\" generates static png:\n%matplotlib inline \n# \"notebook\" argument allows interactive zoom and resize.",
" :: Python 2.7.13\n :: IPython 5.1.0\n :: jupyter_core 4.2.1\n :: notebook 4.1.0\n :: matplotlib 1.5.1\n :: numpy 1.11.0\n :: scipy 0.17.0\n :: sympy 1.0\n :: pandas 0.19.2\n :: pandas_datareader 0.2.1\n :: Repository: fecon235 v5.17.0722 develop\n :: Timestamp: 2017-08-09T09:31:30Z\n :: $pwd: /media/yaya/virt15h/virt/dbx/Dropbox/ipy/fecon235/nb\n"
],
[
"# Define dictionary for dataframe:\noils4d = { 'Brent' : d4brent, 'WTI' : d4wti }",
"_____no_output_____"
],
[
"# Retrieve data:\noils = groupget( oils4d )",
"_____no_output_____"
]
],
[
[
"## BoW spread := Brent - WTI\n\n***Brent over WTI spread***",
"_____no_output_____"
]
],
[
[
"# Define price variables individually for convenience:\nbrent = oils['Brent']\nwti = oils['WTI']",
"_____no_output_____"
],
[
"# Define BoW: Brent over WTI spread:\nbow = todf( brent - wti, 'BoW' )",
"_____no_output_____"
]
],
[
[
"#### The difference between Brent and WTI is not superficial in the 21st century. Brent over WTI, bow, can represent over 20% of the underlying oil price!",
"_____no_output_____"
]
],
[
[
"plot( bow )",
"_____no_output_____"
]
],
[
[
"## Define \"Oil\" as weighted average price\n\nWhen BoW is non-zero, the price of \"Oil\" is ambiguous,\nthus we define a weighted average price of oil\nbetween Brent and WTI.\n\nNote that the composition of Brent oil benchmark no longer\nreally represents Brent as an location.",
"_____no_output_____"
]
],
[
[
"# Set wtbrent, the primary weight for Oil:\nwtbrent = 0.50\n# 0.50 represents the mean\n# ==================================\nwtwti = 1 - wtbrent\n\n# Weighted average:\noil = todf( (wtbrent * brent) + (wtwti * wti), 'Oil' )",
"_____no_output_____"
],
[
"plot( oil )",
"_____no_output_____"
]
],
[
[
"**The Great Recession plunge from \\$140 to \\$40 is extraordinary.**\n*But so is the post-2014 crash which wiped out legendary traders.*\nThe volatility is around 33% annualized, as we shall see later.\n\n[Using equal weights, we have created two synthetic series called `d4oil` and `m4oil`\nfor future convenience.]",
"_____no_output_____"
]
],
[
[
"# Temporarily combine data for Brent, WTI, and weighted Oil:\nstats( paste([oils, oil]) )",
" Brent WTI Oil\ncount 7879.000000 7879.000000 7879.000000\nmean 45.011701 44.248576 44.630138\nstd 33.469778 30.000004 31.660557\nmin 9.100000 10.820000 9.960000\n25% 18.500000 19.850000 19.177500\n50% 28.560000 30.080000 29.275000\n75% 64.985000 64.870000 64.987500\nmax 143.950000 145.310000 144.630000\n\n :: Index on min:\nBrent 1998-12-10\nWTI 1998-12-10\nOil 1998-12-10\ndtype: datetime64[ns]\n\n :: Index on max:\nBrent 2008-07-03\nWTI 2008-07-03\nOil 2008-07-03\ndtype: datetime64[ns]\n\n :: Head:\n Brent WTI Oil\nT \n1987-05-20 18.63 19.75 19.190\n1987-05-21 18.45 19.95 19.200\n1987-05-22 18.55 19.68 19.115\n :: Tail:\n Brent WTI Oil\nT \n2017-07-27 50.67 49.05 49.86\n2017-07-28 52.00 49.72 50.86\n2017-07-31 51.99 50.21 51.10\n\n :: Correlation matrix:\n Brent WTI Oil\nBrent 1.000000 0.990614 0.997901\nWTI 0.990614 1.000000 0.997386\nOil 0.997901 0.997386 1.000000\n"
]
],
[
[
"The very tight correlation between Brent and WTI (> 99%)\ncan mask the potential turbulence of the BoW spread. \n\n## BoW spread as a function of Oil price",
"_____no_output_____"
]
],
[
[
"# Do the regression:\nstat2( bow['BoW'], oil['Oil'] )",
" :: FIRST variable:\ncount 7879.000000\nmean 0.763125\nstd 5.557796\nmin -22.180000\n25% -1.810000\n50% -1.160000\n75% 0.685000\nmax 29.590000\nName: BoW, dtype: float64\n\n :: SECOND variable:\ncount 7879.000000\nmean 44.630138\nstd 31.660557\nmin 9.960000\n25% 19.177500\n50% 29.275000\n75% 64.987500\nmax 144.630000\nName: Oil, dtype: float64\n\n :: CORRELATION\n0.625773327194\n OLS Regression Results \n==============================================================================\nDep. Variable: Y R-squared: 0.392\nModel: OLS Adj. R-squared: 0.392\nMethod: Least Squares F-statistic: 5070.\nDate: Wed, 09 Aug 2017 Prob (F-statistic): 0.00\nTime: 02:31:49 Log-Likelihood: -22736.\nNo. Observations: 7879 AIC: 4.548e+04\nDf Residuals: 7877 BIC: 4.549e+04\nDf Model: 1 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [95.0% Conf. Int.]\n------------------------------------------------------------------------------\nIntercept -4.1395 0.084 -49.036 0.000 -4.305 -3.974\nX 0.1099 0.002 71.203 0.000 0.107 0.113\n==============================================================================\nOmnibus: 1718.842 Durbin-Watson: 0.053\nProb(Omnibus): 0.000 Jarque-Bera (JB): 10202.633\nSkew: 0.913 Prob(JB): 0.00\nKurtosis: 8.267 Cond. No. 94.6\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n"
]
],
[
[
"Does the price of Oil influence the BoW spread?\nNot much, the correlation is about 63%, and the linear regression is not robust.\n\n*To roughly estimate BoW*: take 11% of Oil price and subtract $4.\nSo higher Oil prices imply greater Brent premium over WTI.",
"_____no_output_____"
],
[
"## Record LOW: Oil on 1998-12-10 at 9.96 USD\n\nWe use that date to define the variable `tmin`.",
"_____no_output_____"
]
],
[
[
"tmin = '1998-12-10'",
"_____no_output_____"
]
],
[
[
"**What is the geometric mean rate since tmin?**",
"_____no_output_____"
]
],
[
[
"gemrat( oil[tmin:], yearly=256 )",
"_____no_output_____"
]
],
[
[
"May 2015: The geometric mean rate since tmin is around 11%\nwith volatility of 33% -- both very high relative to traditional assets.\n\nAug 2017: The geometric mean rate since tmin is down to around 2%\nwith volatility still of 33%. The kurtosis of 7.5 is very high\nsince it would be 3 for a Gaussian distribution:\nhence oil returns are leptokurtotic, i.e. have \"fat tails.\"",
"_____no_output_____"
],
[
"## Oil trend since tmin on 1998-12-10\n\nGiven the high price volatility,\nwhat seems to be underlying trend?",
"_____no_output_____"
]
],
[
[
"plot( trend(todf(oil[tmin:])) )",
" :: regresstime slope = 0.0131256573363\n"
]
],
[
[
"Oil prices can easily go +/- 40% off their statistical trend (about 1.2 std).\nSo the trend is very deceptive.\n\nAug 2017: the trend indicates \\$92 oil, but the\ncurrent market average is in fact around \\$50.",
"_____no_output_____"
],
[
"## Boltzmann portfolio of oils\n\nGiven the BoW spread, correlations, volatilities, and overall uptrend --\nwhat would be an *optimal* portfolio structure out-of-sample?\n\nWe shall compute a **Boltzmann portfolio**\n(see https://git.io/boltz1 for details),\nconsisting of Brent and WTI, which uses cross-entropy\nas its foundation for best geometric growth at minimized risk.\n\nThe short-side shall be practically unrestricted (floor=level=-25)\nsince we assume that crude oil derivatives will be used\nto implement the strategy.\n\nThe history since *tmin* will be the base,\nbut the user can modify the date interactively\n(see https://git.io/boltz2 for details on sequential decisions).",
"_____no_output_____"
]
],
[
[
"prtf = boltzportfolio( oils[tmin:], temp=55, floor=-25, level=-25, n=4 )\nprtf",
"_____no_output_____"
]
],
[
[
"Brent has a higher geometric mean rate (0.83% vs. -1.82 for WTI),\nhence ***0.98 of the portfolio's notional principal is dedicated to Brent***,\nand 0.02 towards WTI.\nThe expected geometric mean rate of this particular\nBoltzmann portfolio is a mere 0.78%,\nhardly better than a Treasury note.\n\nIn a Boltzmann portfolio, the second and fourth centralized moments\nare taken into account to properly access risk,\nand to adjust the geometric mean rates.\n\nThis component analysis is more accurate out-of-sample than a\ntreatment of a single time-series such as the weighted average.",
"_____no_output_____"
],
[
"## Deflated oil prices\n\nWe use our monthly deflator consisting of CPI and PCE,\nboth core and headline versions for each,\nto compute ***real*** oil prices.",
"_____no_output_____"
]
],
[
[
"# First change the sampling frequency to match inflation data:\noilmth = todf(monthly( oil ))\ndefl = todf(get( m4defl ))\noildefl = todf( oilmth * defl )",
"_____no_output_____"
],
[
"# Deflated Oil price:\nplot( oildefl )",
"_____no_output_____"
]
],
[
[
"The supportive 1998 uptrend is broken by the post-2014 crash in prices.\nThe level support from the 1990's is around \\$20 in current US dollars.",
"_____no_output_____"
],
[
"## Oil price ex-USD\n\nHere we are interested in how oil prices appear when priced in\nforeign currencies (outside the United States).\nA basket of trade-weighted currencies against USD helps\nour foreign exchange viewpoint.",
"_____no_output_____"
]
],
[
[
"# rtb is the real trade-weighted USD index,\n# computed monthly by the Federal Reserve.\nrtb = get( m4usdrtb )\noilrtb = todf( oilmth / rtb )",
"_____no_output_____"
],
[
"# Oil priced in rtb index units:\nplot( oilrtb )",
"_____no_output_____"
]
],
[
[
"No big suprises here. Oil from world-ex-USD perspective looks very similar to the dollar-only perspective.",
"_____no_output_____"
],
[
"## Concluding remarks\n\nAlthough WTI has more desirable petrochemical properties than Brent oil,\nour analysis of their prices reveal that Brent is preferable over WTI\nin the context of a financial portfolio -- denominated in practically\nany currency.\n\nBefore taking a position in oil, carefully evaluate the premium of Brent over WTI,\ni.e. the BoW spread.\n\nThe post-2014 crash in oil prices portends a downward look at\nthe \\$20 support level in current U.S. dollars especially in view\nof the caveats listed below.\nThis would be a major concern for oil companies worldwide\n(for example, in August 2017 BP announced their break-even point\nwas \\$47 with respect to the price of oil). \n\n\n### Caveats\n\n- A complete analysis would include the impact of shale oil and *alternate energy sources*.\n\n- As electric cars become more popular, the demand for petroleum will obviously diminish.\n\n- If ISIS becomes dominate over oil fields, expect some minor supply at half the market price.\n\n- Be attentive to offshore storage of crude oil, literally on non-active tanker ships with no destinations.",
"_____no_output_____"
],
[
"---\n\n*Dedicated to recently retired Andrew Hall: the name of the game has changed.*",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
ecbcb718c050d045a448e053325906d5807cee31 | 100,868 | ipynb | Jupyter Notebook | docs/examples/jupyter/VoyagesAPI/Port Congestion.ipynb | SignalOceanSdk/SignalSDK | e21cf6026386bb46fde7582a10995cc7deff8a42 | [
"Apache-2.0"
] | 10 | 2020-09-29T06:36:45.000Z | 2022-03-14T18:15:50.000Z | docs/examples/jupyter/VoyagesAPI/Port Congestion.ipynb | SignalOceanSdk/SignalSDK | e21cf6026386bb46fde7582a10995cc7deff8a42 | [
"Apache-2.0"
] | 53 | 2020-10-08T10:05:00.000Z | 2022-03-29T14:21:18.000Z | docs/examples/jupyter/VoyagesAPI/Port Congestion.ipynb | SignalOceanSdk/SignalSDK | e21cf6026386bb46fde7582a10995cc7deff8a42 | [
"Apache-2.0"
] | 5 | 2020-09-25T07:48:04.000Z | 2021-11-23T07:08:56.000Z | 152.36858 | 81,436 | 0.880547 | [
[
[
"# Port Congestion (Live-Historical)",
"_____no_output_____"
],
[
"This example provides information about the live & historical number of vessels waiting for operation or operating in a selected Port.",
"_____no_output_____"
],
[
"## Setup\nInstall the Signal Ocean SDK:\n```\npip install signal-ocean\n```\nSet your subscription key acquired here: https://apis.signalocean.com/profile",
"_____no_output_____"
]
],
[
[
"pip install signal-ocean",
"_____no_output_____"
],
[
"signal_ocean_api_key = \"\" # replace with your subscription key",
"_____no_output_____"
]
],
[
[
"## Live",
"_____no_output_____"
],
[
"In order to access the port congestion of a specific port (in this case Rotterdam) we are going to use the voyages data api. \nMore specifically we are going to retrieve the voyages events of MR2 vessels for the last 4 months and then use the available data regarding **event_horizon**, **arrival_date** and **sailing_date** in order to find out what vessels are operating or are expecting to operate at the port of interest. \n \nThe time window of four months and also the narrowing of the vessel classes to one are used to restrict the data load of the api call and hence the running time of the notebook. For a more extended investigation the user is prompt to store locally the full voyages data of his interest and then run the analysis using them. This example displays a way to achieve this through sqlite.",
"_____no_output_____"
]
],
[
[
"from signal_ocean import Connection\nfrom signal_ocean.voyages import VoyagesAPI\n\nimport pandas as pd\nfrom datetime import date, datetime, timedelta\nimport sqlite3",
"_____no_output_____"
],
[
"connection = Connection(signal_ocean_api_key)\napi = VoyagesAPI(connection)",
"_____no_output_____"
],
[
"mr2_id = 88\ndate_from = date.today() - timedelta(days=120)\nrecent_mr2_voyages_flat = api.get_voyages_flat(\n vessel_class_id=mr2_id, date_from=date_from\n)",
"_____no_output_____"
],
[
"mr2_voyages_df = pd.DataFrame(v.__dict__ for v in recent_mr2_voyages_flat.voyages)\nmr2_events_df = pd.DataFrame(v.__dict__ for v in recent_mr2_voyages_flat.events)\nmr2_events_details_df = pd.DataFrame(\n v.__dict__ for v in recent_mr2_voyages_flat.event_details\n)",
"_____no_output_____"
],
[
"# the first time the connection is created, the .db file is created as well\nconn = sqlite3.connect(\"port_congestion.db\")",
"_____no_output_____"
],
[
"from sqlalchemy import types, create_engine\nfrom os import path\nimport os\n\n# we create an sqlalchemy engine for the db that we have created\nengine = create_engine(\n f'sqlite:///{path.join(os.path.abspath(os.getcwd()), \"port_congestion.db\")}'\n)\n\n# create tables and append the data\nmr2_voyages_df.to_sql(\"mr2_voyages\", engine, index=True, if_exists=\"replace\")\nmr2_events_df.to_sql(\"mr2_events\", engine, index=True, if_exists=\"replace\")\nmr2_events_details_df.to_sql(\n \"mr2_events_details\", engine, index=True, if_exists=\"replace\"\n)",
"_____no_output_____"
],
[
"relative_events_query = \"\"\"\n select \n imo,\n vessel_name,\n commercial_operator,\n voyage_number,\n ev.id,\n event_type,\n event_horizon,\n purpose,\n date(ev.arrival_date) as arrival_date,\n date(ev.sailing_date) as sailing_date,\n start_time_of_operation\n from mr2_voyages voy \n left join mr2_events ev on ev.voyage_id = voy.id\n left join mr2_events_details det on det.event_id = ev.id\n where\n (\n (event_horizon = 'Current' and Date(ev.sailing_date) >= Date('now'))\n or \n (event_horizon = 'Future' and Date(ev.arrival_date) <= Date('now'))\n or\n (event_horizon = 'Historical' and Date(ev.sailing_date) = Date('now'))\n )\n and not (ev.event_horizon = 'Future' and Date(ev.sailing_date) < Date('now') )\n and event_type in ('PortCall', 'Stop')\n and port_name = 'Rotterdam'\n and purpose in ('Load', 'Discharge', 'Stop')\n\"\"\"",
"_____no_output_____"
],
[
"relative_events_df = pd.read_sql_query(\n relative_events_query, \n conn, \n parse_dates = ['arrival_date','sailing_date','start_time_of_operation']\n)",
"_____no_output_____"
],
[
"def set_status(row):\n if row.event_horizon == 'Historical' and row.purpose != \"Stop\":\n return 'Sailed' + row.purpose\n elif row.purpose == \"Stop\":\n return \"Stop\"\n elif row.event_horizon == 'Future':\n if row.sailing_date.date() <= date.today() or row.arrival_date.date() < date.today():\n return \"No Position Updates\"\n elif row.sailing_date.date() > date.today():\n return 'Expected to ' + row.purpose\n elif row.event_horizon == 'Current':\n if row.start_time_of_operation > datetime.now():\n return \"Expected to\" + row.purpose\n elif row.start_time_of_operation < datetime.now():\n return \"Operating (\" + row.purpose + \")\"\n elif row.arrival_date.date() <= date.today() <= row.sailing_date.date():\n return \"Operating (\" + row.purpose + \")\"\n else:\n return \"UNK\"",
"_____no_output_____"
],
[
"live_port_congestion_df = (\n relative_events_df\n .sort_values(\n ['arrival_date','start_time_of_operation'],\n ascending = [False,False]\n )\n .groupby(\n [\"imo\",\"voyage_number\",\"event_type\"]\n )\n .first()\n .reset_index()\n)",
"_____no_output_____"
],
[
"live_port_congestion_df['Status'] = live_port_congestion_df.apply(set_status,axis=1)",
"_____no_output_____"
],
[
"live_port_congestion_df.drop(\n ['voyage_number','event_type','id','start_time_of_operation']\n ,axis=1\n ,inplace=True\n)",
"_____no_output_____"
],
[
"live_port_congestion_df",
"_____no_output_____"
]
],
[
[
"## Historical",
"_____no_output_____"
]
],
[
[
"from signal_ocean import Connection\nfrom signal_ocean.voyages import VoyagesAPI\n\nimport pandas as pd\nfrom datetime import date, timedelta\nimport sqlite3",
"_____no_output_____"
],
[
"connection = Connection(signal_ocean_api_key)\napi = VoyagesAPI(connection)",
"_____no_output_____"
]
],
[
[
"We get the MR2 voyages that have started during the last year. \nSince we need historical port congestion info there is no way to avoid making a heavy call on voyages data api.",
"_____no_output_____"
]
],
[
[
"%%time\n\nmr2_id = 88\ndate_from = date.today() - timedelta(days=365)\nrecent_mr2_voyages_flat = api.get_voyages_flat(\n vessel_class_id=mr2_id, date_from=date_from\n)",
"Wall time: 2min 16s\n"
],
[
"mr2_voyages_df = pd.DataFrame(v.__dict__ for v in recent_mr2_voyages_flat.voyages)\nmr2_events_df = pd.DataFrame(v.__dict__ for v in recent_mr2_voyages_flat.events)",
"_____no_output_____"
],
[
"rotterdam_events = mr2_events_df[\n (mr2_events_df.port_name == \"Rotterdam\")\n & (mr2_events_df.purpose.isin([\"Discharge\", \"Load\", \"Stop\"]))\n & (mr2_events_df.arrival_date.apply(lambda x: x.date()) <= date.today())\n].copy()",
"_____no_output_____"
],
[
"rotterdam_events[\"imo\"] = rotterdam_events.voyage_id.map(\n dict(zip(mr2_voyages_df.id, mr2_voyages_df.imo))\n)",
"_____no_output_____"
]
],
[
[
"**in_port** column stores the date range between arrival and sailing date. By exploding the dataframe based on this column we get one row per date inside the date range. This is used later on the aggregations.",
"_____no_output_____"
]
],
[
[
"rotterdam_events[\"in_port\"] = rotterdam_events.apply(\n lambda row: pd.date_range(row.arrival_date.date(), row.sailing_date.date()), axis=1\n)",
"_____no_output_____"
],
[
"rotterdam_events_ext = rotterdam_events.explode(\"in_port\")",
"_____no_output_____"
],
[
"num_of_vessels_time_series = (\n rotterdam_events_ext.groupby(\"in_port\")[\"imo\"].nunique().reset_index()\n)\nnum_of_vessels_time_series.columns = [\"date\", \"vessels\"]",
"_____no_output_____"
]
],
[
[
"**date_range_basis** dictates the dates to be displayed on the graph. On top of that ensures the continuity of the time series data by assigning 0 to the dates that had no vessels waiting or operating at the port of interest.",
"_____no_output_____"
]
],
[
[
"date_range_basis = pd.date_range(\n start=date(2021,1,1),\n end=date.today(),\n)",
"_____no_output_____"
],
[
"num_of_vessels_time_series = (\nnum_of_vessels_time_series\n .set_index(\"date\")\n .reindex(date_range_basis)\n .fillna(0.0)\n .rename_axis(\"date\").reset_index()\n)",
"_____no_output_____"
],
[
"from matplotlib import pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize = (15,10))\n\nsns.lineplot(data=num_of_vessels_time_series, x=\"date\", y=\"vessels\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecbcb9039d8637cd9b3ddf29ff6f596f4ee812f4 | 588,214 | ipynb | Jupyter Notebook | demo/MMPose_Tutorial.ipynb | yumendecc/mmpose | 5976f550cc8172d07d5b182fed94ee585694f99d | [
"Apache-2.0"
] | null | null | null | demo/MMPose_Tutorial.ipynb | yumendecc/mmpose | 5976f550cc8172d07d5b182fed94ee585694f99d | [
"Apache-2.0"
] | null | null | null | demo/MMPose_Tutorial.ipynb | yumendecc/mmpose | 5976f550cc8172d07d5b182fed94ee585694f99d | [
"Apache-2.0"
] | null | null | null | 187.987856 | 197,684 | 0.805878 | [
[
[
"<a href=\"https://colab.research.google.com/github/open-mmlab/mmpose/blob/main/demo/MMPose_Tutorial.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# MMPose Tutorial\n\nWelcome to MMPose colab tutorial! In this tutorial, we will show you how to\n- perform inference with an MMPose model\n- train a new mmpose model with your own datasets\n\nLet's start!",
"_____no_output_____"
],
[
"## Install MMPose\n\nWe recommend to use a conda environment to install mmpose and its dependencies. And compilers `nvcc` and `gcc` are required.",
"_____no_output_____"
]
],
[
[
"# check NVCC version\n!nvcc -V\n\n# check GCC version\n!gcc --version\n\n# check python in conda environment\n!which python",
"nvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2020 NVIDIA Corporation\nBuilt on Tue_Sep_15_19:10:02_PDT_2020\nCuda compilation tools, release 11.1, V11.1.74\nBuild cuda_11.1.TC455_06.29069683_0\ngcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0\nCopyright (C) 2019 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n/home/PJLAB/liyining/anaconda3/envs/pt1.9/bin/python\n"
],
[
"# install dependencies: (use cu111 because colab has CUDA 11.1)\n%pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html\n\n# install mmcv-full thus we could use CUDA operators\n%pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html\n\n# install mmdet for inference demo\n%pip install mmdet\n\n# clone mmpose repo\n%rm -rf mmpose\n!git clone https://github.com/open-mmlab/mmpose.git\n%cd mmpose\n\n# install mmpose dependencies\n%pip install -r requirements.txt\n\n# install mmpose in develop mode\n%pip install -e .",
"\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\nLooking in links: https://download.pytorch.org/whl/torch_stable.html\nRequirement already satisfied: torch==1.10.0+cu111 in /usr/local/lib/python3.7/dist-packages (1.10.0+cu111)\nRequirement already satisfied: torchvision==0.11.0+cu111 in /usr/local/lib/python3.7/dist-packages (0.11.0+cu111)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.10.0+cu111) (3.10.0.2)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision==0.11.0+cu111) (1.21.5)\nRequirement already satisfied: pillow!=8.3.0,>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.11.0+cu111) (7.1.2)\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\nLooking in links: https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html\nRequirement already satisfied: mmcv-full in /usr/local/lib/python3.7/dist-packages (1.4.6)\nRequirement already satisfied: addict in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (2.4.0)\nRequirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (7.1.2)\nRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (21.3)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (1.21.5)\nRequirement already satisfied: yapf in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (0.32.0)\nRequirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (4.1.2.30)\nRequirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (3.13)\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->mmcv-full) (3.0.7)\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\nRequirement already satisfied: mmdet in /usr/local/lib/python3.7/dist-packages (2.22.0)\nRequirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from mmdet) (3.2.2)\nRequirement already satisfied: terminaltables in /usr/local/lib/python3.7/dist-packages (from mmdet) (3.1.10)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from mmdet) (1.15.0)\nRequirement already satisfied: pycocotools in /usr/local/lib/python3.7/dist-packages (from mmdet) (2.0.4)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmdet) (1.21.5)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mmdet) (2.8.2)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mmdet) (0.11.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in 
/usr/local/lib/python3.7/dist-packages (from matplotlib->mmdet) (3.0.7)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mmdet) (1.3.2)\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\nCloning into 'mmpose'...\nremote: Enumerating objects: 17932, done.\u001b[K\nremote: Counting objects: 100% (2856/2856), done.\u001b[K\nremote: Compressing objects: 100% (1144/1144), done.\u001b[K\nremote: Total 17932 (delta 1864), reused 2414 (delta 1680), pack-reused 15076\u001b[K\nReceiving objects: 100% (17932/17932), 26.12 MiB | 30.22 MiB/s, done.\nResolving deltas: 100% (12459/12459), done.\n/content/mmpose/mmpose\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\nIgnoring dataclasses: markers 'python_version == \"3.6\"' don't match your environment\nCollecting poseval@ git+https://github.com/svenkreiss/poseval.git\n Cloning https://github.com/svenkreiss/poseval.git to /tmp/pip-install-0cofrb1n/poseval_1a913a96da95443db876d27e713e233a\n Running command git clone -q https://github.com/svenkreiss/poseval.git /tmp/pip-install-0cofrb1n/poseval_1a913a96da95443db876d27e713e233a\n Running command git submodule update --init --recursive -q\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from -r requirements/build.txt (line 2)) (1.21.5)\nRequirement already satisfied: torch>=1.3 in /usr/local/lib/python3.7/dist-packages (from -r requirements/build.txt (line 3)) (1.10.0+cu111)\nRequirement already satisfied: chumpy in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 1)) (0.70)\nRequirement already satisfied: json_tricks in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 3)) (3.15.5)\nRequirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 4)) (3.2.2)\nRequirement already satisfied: munkres in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 5)) (1.1.4)\nRequirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 7)) (4.1.2.30)\nRequirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 8)) (7.1.2)\nRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 9)) (1.4.1)\nRequirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 10)) (0.11.0+cu111)\nRequirement already satisfied: xtcocotools>=1.8 in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 11)) (1.11.5)\nRequirement already satisfied: coverage in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 1)) (3.7.1)\nRequirement already satisfied: flake8 in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 2)) (4.0.1)\nRequirement already satisfied: interrogate in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 3)) (1.5.0)\nRequirement already satisfied: isort==4.3.21 in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 4)) (4.3.21)\nRequirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages 
(from -r requirements/tests.txt (line 5)) (4.3.1)\nRequirement already satisfied: pytest-runner in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 6)) (6.0.0)\nRequirement already satisfied: smplx>=0.1.28 in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 7)) (0.1.28)\nRequirement already satisfied: xdoctest>=0.10.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 8)) (0.15.10)\nRequirement already satisfied: yapf in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 9)) (0.32.0)\nRequirement already satisfied: albumentations>=0.3.2 in /usr/local/lib/python3.7/dist-packages (from -r requirements/optional.txt (line 1)) (1.1.0)\nRequirement already satisfied: onnx in /usr/local/lib/python3.7/dist-packages (from -r requirements/optional.txt (line 2)) (1.11.0)\nRequirement already satisfied: onnxruntime in /usr/local/lib/python3.7/dist-packages (from -r requirements/optional.txt (line 3)) (1.10.0)\nRequirement already satisfied: pyrender in /usr/local/lib/python3.7/dist-packages (from -r requirements/optional.txt (line 5)) (0.1.45)\nRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from -r requirements/optional.txt (line 6)) (2.23.0)\nRequirement already satisfied: trimesh in /usr/local/lib/python3.7/dist-packages (from -r requirements/optional.txt (line 8)) (3.10.3)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (7.1.2)\nRequirement already satisfied: motmetrics>=1.2 in /usr/local/lib/python3.7/dist-packages (from poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (1.2.0)\nRequirement already satisfied: shapely in /usr/local/lib/python3.7/dist-packages (from poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (1.8.1.post1)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (4.63.0)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.3->-r requirements/build.txt (line 3)) (3.10.0.2)\nRequirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.7/dist-packages (from xtcocotools>=1.8->-r requirements/runtime.txt (line 11)) (57.4.0)\nRequirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.7/dist-packages (from xtcocotools>=1.8->-r requirements/runtime.txt (line 11)) (0.29.28)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->-r requirements/runtime.txt (line 4)) (3.0.7)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->-r requirements/runtime.txt (line 4)) (1.3.2)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->-r requirements/runtime.txt (line 4)) (0.11.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->-r requirements/runtime.txt (line 4)) (2.8.2)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from xdoctest>=0.10.0->-r requirements/tests.txt (line 8)) (1.15.0)\nRequirement already satisfied: scikit-image>=0.16.1 in 
/usr/local/lib/python3.7/dist-packages (from albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (0.18.3)\nRequirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (3.13)\nRequirement already satisfied: qudida>=0.0.4 in /usr/local/lib/python3.7/dist-packages (from albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (0.0.4)\nRequirement already satisfied: flake8-import-order in /usr/local/lib/python3.7/dist-packages (from motmetrics>=1.2->poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (0.18.1)\nRequirement already satisfied: pytest-benchmark in /usr/local/lib/python3.7/dist-packages (from motmetrics>=1.2->poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (3.4.1)\nRequirement already satisfied: xmltodict>=0.12.0 in /usr/local/lib/python3.7/dist-packages (from motmetrics>=1.2->poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (0.12.0)\nRequirement already satisfied: pandas>=0.23.1 in /usr/local/lib/python3.7/dist-packages (from motmetrics>=1.2->poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (1.3.5)\nRequirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.23.1->motmetrics>=1.2->poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (2018.9)\nRequirement already satisfied: scikit-learn>=0.19.1 in /usr/local/lib/python3.7/dist-packages (from qudida>=0.0.4->albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (1.0.2)\nRequirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (2.6.3)\nRequirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (2.4.1)\nRequirement already satisfied: PyWavelets>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (1.2.0)\nRequirement already satisfied: tifffile>=2019.7.26 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (2021.11.2)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.19.1->qudida>=0.0.4->albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (1.1.0)\nRequirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.19.1->qudida>=0.0.4->albumentations>=0.3.2->-r requirements/optional.txt (line 1)) (3.1.0)\nRequirement already satisfied: pycodestyle<2.9.0,>=2.8.0 in /usr/local/lib/python3.7/dist-packages (from flake8->-r requirements/tests.txt (line 2)) (2.8.0)\nRequirement already satisfied: importlib-metadata<4.3 in /usr/local/lib/python3.7/dist-packages (from flake8->-r requirements/tests.txt (line 2)) (4.2.0)\nRequirement already satisfied: mccabe<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from flake8->-r requirements/tests.txt (line 2)) (0.6.1)\nRequirement already satisfied: pyflakes<2.5.0,>=2.4.0 in /usr/local/lib/python3.7/dist-packages (from flake8->-r requirements/tests.txt (line 2)) (2.4.0)\nRequirement already satisfied: zipp>=0.5 
in /usr/local/lib/python3.7/dist-packages (from importlib-metadata<4.3->flake8->-r requirements/tests.txt (line 2)) (3.7.0)\nRequirement already satisfied: py in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (1.11.0)\nRequirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (0.8.9)\nRequirement already satisfied: colorama in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (0.4.4)\nRequirement already satisfied: attrs in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (21.4.0)\nRequirement already satisfied: toml in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (0.10.2)\nRequirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 5)) (1.4.0)\nRequirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 5)) (8.12.0)\nRequirement already satisfied: pluggy>=0.7 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 5)) (0.7.1)\nRequirement already satisfied: protobuf>=3.12.2 in /usr/local/lib/python3.7/dist-packages (from onnx->-r requirements/optional.txt (line 2)) (3.17.3)\nRequirement already satisfied: flatbuffers in /usr/local/lib/python3.7/dist-packages (from onnxruntime->-r requirements/optional.txt (line 3)) (2.0)\nRequirement already satisfied: PyOpenGL==3.1.0 in /usr/local/lib/python3.7/dist-packages (from pyrender->-r requirements/optional.txt (line 5)) (3.1.0)\nRequirement already satisfied: pyglet>=1.4.10 in /usr/local/lib/python3.7/dist-packages (from pyrender->-r requirements/optional.txt (line 5)) (1.5.0)\nRequirement already satisfied: freetype-py in /usr/local/lib/python3.7/dist-packages (from pyrender->-r requirements/optional.txt (line 5)) (2.2.0)\nRequirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet>=1.4.10->pyrender->-r requirements/optional.txt (line 5)) (0.16.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->-r requirements/optional.txt (line 6)) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->-r requirements/optional.txt (line 6)) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->-r requirements/optional.txt (line 6)) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->-r requirements/optional.txt (line 6)) (2021.10.8)\nRequirement already satisfied: py-cpuinfo in /usr/local/lib/python3.7/dist-packages (from pytest-benchmark->motmetrics>=1.2->poseval@ git+https://github.com/svenkreiss/poseval.git->-r requirements/optional.txt (line 4)) (8.0.0)\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\nObtaining file:///content/mmpose/mmpose\n\u001b[33m WARNING: Ignoring invalid distribution -orch 
(/usr/local/lib/python3.7/dist-packages)\u001b[0m\nRequirement already satisfied: chumpy in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (0.70)\nRequirement already satisfied: json_tricks in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (3.15.5)\nRequirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (3.2.2)\nRequirement already satisfied: munkres in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (1.1.4)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (1.21.5)\nRequirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (4.1.2.30)\nRequirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (7.1.2)\nRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (1.4.1)\nRequirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (0.11.0+cu111)\nRequirement already satisfied: xtcocotools>=1.8 in /usr/local/lib/python3.7/dist-packages (from mmpose==0.24.0) (1.11.5)\nRequirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.7/dist-packages (from xtcocotools>=1.8->mmpose==0.24.0) (0.29.28)\nRequirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.7/dist-packages (from xtcocotools>=1.8->mmpose==0.24.0) (57.4.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mmpose==0.24.0) (1.3.2)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mmpose==0.24.0) (3.0.7)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mmpose==0.24.0) (2.8.2)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->mmpose==0.24.0) (0.11.0)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->mmpose==0.24.0) (1.15.0)\nRequirement already satisfied: torch==1.10.0+cu111 in /usr/local/lib/python3.7/dist-packages (from torchvision->mmpose==0.24.0) (1.10.0+cu111)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.10.0+cu111->torchvision->mmpose==0.24.0) (3.10.0.2)\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\nInstalling collected packages: mmpose\n Attempting uninstall: mmpose\n\u001b[33m WARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n Found existing installation: mmpose 0.24.0\n Can't uninstall 'mmpose'. No files were found to uninstall.\n Running setup.py develop for mmpose\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\n\u001b[33mWARNING: Ignoring invalid distribution -orch (/usr/local/lib/python3.7/dist-packages)\u001b[0m\nSuccessfully installed mmpose-0.24.0\n"
],
[
"# Check Pytorch installation\nimport torch, torchvision\nprint('torch version:', torch.__version__, torch.cuda.is_available())\nprint('torchvision version:', torchvision.__version__)\n\n# Check MMPose installation\nimport mmpose\nprint('mmpose version:', mmpose.__version__)\n\n# Check mmcv installation\nfrom mmcv.ops import get_compiling_cuda_version, get_compiler_version\nprint('cuda version:', get_compiling_cuda_version())\nprint('compiler information:', get_compiler_version())",
"torch version: 1.9.0+cu111 True\ntorchvision version: 0.10.0+cu111\nmmpose version: 0.18.0\ncuda version: 11.1\ncompiler information: GCC 9.3\n"
]
],
[
[
"## Inference with an MMPose model\n\nMMPose provides high level APIs for model inference and training.",
"_____no_output_____"
]
],
[
[
"import cv2\nfrom mmpose.apis import (inference_top_down_pose_model, init_pose_model,\n vis_pose_result, process_mmdet_results)\nfrom mmdet.apis import inference_detector, init_detector\nlocal_runtime = False\n\ntry:\n from google.colab.patches import cv2_imshow # for image visualization in colab\nexcept:\n local_runtime = True\n\npose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'\npose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'\ndet_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'\ndet_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'\n\n# initialize pose model\npose_model = init_pose_model(pose_config, pose_checkpoint)\n# initialize detector\ndet_model = init_detector(det_config, det_checkpoint)\n\nimg = 'tests/data/coco/000000196141.jpg'\n\n# inference detection\nmmdet_results = inference_detector(det_model, img)\n\n# extract person (COCO_ID=1) bounding boxes from the detection results\nperson_results = process_mmdet_results(mmdet_results, cat_id=1)\n\n# inference pose\npose_results, returned_outputs = inference_top_down_pose_model(pose_model,\n img,\n person_results,\n bbox_thr=0.3,\n format='xyxy',\n dataset=pose_model.cfg.data.test.type)\n\n# show pose estimation results\nvis_result = vis_pose_result(pose_model,\n img,\n pose_results,\n dataset=pose_model.cfg.data.test.type,\n show=False)\n# reduce image size\nvis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)\n\nif local_runtime:\n from IPython.display import Image, display\n import tempfile\n import os.path as osp\n with tempfile.TemporaryDirectory() as tmpdir:\n file_name = osp.join(tmpdir, 'pose_results.png')\n cv2.imwrite(file_name, vis_result)\n display(Image(file_name))\nelse:\n cv2_imshow(vis_result)\n\n",
"Use load_from_http loader\n"
]
],
[
[
"## Train a pose estimation model on a customized dataset\n\nTo train a model on a customized dataset with MMPose, there are usually three steps:\n1. Support the dataset in MMPose\n1. Create a config\n1. Perform training and evaluation\n\n### Add a new dataset\n\nThere are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopdownCOCODataset), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.html#reorganize-dataset-to-existing-format). The second one is to add a new dataset class. In this tutorial, we give an example of the second method.\n\nWe first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from COCO train2017 dataset. The annotations are stored in a different format from the original COCO format.\n\n",
"_____no_output_____"
]
],
[
[
"# download dataset\n%mkdir data\n%cd data\n!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar\n!tar -xf coco_tiny.tar\n%cd ..",
"mkdir: cannot create directory ‘data’: File exists\n/home/PJLAB/liyining/openmmlab/mmpose/data\n--2021-09-22 22:27:21-- https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar\nResolving openmmlab.oss-cn-hangzhou.aliyuncs.com (openmmlab.oss-cn-hangzhou.aliyuncs.com)... 124.160.145.51\nConnecting to openmmlab.oss-cn-hangzhou.aliyuncs.com (openmmlab.oss-cn-hangzhou.aliyuncs.com)|124.160.145.51|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 16558080 (16M) [application/x-tar]\nSaving to: ‘coco_tiny.tar.1’\n\ncoco_tiny.tar.1 100%[===================>] 15.79M 14.7MB/s in 1.1s \n\n2021-09-22 22:27:24 (14.7 MB/s) - ‘coco_tiny.tar.1’ saved [16558080/16558080]\n\n/home/PJLAB/liyining/openmmlab/mmpose\n"
],
[
"# check the directory structure\n!apt-get -q install tree\n!tree data/coco_tiny",
"E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)\r\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?\n\u001b[01;34mdata/coco_tiny\u001b[00m\n├── \u001b[01;34mimages\u001b[00m\n│ ├── \u001b[01;35m000000012754.jpg\u001b[00m\n│ ├── \u001b[01;35m000000017741.jpg\u001b[00m\n│ ├── \u001b[01;35m000000019157.jpg\u001b[00m\n│ ├── \u001b[01;35m000000019523.jpg\u001b[00m\n│ ├── \u001b[01;35m000000019608.jpg\u001b[00m\n│ ├── \u001b[01;35m000000022816.jpg\u001b[00m\n│ ├── \u001b[01;35m000000031092.jpg\u001b[00m\n│ ├── \u001b[01;35m000000032124.jpg\u001b[00m\n│ ├── \u001b[01;35m000000037209.jpg\u001b[00m\n│ ├── \u001b[01;35m000000050713.jpg\u001b[00m\n│ ├── \u001b[01;35m000000057703.jpg\u001b[00m\n│ ├── \u001b[01;35m000000064909.jpg\u001b[00m\n│ ├── \u001b[01;35m000000076942.jpg\u001b[00m\n│ ├── \u001b[01;35m000000079754.jpg\u001b[00m\n│ ├── \u001b[01;35m000000083935.jpg\u001b[00m\n│ ├── \u001b[01;35m000000085316.jpg\u001b[00m\n│ ├── \u001b[01;35m000000101013.jpg\u001b[00m\n│ ├── \u001b[01;35m000000101172.jpg\u001b[00m\n│ ├── \u001b[01;35m000000103134.jpg\u001b[00m\n│ ├── \u001b[01;35m000000103163.jpg\u001b[00m\n│ ├── \u001b[01;35m000000105647.jpg\u001b[00m\n│ ├── \u001b[01;35m000000107960.jpg\u001b[00m\n│ ├── \u001b[01;35m000000117891.jpg\u001b[00m\n│ ├── \u001b[01;35m000000118181.jpg\u001b[00m\n│ ├── \u001b[01;35m000000120021.jpg\u001b[00m\n│ ├── \u001b[01;35m000000128119.jpg\u001b[00m\n│ ├── \u001b[01;35m000000143908.jpg\u001b[00m\n│ ├── \u001b[01;35m000000145025.jpg\u001b[00m\n│ ├── \u001b[01;35m000000147386.jpg\u001b[00m\n│ ├── \u001b[01;35m000000147979.jpg\u001b[00m\n│ ├── \u001b[01;35m000000154222.jpg\u001b[00m\n│ ├── \u001b[01;35m000000160190.jpg\u001b[00m\n│ ├── \u001b[01;35m000000161112.jpg\u001b[00m\n│ ├── \u001b[01;35m000000175737.jpg\u001b[00m\n│ ├── \u001b[01;35m000000177069.jpg\u001b[00m\n│ ├── \u001b[01;35m000000184659.jpg\u001b[00m\n│ ├── \u001b[01;35m000000209468.jpg\u001b[00m\n│ ├── \u001b[01;35m000000210060.jpg\u001b[00m\n│ ├── \u001b[01;35m000000215867.jpg\u001b[00m\n│ ├── \u001b[01;35m000000216861.jpg\u001b[00m\n│ ├── \u001b[01;35m000000227224.jpg\u001b[00m\n│ ├── \u001b[01;35m000000246265.jpg\u001b[00m\n│ ├── \u001b[01;35m000000254919.jpg\u001b[00m\n│ ├── \u001b[01;35m000000263687.jpg\u001b[00m\n│ ├── \u001b[01;35m000000264628.jpg\u001b[00m\n│ ├── \u001b[01;35m000000268927.jpg\u001b[00m\n│ ├── \u001b[01;35m000000271177.jpg\u001b[00m\n│ ├── \u001b[01;35m000000275219.jpg\u001b[00m\n│ ├── \u001b[01;35m000000277542.jpg\u001b[00m\n│ ├── \u001b[01;35m000000279140.jpg\u001b[00m\n│ ├── \u001b[01;35m000000286813.jpg\u001b[00m\n│ ├── \u001b[01;35m000000297980.jpg\u001b[00m\n│ ├── \u001b[01;35m000000301641.jpg\u001b[00m\n│ ├── \u001b[01;35m000000312341.jpg\u001b[00m\n│ ├── \u001b[01;35m000000325768.jpg\u001b[00m\n│ ├── \u001b[01;35m000000332221.jpg\u001b[00m\n│ ├── \u001b[01;35m000000345071.jpg\u001b[00m\n│ ├── \u001b[01;35m000000346965.jpg\u001b[00m\n│ ├── \u001b[01;35m000000347836.jpg\u001b[00m\n│ ├── \u001b[01;35m000000349437.jpg\u001b[00m\n│ ├── \u001b[01;35m000000360735.jpg\u001b[00m\n│ ├── \u001b[01;35m000000362343.jpg\u001b[00m\n│ ├── \u001b[01;35m000000364079.jpg\u001b[00m\n│ ├── \u001b[01;35m000000364113.jpg\u001b[00m\n│ ├── \u001b[01;35m000000386279.jpg\u001b[00m\n│ ├── \u001b[01;35m000000386968.jpg\u001b[00m\n│ ├── \u001b[01;35m000000388619.jpg\u001b[00m\n│ ├── \u001b[01;35m000000390137.jpg\u001b[00m\n│ ├── \u001b[01;35m000000390241.jpg\u001b[00m\n│ ├── \u001b[01;35m000000390298.jpg\u001b[00m\n│ ├── 
\u001b[01;35m000000390348.jpg\u001b[00m\n│ ├── \u001b[01;35m000000398606.jpg\u001b[00m\n│ ├── \u001b[01;35m000000400456.jpg\u001b[00m\n│ ├── \u001b[01;35m000000402514.jpg\u001b[00m\n│ ├── \u001b[01;35m000000403255.jpg\u001b[00m\n│ ├── \u001b[01;35m000000403432.jpg\u001b[00m\n│ ├── \u001b[01;35m000000410350.jpg\u001b[00m\n│ ├── \u001b[01;35m000000453065.jpg\u001b[00m\n│ ├── \u001b[01;35m000000457254.jpg\u001b[00m\n│ ├── \u001b[01;35m000000464153.jpg\u001b[00m\n│ ├── \u001b[01;35m000000464515.jpg\u001b[00m\n│ ├── \u001b[01;35m000000465418.jpg\u001b[00m\n│ ├── \u001b[01;35m000000480591.jpg\u001b[00m\n│ ├── \u001b[01;35m000000484279.jpg\u001b[00m\n│ ├── \u001b[01;35m000000494014.jpg\u001b[00m\n│ ├── \u001b[01;35m000000515289.jpg\u001b[00m\n│ ├── \u001b[01;35m000000516805.jpg\u001b[00m\n│ ├── \u001b[01;35m000000521994.jpg\u001b[00m\n│ ├── \u001b[01;35m000000528962.jpg\u001b[00m\n│ ├── \u001b[01;35m000000534736.jpg\u001b[00m\n│ ├── \u001b[01;35m000000535588.jpg\u001b[00m\n│ ├── \u001b[01;35m000000537548.jpg\u001b[00m\n│ ├── \u001b[01;35m000000553698.jpg\u001b[00m\n│ ├── \u001b[01;35m000000555622.jpg\u001b[00m\n│ ├── \u001b[01;35m000000566456.jpg\u001b[00m\n│ ├── \u001b[01;35m000000567171.jpg\u001b[00m\n│ └── \u001b[01;35m000000568961.jpg\u001b[00m\n├── train.json\n└── val.json\n\n1 directory, 99 files\n"
],
[
"# check the annotation format\nimport json\nimport pprint\n\nanns = json.load(open('data/coco_tiny/train.json'))\n\nprint(type(anns), len(anns))\npprint.pprint(anns[0], compact=True)\n",
"<class 'list'> 75\n{'bbox': [267.03, 104.32, 229.19, 320],\n 'image_file': '000000537548.jpg',\n 'image_size': [640, 480],\n 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,\n 177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,\n 339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}\n"
]
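,
[
"# A rough, illustrative sketch of the first approach mentioned above:\n# converting the coco_tiny annotations into COCO-style dicts so an existing\n# dataset class could be reused. The ids and the minimal 'person' category\n# are invented; a real COCO keypoint category would also carry the keypoint\n# names and skeleton, and 'image_size' is assumed to be [width, height].\nimport json\n\ndef coco_tiny_to_coco(anns):\n    images, annotations = [], []\n    for idx, ann in enumerate(anns):\n        w, h = ann['image_size']\n        images.append(\n            dict(id=idx, file_name=ann['image_file'], width=w, height=h))\n        kpts = ann['keypoints']\n        annotations.append(\n            dict(\n                id=idx,\n                image_id=idx,\n                category_id=1,\n                keypoints=kpts,\n                num_keypoints=sum(1 for v in kpts[2::3] if v > 0),\n                bbox=ann['bbox'],  # already in [x, y, w, h] order\n                area=ann['bbox'][2] * ann['bbox'][3],\n                iscrowd=0))\n    categories = [dict(id=1, name='person')]\n    return dict(images=images, annotations=annotations, categories=categories)\n\ncoco_style = coco_tiny_to_coco(json.load(open('data/coco_tiny/train.json')))\nprint(len(coco_style['images']), len(coco_style['annotations']))",
"_____no_output_____"
]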
],
[
[
"After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assume that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap#readme) for a brief introduction), the new dataset class inherits `TopDownBaseDataset`.",
"_____no_output_____"
]
],
[
[
"import json\nimport os.path as osp\nfrom collections import OrderedDict\nimport tempfile\n\nimport numpy as np\n\nfrom mmpose.core.evaluation.top_down_eval import (keypoint_nme,\n keypoint_pck_accuracy)\nfrom mmpose.datasets.builder import DATASETS\nfrom mmpose.datasets.datasets.base import Kpt2dSviewRgbImgTopDownDataset\n\n\[email protected]_module()\nclass TopDownCOCOTinyDataset(Kpt2dSviewRgbImgTopDownDataset):\n\n def __init__(self,\n ann_file,\n img_prefix,\n data_cfg,\n pipeline,\n dataset_info=None,\n test_mode=False):\n super().__init__(\n ann_file, img_prefix, data_cfg, pipeline, dataset_info, coco_style=False, test_mode=test_mode)\n\n # flip_pairs, upper_body_ids and lower_body_ids will be used\n # in some data augmentations like random flip\n self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],\n [11, 12], [13, 14], [15, 16]]\n self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)\n self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)\n\n self.ann_info['joint_weights'] = None\n self.ann_info['use_different_joint_weights'] = False\n\n self.dataset_name = 'coco_tiny'\n self.db = self._get_db()\n\n def _get_db(self):\n with open(self.ann_file) as f:\n anns = json.load(f)\n\n db = []\n for idx, ann in enumerate(anns):\n # get image path\n image_file = osp.join(self.img_prefix, ann['image_file'])\n # get bbox\n bbox = ann['bbox']\n # get keypoints\n keypoints = np.array(\n ann['keypoints'], dtype=np.float32).reshape(-1, 3)\n num_joints = keypoints.shape[0]\n joints_3d = np.zeros((num_joints, 3), dtype=np.float32)\n joints_3d[:, :2] = keypoints[:, :2]\n joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)\n joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])\n\n sample = {\n 'image_file': image_file,\n 'bbox': bbox,\n 'rotation': 0,\n 'joints_3d': joints_3d,\n 'joints_3d_visible': joints_3d_visible,\n 'bbox_score': 1,\n 'bbox_id': idx,\n }\n db.append(sample)\n\n return db\n\n def evaluate(self, results, res_folder=None, metric='PCK', **kwargs):\n \"\"\"Evaluate keypoint detection results. The pose prediction results will\n be saved in `${res_folder}/result_keypoints.json`.\n\n Note:\n batch_size: N\n num_keypoints: K\n heatmap height: H\n heatmap width: W\n\n Args:\n results (list(preds, boxes, image_path, output_heatmap))\n :preds (np.ndarray[N,K,3]): The first two dimensions are\n coordinates, score is the third dimension of the array.\n :boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]\n , scale[1],area, score]\n :image_paths (list[str]): For example, ['Test/source/0.jpg']\n :output_heatmap (np.ndarray[N, K, H, W]): model outputs.\n\n res_folder (str, optional): The folder to save the testing\n results. 
If not specified, a temp folder will be created.\n Default: None.\n metric (str | list[str]): Metric to be performed.\n Options: 'PCK', 'NME'.\n\n Returns:\n dict: Evaluation results for evaluation metric.\n \"\"\"\n metrics = metric if isinstance(metric, list) else [metric]\n allowed_metrics = ['PCK', 'NME']\n for metric in metrics:\n if metric not in allowed_metrics:\n raise KeyError(f'metric {metric} is not supported')\n\n if res_folder is not None:\n tmp_folder = None\n res_file = osp.join(res_folder, 'result_keypoints.json')\n else:\n tmp_folder = tempfile.TemporaryDirectory()\n res_file = osp.join(tmp_folder.name, 'result_keypoints.json')\n\n kpts = []\n for result in results:\n preds = result['preds']\n boxes = result['boxes']\n image_paths = result['image_paths']\n bbox_ids = result['bbox_ids']\n\n batch_size = len(image_paths)\n for i in range(batch_size):\n kpts.append({\n 'keypoints': preds[i].tolist(),\n 'center': boxes[i][0:2].tolist(),\n 'scale': boxes[i][2:4].tolist(),\n 'area': float(boxes[i][4]),\n 'score': float(boxes[i][5]),\n 'bbox_id': bbox_ids[i]\n })\n kpts = self._sort_and_unique_bboxes(kpts)\n\n self._write_keypoint_results(kpts, res_file)\n info_str = self._report_metric(res_file, metrics)\n name_value = OrderedDict(info_str)\n\n if tmp_folder is not None:\n tmp_folder.cleanup()\n\n return name_value\n\n def _report_metric(self, res_file, metrics, pck_thr=0.3):\n \"\"\"Keypoint evaluation.\n\n Args:\n res_file (str): Json file stored prediction results.\n metrics (str | list[str]): Metric to be performed.\n Options: 'PCK', 'NME'.\n pck_thr (float): PCK threshold, default: 0.3.\n\n Returns:\n dict: Evaluation results for evaluation metric.\n \"\"\"\n info_str = []\n\n with open(res_file, 'r') as fin:\n preds = json.load(fin)\n assert len(preds) == len(self.db)\n\n outputs = []\n gts = []\n masks = []\n\n for pred, item in zip(preds, self.db):\n outputs.append(np.array(pred['keypoints'])[:, :-1])\n gts.append(np.array(item['joints_3d'])[:, :-1])\n masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)\n\n outputs = np.array(outputs)\n gts = np.array(gts)\n masks = np.array(masks)\n\n normalize_factor = self._get_normalize_factor(gts)\n\n if 'PCK' in metrics:\n _, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,\n normalize_factor)\n info_str.append(('PCK', pck))\n\n if 'NME' in metrics:\n info_str.append(\n ('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))\n\n return info_str\n\n @staticmethod\n def _write_keypoint_results(keypoints, res_file):\n \"\"\"Write results into a json file.\"\"\"\n\n with open(res_file, 'w') as f:\n json.dump(keypoints, f, sort_keys=True, indent=4)\n\n @staticmethod\n def _sort_and_unique_bboxes(kpts, key='bbox_id'):\n \"\"\"sort kpts and remove the repeated ones.\"\"\"\n kpts = sorted(kpts, key=lambda x: x[key])\n num = len(kpts)\n for i in range(num - 1, 0, -1):\n if kpts[i][key] == kpts[i - 1][key]:\n del kpts[i]\n\n return kpts\n \n @staticmethod\n def _get_normalize_factor(gts):\n \"\"\"Get inter-ocular distance as the normalize factor, measured as the\n Euclidean distance between the outer corners of the eyes.\n\n Args:\n gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.\n\n Return:\n np.ndarray[N, 2]: normalized factor\n \"\"\"\n\n interocular = np.linalg.norm(\n gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)\n return np.tile(interocular, [1, 2])\n\n",
"_____no_output_____"
]
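,
[
"# A small toy illustration of the PCK metric used by `_report_metric` above.\n# Shapes follow the evaluation code: pred/gt are [N, K, 2], mask is [N, K],\n# normalize is [N, 2]; all numbers here are invented.\nimport numpy as np\nfrom mmpose.core.evaluation.top_down_eval import keypoint_pck_accuracy\n\ngt = np.zeros((1, 3, 2))        # 1 sample, 3 keypoints at the origin\npred = gt + np.array([[[0.0, 0.0], [5.0, 0.0], [50.0, 0.0]]])\nmask = np.ones((1, 3), dtype=bool)\nnormalize = np.full((1, 2), 100.0)  # e.g. an inter-ocular distance of 100 px\n\n_, avg_pck, _ = keypoint_pck_accuracy(pred, gt, mask, 0.3, normalize)\nprint(avg_pck)  # ~0.667: two of the three keypoints lie within 0.3 * 100 px",
"_____no_output_____"
]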
],
[
[
"### Create a config file\n\nIn the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice to create a config file is deriving from a existing one. In this tutorial, we load a config file that trains a HRNet on COCO dataset, and modify it to adapt to the COCOTiny dataset.",
"_____no_output_____"
]
],
[
[
"from mmcv import Config\ncfg = Config.fromfile(\n './configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'\n)\n\n# set basic configs\ncfg.data_root = 'data/coco_tiny'\ncfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'\ncfg.gpu_ids = range(1)\ncfg.seed = 0\n\n# set log interval\ncfg.log_config.interval = 1\n\n# set evaluation configs\ncfg.evaluation.interval = 10\ncfg.evaluation.metric = 'PCK'\ncfg.evaluation.save_best = 'PCK'\n\n# set learning rate policy\nlr_config = dict(\n policy='step',\n warmup='linear',\n warmup_iters=10,\n warmup_ratio=0.001,\n step=[17, 35])\ncfg.total_epochs = 40\n\n# set batch size\ncfg.data.samples_per_gpu = 16\ncfg.data.val_dataloader = dict(samples_per_gpu=16)\ncfg.data.test_dataloader = dict(samples_per_gpu=16)\n\n\n# set dataset configs\ncfg.data.train.type = 'TopDownCOCOTinyDataset'\ncfg.data.train.ann_file = f'{cfg.data_root}/train.json'\ncfg.data.train.img_prefix = f'{cfg.data_root}/images/'\n\ncfg.data.val.type = 'TopDownCOCOTinyDataset'\ncfg.data.val.ann_file = f'{cfg.data_root}/val.json'\ncfg.data.val.img_prefix = f'{cfg.data_root}/images/'\n\ncfg.data.test.type = 'TopDownCOCOTinyDataset'\ncfg.data.test.ann_file = f'{cfg.data_root}/val.json'\ncfg.data.test.img_prefix = f'{cfg.data_root}/images/'\n\nprint(cfg.pretty_text)\n",
"dataset_info = dict(\n dataset_name='coco',\n paper_info=dict(\n author=\n 'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\\'a}r, Piotr and Zitnick, C Lawrence',\n title='Microsoft coco: Common objects in context',\n container='European conference on computer vision',\n year='2014',\n homepage='http://cocodataset.org/'),\n keypoint_info=dict({\n 0:\n dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),\n 1:\n dict(\n name='left_eye',\n id=1,\n color=[51, 153, 255],\n type='upper',\n swap='right_eye'),\n 2:\n dict(\n name='right_eye',\n id=2,\n color=[51, 153, 255],\n type='upper',\n swap='left_eye'),\n 3:\n dict(\n name='left_ear',\n id=3,\n color=[51, 153, 255],\n type='upper',\n swap='right_ear'),\n 4:\n dict(\n name='right_ear',\n id=4,\n color=[51, 153, 255],\n type='upper',\n swap='left_ear'),\n 5:\n dict(\n name='left_shoulder',\n id=5,\n color=[0, 255, 0],\n type='upper',\n swap='right_shoulder'),\n 6:\n dict(\n name='right_shoulder',\n id=6,\n color=[255, 128, 0],\n type='upper',\n swap='left_shoulder'),\n 7:\n dict(\n name='left_elbow',\n id=7,\n color=[0, 255, 0],\n type='upper',\n swap='right_elbow'),\n 8:\n dict(\n name='right_elbow',\n id=8,\n color=[255, 128, 0],\n type='upper',\n swap='left_elbow'),\n 9:\n dict(\n name='left_wrist',\n id=9,\n color=[0, 255, 0],\n type='upper',\n swap='right_wrist'),\n 10:\n dict(\n name='right_wrist',\n id=10,\n color=[255, 128, 0],\n type='upper',\n swap='left_wrist'),\n 11:\n dict(\n name='left_hip',\n id=11,\n color=[0, 255, 0],\n type='lower',\n swap='right_hip'),\n 12:\n dict(\n name='right_hip',\n id=12,\n color=[255, 128, 0],\n type='lower',\n swap='left_hip'),\n 13:\n dict(\n name='left_knee',\n id=13,\n color=[0, 255, 0],\n type='lower',\n swap='right_knee'),\n 14:\n dict(\n name='right_knee',\n id=14,\n color=[255, 128, 0],\n type='lower',\n swap='left_knee'),\n 15:\n dict(\n name='left_ankle',\n id=15,\n color=[0, 255, 0],\n type='lower',\n swap='right_ankle'),\n 16:\n dict(\n name='right_ankle',\n id=16,\n color=[255, 128, 0],\n type='lower',\n swap='left_ankle')\n }),\n skeleton_info=dict({\n 0:\n dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),\n 1:\n dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),\n 2:\n dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),\n 3:\n dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),\n 4:\n dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),\n 5:\n dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),\n 6:\n dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),\n 7:\n dict(\n link=('left_shoulder', 'right_shoulder'),\n id=7,\n color=[51, 153, 255]),\n 8:\n dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),\n 9:\n dict(\n link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),\n 10:\n dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),\n 11:\n dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),\n 12:\n dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),\n 13:\n dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),\n 14:\n dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),\n 15:\n dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),\n 16:\n dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),\n 17:\n dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),\n 18:\n 
dict(\n link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])\n }),\n joint_weights=[\n 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0, 1.0, 1.2,\n 1.2, 1.5, 1.5\n ],\n sigmas=[\n 0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,\n 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089\n ])\nlog_level = 'INFO'\nload_from = None\nresume_from = None\ndist_params = dict(backend='nccl')\nworkflow = [('train', 1)]\ncheckpoint_config = dict(interval=10)\nevaluation = dict(interval=10, metric='PCK', save_best='PCK')\noptimizer = dict(type='Adam', lr=0.0005)\noptimizer_config = dict(grad_clip=None)\nlr_config = dict(\n policy='step',\n warmup='linear',\n warmup_iters=500,\n warmup_ratio=0.001,\n step=[170, 200])\ntotal_epochs = 40\nlog_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])\nchannel_cfg = dict(\n num_output_channels=17,\n dataset_joints=17,\n dataset_channel=[[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ]],\n inference_channel=[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ])\nmodel = dict(\n type='TopDown',\n pretrained=\n 'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',\n backbone=dict(\n type='HRNet',\n in_channels=3,\n extra=dict(\n stage1=dict(\n num_modules=1,\n num_branches=1,\n block='BOTTLENECK',\n num_blocks=(4, ),\n num_channels=(64, )),\n stage2=dict(\n num_modules=1,\n num_branches=2,\n block='BASIC',\n num_blocks=(4, 4),\n num_channels=(32, 64)),\n stage3=dict(\n num_modules=4,\n num_branches=3,\n block='BASIC',\n num_blocks=(4, 4, 4),\n num_channels=(32, 64, 128)),\n stage4=dict(\n num_modules=3,\n num_branches=4,\n block='BASIC',\n num_blocks=(4, 4, 4, 4),\n num_channels=(32, 64, 128, 256)))),\n keypoint_head=dict(\n type='TopdownHeatmapSimpleHead',\n in_channels=32,\n out_channels=17,\n num_deconv_layers=0,\n extra=dict(final_conv_kernel=1),\n loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),\n train_cfg=dict(),\n test_cfg=dict(\n flip_test=True,\n post_process='default',\n shift_heatmap=True,\n modulate_kernel=11))\ndata_cfg = dict(\n image_size=[192, 256],\n heatmap_size=[48, 64],\n num_output_channels=17,\n num_joints=17,\n dataset_channel=[[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ]],\n inference_channel=[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ],\n soft_nms=False,\n nms_thr=1.0,\n oks_thr=0.9,\n vis_thr=0.2,\n use_gt_bbox=False,\n det_bbox_thr=0.0,\n bbox_file=\n 'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'\n)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='TopDownRandomFlip', flip_prob=0.5),\n dict(\n type='TopDownHalfBodyTransform',\n num_joints_half_body=8,\n prob_half_body=0.3),\n dict(\n type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),\n dict(type='TopDownAffine'),\n dict(type='ToTensor'),\n dict(\n type='NormalizeTensor',\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]),\n dict(type='TopDownGenerateTarget', sigma=2),\n dict(\n type='Collect',\n keys=['img', 'target', 'target_weight'],\n meta_keys=[\n 'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',\n 'rotation', 'bbox_score', 'flip_pairs'\n ])\n]\nval_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='TopDownAffine'),\n dict(type='ToTensor'),\n dict(\n type='NormalizeTensor',\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]),\n dict(\n type='Collect',\n keys=['img'],\n meta_keys=[\n 'image_file', 'center', 'scale', 
'rotation', 'bbox_score',\n 'flip_pairs'\n ])\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='TopDownAffine'),\n dict(type='ToTensor'),\n dict(\n type='NormalizeTensor',\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]),\n dict(\n type='Collect',\n keys=['img'],\n meta_keys=[\n 'image_file', 'center', 'scale', 'rotation', 'bbox_score',\n 'flip_pairs'\n ])\n]\ndata_root = 'data/coco_tiny'\ndata = dict(\n samples_per_gpu=16,\n workers_per_gpu=2,\n val_dataloader=dict(samples_per_gpu=16),\n test_dataloader=dict(samples_per_gpu=16),\n train=dict(\n type='TopDownCOCOTinyDataset',\n ann_file='data/coco_tiny/train.json',\n img_prefix='data/coco_tiny/images/',\n data_cfg=dict(\n image_size=[192, 256],\n heatmap_size=[48, 64],\n num_output_channels=17,\n num_joints=17,\n dataset_channel=[[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ]],\n inference_channel=[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ],\n soft_nms=False,\n nms_thr=1.0,\n oks_thr=0.9,\n vis_thr=0.2,\n use_gt_bbox=False,\n det_bbox_thr=0.0,\n bbox_file=\n 'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'\n ),\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='TopDownRandomFlip', flip_prob=0.5),\n dict(\n type='TopDownHalfBodyTransform',\n num_joints_half_body=8,\n prob_half_body=0.3),\n dict(\n type='TopDownGetRandomScaleRotation',\n rot_factor=40,\n scale_factor=0.5),\n dict(type='TopDownAffine'),\n dict(type='ToTensor'),\n dict(\n type='NormalizeTensor',\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]),\n dict(type='TopDownGenerateTarget', sigma=2),\n dict(\n type='Collect',\n keys=['img', 'target', 'target_weight'],\n meta_keys=[\n 'image_file', 'joints_3d', 'joints_3d_visible', 'center',\n 'scale', 'rotation', 'bbox_score', 'flip_pairs'\n ])\n ],\n dataset_info=dict(\n dataset_name='coco',\n paper_info=dict(\n author=\n 'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\\'a}r, Piotr and Zitnick, C Lawrence',\n title='Microsoft coco: Common objects in context',\n container='European conference on computer vision',\n year='2014',\n homepage='http://cocodataset.org/'),\n keypoint_info=dict({\n 0:\n dict(\n name='nose',\n id=0,\n color=[51, 153, 255],\n type='upper',\n swap=''),\n 1:\n dict(\n name='left_eye',\n id=1,\n color=[51, 153, 255],\n type='upper',\n swap='right_eye'),\n 2:\n dict(\n name='right_eye',\n id=2,\n color=[51, 153, 255],\n type='upper',\n swap='left_eye'),\n 3:\n dict(\n name='left_ear',\n id=3,\n color=[51, 153, 255],\n type='upper',\n swap='right_ear'),\n 4:\n dict(\n name='right_ear',\n id=4,\n color=[51, 153, 255],\n type='upper',\n swap='left_ear'),\n 5:\n dict(\n name='left_shoulder',\n id=5,\n color=[0, 255, 0],\n type='upper',\n swap='right_shoulder'),\n 6:\n dict(\n name='right_shoulder',\n id=6,\n color=[255, 128, 0],\n type='upper',\n swap='left_shoulder'),\n 7:\n dict(\n name='left_elbow',\n id=7,\n color=[0, 255, 0],\n type='upper',\n swap='right_elbow'),\n 8:\n dict(\n name='right_elbow',\n id=8,\n color=[255, 128, 0],\n type='upper',\n swap='left_elbow'),\n 9:\n dict(\n name='left_wrist',\n id=9,\n color=[0, 255, 0],\n type='upper',\n swap='right_wrist'),\n 10:\n dict(\n name='right_wrist',\n id=10,\n color=[255, 128, 0],\n type='upper',\n swap='left_wrist'),\n 11:\n dict(\n name='left_hip',\n id=11,\n color=[0, 255, 0],\n type='lower',\n swap='right_hip'),\n 12:\n dict(\n name='right_hip',\n id=12,\n color=[255, 128, 
0],\n type='lower',\n swap='left_hip'),\n 13:\n dict(\n name='left_knee',\n id=13,\n color=[0, 255, 0],\n type='lower',\n swap='right_knee'),\n 14:\n dict(\n name='right_knee',\n id=14,\n color=[255, 128, 0],\n type='lower',\n swap='left_knee'),\n 15:\n dict(\n name='left_ankle',\n id=15,\n color=[0, 255, 0],\n type='lower',\n swap='right_ankle'),\n 16:\n dict(\n name='right_ankle',\n id=16,\n color=[255, 128, 0],\n type='lower',\n swap='left_ankle')\n }),\n skeleton_info=dict({\n 0:\n dict(\n link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),\n 1:\n dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),\n 2:\n dict(\n link=('right_ankle', 'right_knee'),\n id=2,\n color=[255, 128, 0]),\n 3:\n dict(\n link=('right_knee', 'right_hip'),\n id=3,\n color=[255, 128, 0]),\n 4:\n dict(\n link=('left_hip', 'right_hip'), id=4, color=[51, 153,\n 255]),\n 5:\n dict(\n link=('left_shoulder', 'left_hip'),\n id=5,\n color=[51, 153, 255]),\n 6:\n dict(\n link=('right_shoulder', 'right_hip'),\n id=6,\n color=[51, 153, 255]),\n 7:\n dict(\n link=('left_shoulder', 'right_shoulder'),\n id=7,\n color=[51, 153, 255]),\n 8:\n dict(\n link=('left_shoulder', 'left_elbow'),\n id=8,\n color=[0, 255, 0]),\n 9:\n dict(\n link=('right_shoulder', 'right_elbow'),\n id=9,\n color=[255, 128, 0]),\n 10:\n dict(\n link=('left_elbow', 'left_wrist'),\n id=10,\n color=[0, 255, 0]),\n 11:\n dict(\n link=('right_elbow', 'right_wrist'),\n id=11,\n color=[255, 128, 0]),\n 12:\n dict(\n link=('left_eye', 'right_eye'),\n id=12,\n color=[51, 153, 255]),\n 13:\n dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),\n 14:\n dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),\n 15:\n dict(\n link=('left_eye', 'left_ear'), id=15, color=[51, 153,\n 255]),\n 16:\n dict(\n link=('right_eye', 'right_ear'),\n id=16,\n color=[51, 153, 255]),\n 17:\n dict(\n link=('left_ear', 'left_shoulder'),\n id=17,\n color=[51, 153, 255]),\n 18:\n dict(\n link=('right_ear', 'right_shoulder'),\n id=18,\n color=[51, 153, 255])\n }),\n joint_weights=[\n 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,\n 1.0, 1.2, 1.2, 1.5, 1.5\n ],\n sigmas=[\n 0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,\n 0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089\n ])),\n val=dict(\n type='TopDownCOCOTinyDataset',\n ann_file='data/coco_tiny/val.json',\n img_prefix='data/coco_tiny/images/',\n data_cfg=dict(\n image_size=[192, 256],\n heatmap_size=[48, 64],\n num_output_channels=17,\n num_joints=17,\n dataset_channel=[[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ]],\n inference_channel=[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ],\n soft_nms=False,\n nms_thr=1.0,\n oks_thr=0.9,\n vis_thr=0.2,\n use_gt_bbox=False,\n det_bbox_thr=0.0,\n bbox_file=\n 'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'\n ),\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='TopDownAffine'),\n dict(type='ToTensor'),\n dict(\n type='NormalizeTensor',\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]),\n dict(\n type='Collect',\n keys=['img'],\n meta_keys=[\n 'image_file', 'center', 'scale', 'rotation', 'bbox_score',\n 'flip_pairs'\n ])\n ],\n dataset_info=dict(\n dataset_name='coco',\n paper_info=dict(\n author=\n 'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\\'a}r, Piotr and Zitnick, C Lawrence',\n title='Microsoft coco: Common objects in context',\n container='European conference on 
computer vision',\n year='2014',\n homepage='http://cocodataset.org/'),\n keypoint_info=dict({\n 0:\n dict(\n name='nose',\n id=0,\n color=[51, 153, 255],\n type='upper',\n swap=''),\n 1:\n dict(\n name='left_eye',\n id=1,\n color=[51, 153, 255],\n type='upper',\n swap='right_eye'),\n 2:\n dict(\n name='right_eye',\n id=2,\n color=[51, 153, 255],\n type='upper',\n swap='left_eye'),\n 3:\n dict(\n name='left_ear',\n id=3,\n color=[51, 153, 255],\n type='upper',\n swap='right_ear'),\n 4:\n dict(\n name='right_ear',\n id=4,\n color=[51, 153, 255],\n type='upper',\n swap='left_ear'),\n 5:\n dict(\n name='left_shoulder',\n id=5,\n color=[0, 255, 0],\n type='upper',\n swap='right_shoulder'),\n 6:\n dict(\n name='right_shoulder',\n id=6,\n color=[255, 128, 0],\n type='upper',\n swap='left_shoulder'),\n 7:\n dict(\n name='left_elbow',\n id=7,\n color=[0, 255, 0],\n type='upper',\n swap='right_elbow'),\n 8:\n dict(\n name='right_elbow',\n id=8,\n color=[255, 128, 0],\n type='upper',\n swap='left_elbow'),\n 9:\n dict(\n name='left_wrist',\n id=9,\n color=[0, 255, 0],\n type='upper',\n swap='right_wrist'),\n 10:\n dict(\n name='right_wrist',\n id=10,\n color=[255, 128, 0],\n type='upper',\n swap='left_wrist'),\n 11:\n dict(\n name='left_hip',\n id=11,\n color=[0, 255, 0],\n type='lower',\n swap='right_hip'),\n 12:\n dict(\n name='right_hip',\n id=12,\n color=[255, 128, 0],\n type='lower',\n swap='left_hip'),\n 13:\n dict(\n name='left_knee',\n id=13,\n color=[0, 255, 0],\n type='lower',\n swap='right_knee'),\n 14:\n dict(\n name='right_knee',\n id=14,\n color=[255, 128, 0],\n type='lower',\n swap='left_knee'),\n 15:\n dict(\n name='left_ankle',\n id=15,\n color=[0, 255, 0],\n type='lower',\n swap='right_ankle'),\n 16:\n dict(\n name='right_ankle',\n id=16,\n color=[255, 128, 0],\n type='lower',\n swap='left_ankle')\n }),\n skeleton_info=dict({\n 0:\n dict(\n link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),\n 1:\n dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),\n 2:\n dict(\n link=('right_ankle', 'right_knee'),\n id=2,\n color=[255, 128, 0]),\n 3:\n dict(\n link=('right_knee', 'right_hip'),\n id=3,\n color=[255, 128, 0]),\n 4:\n dict(\n link=('left_hip', 'right_hip'), id=4, color=[51, 153,\n 255]),\n 5:\n dict(\n link=('left_shoulder', 'left_hip'),\n id=5,\n color=[51, 153, 255]),\n 6:\n dict(\n link=('right_shoulder', 'right_hip'),\n id=6,\n color=[51, 153, 255]),\n 7:\n dict(\n link=('left_shoulder', 'right_shoulder'),\n id=7,\n color=[51, 153, 255]),\n 8:\n dict(\n link=('left_shoulder', 'left_elbow'),\n id=8,\n color=[0, 255, 0]),\n 9:\n dict(\n link=('right_shoulder', 'right_elbow'),\n id=9,\n color=[255, 128, 0]),\n 10:\n dict(\n link=('left_elbow', 'left_wrist'),\n id=10,\n color=[0, 255, 0]),\n 11:\n dict(\n link=('right_elbow', 'right_wrist'),\n id=11,\n color=[255, 128, 0]),\n 12:\n dict(\n link=('left_eye', 'right_eye'),\n id=12,\n color=[51, 153, 255]),\n 13:\n dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),\n 14:\n dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),\n 15:\n dict(\n link=('left_eye', 'left_ear'), id=15, color=[51, 153,\n 255]),\n 16:\n dict(\n link=('right_eye', 'right_ear'),\n id=16,\n color=[51, 153, 255]),\n 17:\n dict(\n link=('left_ear', 'left_shoulder'),\n id=17,\n color=[51, 153, 255]),\n 18:\n dict(\n link=('right_ear', 'right_shoulder'),\n id=18,\n color=[51, 153, 255])\n }),\n joint_weights=[\n 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,\n 1.0, 1.2, 1.2, 1.5, 1.5\n ],\n sigmas=[\n 0.026, 0.025, 
0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,\n 0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089\n ])),\n test=dict(\n type='TopDownCOCOTinyDataset',\n ann_file='data/coco_tiny/val.json',\n img_prefix='data/coco_tiny/images/',\n data_cfg=dict(\n image_size=[192, 256],\n heatmap_size=[48, 64],\n num_output_channels=17,\n num_joints=17,\n dataset_channel=[[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ]],\n inference_channel=[\n 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\n ],\n soft_nms=False,\n nms_thr=1.0,\n oks_thr=0.9,\n vis_thr=0.2,\n use_gt_bbox=False,\n det_bbox_thr=0.0,\n bbox_file=\n 'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'\n ),\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='TopDownAffine'),\n dict(type='ToTensor'),\n dict(\n type='NormalizeTensor',\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]),\n dict(\n type='Collect',\n keys=['img'],\n meta_keys=[\n 'image_file', 'center', 'scale', 'rotation', 'bbox_score',\n 'flip_pairs'\n ])\n ],\n dataset_info=dict(\n dataset_name='coco',\n paper_info=dict(\n author=\n 'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\\'a}r, Piotr and Zitnick, C Lawrence',\n title='Microsoft coco: Common objects in context',\n container='European conference on computer vision',\n year='2014',\n homepage='http://cocodataset.org/'),\n keypoint_info=dict({\n 0:\n dict(\n name='nose',\n id=0,\n color=[51, 153, 255],\n type='upper',\n swap=''),\n 1:\n dict(\n name='left_eye',\n id=1,\n color=[51, 153, 255],\n type='upper',\n swap='right_eye'),\n 2:\n dict(\n name='right_eye',\n id=2,\n color=[51, 153, 255],\n type='upper',\n swap='left_eye'),\n 3:\n dict(\n name='left_ear',\n id=3,\n color=[51, 153, 255],\n type='upper',\n swap='right_ear'),\n 4:\n dict(\n name='right_ear',\n id=4,\n color=[51, 153, 255],\n type='upper',\n swap='left_ear'),\n 5:\n dict(\n name='left_shoulder',\n id=5,\n color=[0, 255, 0],\n type='upper',\n swap='right_shoulder'),\n 6:\n dict(\n name='right_shoulder',\n id=6,\n color=[255, 128, 0],\n type='upper',\n swap='left_shoulder'),\n 7:\n dict(\n name='left_elbow',\n id=7,\n color=[0, 255, 0],\n type='upper',\n swap='right_elbow'),\n 8:\n dict(\n name='right_elbow',\n id=8,\n color=[255, 128, 0],\n type='upper',\n swap='left_elbow'),\n 9:\n dict(\n name='left_wrist',\n id=9,\n color=[0, 255, 0],\n type='upper',\n swap='right_wrist'),\n 10:\n dict(\n name='right_wrist',\n id=10,\n color=[255, 128, 0],\n type='upper',\n swap='left_wrist'),\n 11:\n dict(\n name='left_hip',\n id=11,\n color=[0, 255, 0],\n type='lower',\n swap='right_hip'),\n 12:\n dict(\n name='right_hip',\n id=12,\n color=[255, 128, 0],\n type='lower',\n swap='left_hip'),\n 13:\n dict(\n name='left_knee',\n id=13,\n color=[0, 255, 0],\n type='lower',\n swap='right_knee'),\n 14:\n dict(\n name='right_knee',\n id=14,\n color=[255, 128, 0],\n type='lower',\n swap='left_knee'),\n 15:\n dict(\n name='left_ankle',\n id=15,\n color=[0, 255, 0],\n type='lower',\n swap='right_ankle'),\n 16:\n dict(\n name='right_ankle',\n id=16,\n color=[255, 128, 0],\n type='lower',\n swap='left_ankle')\n }),\n skeleton_info=dict({\n 0:\n dict(\n link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),\n 1:\n dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),\n 2:\n dict(\n link=('right_ankle', 'right_knee'),\n id=2,\n color=[255, 128, 0]),\n 3:\n dict(\n link=('right_knee', 'right_hip'),\n id=3,\n color=[255, 128, 
0]),\n 4:\n dict(\n link=('left_hip', 'right_hip'), id=4, color=[51, 153,\n 255]),\n 5:\n dict(\n link=('left_shoulder', 'left_hip'),\n id=5,\n color=[51, 153, 255]),\n 6:\n dict(\n link=('right_shoulder', 'right_hip'),\n id=6,\n color=[51, 153, 255]),\n 7:\n dict(\n link=('left_shoulder', 'right_shoulder'),\n id=7,\n color=[51, 153, 255]),\n 8:\n dict(\n link=('left_shoulder', 'left_elbow'),\n id=8,\n color=[0, 255, 0]),\n 9:\n dict(\n link=('right_shoulder', 'right_elbow'),\n id=9,\n color=[255, 128, 0]),\n 10:\n dict(\n link=('left_elbow', 'left_wrist'),\n id=10,\n color=[0, 255, 0]),\n 11:\n dict(\n link=('right_elbow', 'right_wrist'),\n id=11,\n color=[255, 128, 0]),\n 12:\n dict(\n link=('left_eye', 'right_eye'),\n id=12,\n color=[51, 153, 255]),\n 13:\n dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),\n 14:\n dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),\n 15:\n dict(\n link=('left_eye', 'left_ear'), id=15, color=[51, 153,\n 255]),\n 16:\n dict(\n link=('right_eye', 'right_ear'),\n id=16,\n color=[51, 153, 255]),\n 17:\n dict(\n link=('left_ear', 'left_shoulder'),\n id=17,\n color=[51, 153, 255]),\n 18:\n dict(\n link=('right_ear', 'right_shoulder'),\n id=18,\n color=[51, 153, 255])\n }),\n joint_weights=[\n 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,\n 1.0, 1.2, 1.2, 1.5, 1.5\n ],\n sigmas=[\n 0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,\n 0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089\n ])))\nwork_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'\ngpu_ids = range(0, 1)\nseed = 0\n\n"
]
],
[
[
"### Train and Evaluation\n",
"_____no_output_____"
]
],
[
[
"from mmpose.datasets import build_dataset\nfrom mmpose.models import build_posenet\nfrom mmpose.apis import train_model\nimport mmcv\n\n# build dataset\ndatasets = [build_dataset(cfg.data.train)]\n\n# build model\nmodel = build_posenet(cfg.model)\n\n# create work_dir\nmmcv.mkdir_or_exist(cfg.work_dir)\n\n# train model\ntrain_model(\n model, datasets, cfg, distributed=False, validate=True, meta=dict())",
"Use load_from_http loader\n"
]
],
[
[
"Test the trained model. Since the model is trained on a toy dataset coco-tiny, its performance would be as good as the ones in our model zoo. Here we mainly show how to inference and visualize a local model checkpoint.",
"_____no_output_____"
]
],
[
[
"from mmpose.apis import (inference_top_down_pose_model, init_pose_model,\n vis_pose_result, process_mmdet_results)\nfrom mmdet.apis import inference_detector, init_detector\nlocal_runtime = False\n\ntry:\n from google.colab.patches import cv2_imshow # for image visualization in colab\nexcept:\n local_runtime = True\n\n\npose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'\ndet_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'\ndet_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'\n\n# initialize pose model\npose_model = init_pose_model(cfg, pose_checkpoint)\n# initialize detector\ndet_model = init_detector(det_config, det_checkpoint)\n\nimg = 'tests/data/coco/000000196141.jpg'\n\n# inference detection\nmmdet_results = inference_detector(det_model, img)\n\n# extract person (COCO_ID=1) bounding boxes from the detection results\nperson_results = process_mmdet_results(mmdet_results, cat_id=1)\n\n# inference pose\npose_results, returned_outputs = inference_top_down_pose_model(pose_model,\n img,\n person_results,\n bbox_thr=0.3,\n format='xyxy',\n dataset='TopDownCocoDataset')\n\n# show pose estimation results\nvis_result = vis_pose_result(pose_model,\n img,\n pose_results,\n kpt_score_thr=0.,\n dataset='TopDownCocoDataset',\n show=False)\n\n# reduce image size\nvis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)\n\nif local_runtime:\n from IPython.display import Image, display\n import tempfile\n import os.path as osp\n import cv2\n with tempfile.TemporaryDirectory() as tmpdir:\n file_name = osp.join(tmpdir, 'pose_results.png')\n cv2.imwrite(file_name, vis_result)\n display(Image(file_name))\nelse:\n cv2_imshow(vis_result)",
"Use load_from_local loader\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbcc7080d0dfee6e8837503d1e4699816346796 | 6,550 | ipynb | Jupyter Notebook | SVG/.ipynb_checkpoints/tfslim_network_gradient_output_wrt_input-checkpoint.ipynb | Santara/stochastic_value_gradient | 8abcc61e3681e2f251e9f30df72f97e40093ff0d | [
"MIT"
] | 28 | 2017-06-24T12:19:23.000Z | 2021-12-25T16:03:31.000Z | SVG/.ipynb_checkpoints/tfslim_network_gradient_output_wrt_input-checkpoint.ipynb | Santara/stochastic_value_gradient | 8abcc61e3681e2f251e9f30df72f97e40093ff0d | [
"MIT"
] | null | null | null | SVG/.ipynb_checkpoints/tfslim_network_gradient_output_wrt_input-checkpoint.ipynb | Santara/stochastic_value_gradient | 8abcc61e3681e2f251e9f30df72f97e40093ff0d | [
"MIT"
] | 10 | 2017-07-22T06:39:01.000Z | 2020-04-21T08:13:24.000Z | 32.914573 | 103 | 0.51542 | [
[
[
"import numpy as np\nimport tensorflow as tf\nimport tensorflow.contrib.slim as slim ",
"_____no_output_____"
],
[
"inputs1 = tf.placeholder(shape=[None,16],dtype=tf.float32)\nnet = slim.fully_connected(inputs1, 10, scope='fc1')\nnet = slim.fully_connected(net, 4, scope='out')",
"_____no_output_____"
],
[
"var = slim.get_variables()",
"_____no_output_____"
],
[
"sess = tf.Session()\ninit = tf.global_variables_initializer()\nsess.run(init)",
"_____no_output_____"
],
[
"# sess.run(var)",
"_____no_output_____"
],
[
"grad_out_inp = tf.gradients(net,inputs1)\ninp = np.random.random([10,16])",
"_____no_output_____"
],
[
"sess.run(tf.shape(grad_out_inp), feed_dict={inputs1:inp})",
"_____no_output_____"
],
[
"trainable_variables = slim.get_variables('fc1')\ntrainable_variables.extend(slim.get_variables('out'))",
"_____no_output_____"
],
[
"dummy_targets = np.random.random([10,4])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbccb3c4556f051703e69c24d9032abe6d11a71 | 311,498 | ipynb | Jupyter Notebook | CP_Combination_Demo.ipynb | ptocca/CP_Combination_Demo | fcf58db802b840456c7be83ceb73a3f549946886 | [
"MIT"
] | null | null | null | CP_Combination_Demo.ipynb | ptocca/CP_Combination_Demo | fcf58db802b840456c7be83ceb73a3f549946886 | [
"MIT"
] | null | null | null | CP_Combination_Demo.ipynb | ptocca/CP_Combination_Demo | fcf58db802b840456c7be83ceb73a3f549946886 | [
"MIT"
] | null | null | null | 22.949827 | 149 | 0.381251 | [
[
[
"# CP combination demo",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport scipy.stats as ss\nfrom sklearn.model_selection import train_test_split\nimport pandas as pd",
"_____no_output_____"
],
[
"from IPython.display import display, HTML\n\ndisplay(HTML(\"<style>.container {width: 90% !important;} </style>\"))",
"_____no_output_____"
],
[
"import panel as pn\n\npn.extension(comm='ipywidgets')",
"_____no_output_____"
],
[
"%matplotlib inline\nfrom matplotlib.figure import Figure",
"_____no_output_____"
],
[
"import param",
"_____no_output_____"
],
[
"from CP import *",
"_____no_output_____"
],
[
"def plot_score_hist(alpha_0_a, alpha_0_b, alpha_1_a, alpha_1_b):\n f = Figure(figsize=(18, 6))\n ax_a = f.add_subplot(1, 3, 1)\n ax_a.hist([alpha_0_a, alpha_1_a], bins=np.linspace(-10, 10, 51))\n ax_b = f.add_subplot(1, 3, 2, sharey=ax_a)\n ax_b.hist([alpha_0_b, alpha_1_b], bins=np.linspace(-10, 10, 51))\n\n ax_c = f.add_subplot(1, 3, 3)\n ax_c.plot(alpha_0_a, alpha_0_b, \"g.\", alpha=0.05);\n ax_c.plot(alpha_1_a, alpha_1_b, \"r.\", alpha=0.05);\n f.suptitle(\"Histograms of simulated scores (blue and orange)\\n\",\n fontsize=16);\n return f\n\n\nclass SynthDataSet(param.Parameterized):\n N = param.Integer(default=2000, bounds=(100, 10000))\n percentage_of_positives = param.Number(default=50.0, bounds=(0.1, 100.0))\n seed = param.Integer(default=0, bounds=(0, 32767))\n cc = param.Number(default=0.0, bounds=(-1.0, 1.0))\n var = param.Number(default=1.0, bounds=(0.5, 2.0))\n\n micp_calibration_fraction = param.Number(default=0.5, bounds=(0.01, 0.5))\n comb_calibration_fraction = param.Number(default=0.3, bounds=(0.01, 0.5))\n\n # Outputs\n output = param.Dict(default=dict(),\n precedence=-1) # To have all updates in one go\n\n n = 2\n\n def __init__(self, **params):\n super(SynthDataSet, self).__init__(**params)\n self.update()\n\n def update(self):\n output = dict()\n\n cov = self.cc * np.ones(shape=(self.n, self.n))\n cov[np.diag_indices(self.n)] = self.var\n\n np.random.seed(self.seed)\n\n positives_number = int(self.N * self.percentage_of_positives / 100)\n negatives_number = self.N - positives_number\n \n try:\n alpha_neg = ss.multivariate_normal(mean=[-1, -1], cov=cov).rvs(\n size=(negatives_number,))\n alpha_pos = ss.multivariate_normal(mean=[1, 1], cov=cov).rvs(\n size=(positives_number,))\n except:\n placeholder = np.array([0.0])\n output['scores_cal_a'] = placeholder\n output['scores_pcal_a'] = placeholder\n output['scores_cal_b'] = placeholder\n output['scores_pcal_b'] = placeholder\n output['y_cal'] = placeholder\n output['y_pcal'] = placeholder\n output['scores_test_a'] = placeholder\n output['scores_test_b'] = placeholder\n self.output = output\n return\n \n\n alpha_neg_a = alpha_neg[:, 0]\n alpha_neg_b = alpha_neg[:, 1]\n alpha_pos_a = alpha_pos[:, 0]\n alpha_pos_b = alpha_pos[:, 1]\n\n scores_a = np.concatenate((alpha_neg_a, alpha_pos_a))\n scores_b = np.concatenate((alpha_neg_b, alpha_pos_b))\n y = np.concatenate((np.zeros(negatives_number, dtype=np.int8),\n np.ones(positives_number, dtype=np.int8)))\n\n micp_calibration_size = int(self.micp_calibration_fraction * self.N)\n comb_calibration_size = int(self.comb_calibration_fraction * self.N)\n scores_tr_a, output['scores_test_a'], \\\n scores_tr_b, output['scores_test_b'], \\\n y_tr, output['y_test'] = train_test_split(scores_a, scores_b, y,\n train_size=micp_calibration_size + comb_calibration_size,\n stratify=y)\n\n output['scores_cal_a'], output['scores_pcal_a'], \\\n output['scores_cal_b'], output['scores_pcal_b'], \\\n output['y_cal'], output['y_pcal'] = train_test_split(scores_tr_a,\n scores_tr_b, y_tr,\n train_size=micp_calibration_size,\n stratify=y_tr)\n\n self.output = output\n\n @pn.depends(\"N\", \"percentage_of_positives\", \"seed\", \"cc\", \"var\",\n \"micp_calibration_fraction\", \"comb_calibration_fraction\")\n def view(self):\n self.update()\n f = plot_score_hist(\n self.output['scores_cal_a'][self.output['y_cal'] == 0],\n self.output['scores_cal_b'][self.output['y_cal'] == 0],\n self.output['scores_cal_a'][self.output['y_cal'] == 1],\n self.output['scores_cal_b'][self.output['y_cal'] == 1])\n\n return 
f\n\n def view2(self):\n return \"# %d\" % self.N\n\n\nsd = SynthDataSet()",
"_____no_output_____"
],
[
"def p_plane_plot(p_0, p_1, y, title_part, pics_title_part):\n def alpha_y(y):\n \"\"\"Tune the transparency\"\"\"\n a = 1 - np.sum(y) / 10000\n if a < 0.05:\n a = 0.05\n return a\n\n f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))\n ax1.plot(p_0[y == 0], p_1[y == 0], 'g.', label='Negative',\n alpha=alpha_y(y == 0));\n ax2.plot(p_0[y == 1], p_1[y == 1], 'r.', label='Positive',\n alpha=alpha_y(y == 1));\n ax1.set_title(\"Inactives $(p_0,p_1)$ for \" + title_part, fontsize=16)\n ax2.set_title(\"Actives $(p_0,p_1)$ for \" + title_part, fontsize=16)\n ax1.set_xlabel('$p_0$', fontsize=14)\n ax1.set_ylabel('$p_1$', fontsize=14)\n\n ax2.set_xlabel('$p_0$', fontsize=14)\n\n ax1.grid()\n ax2.grid()",
"_____no_output_____"
],
[
"from scipy.interpolate import UnivariateSpline, InterpolatedUnivariateSpline, \\\n interp1d\n\n\ndef ecdf(x):\n v, c = np.unique(x, return_counts='true')\n q = np.cumsum(c) / np.sum(c)\n return v, q\n\n\ndef ECDF_cal_p(p_test, p_cal):\n v, q = ecdf(p_cal)\n v = np.concatenate(([0], v))\n q = np.concatenate(([0], q))\n us = interp1d(v, q, bounds_error=False, fill_value=(0, 1))\n return us(p_test)",
"_____no_output_____"
]
],
[
[
"# Let's apply MICP as usual",
"_____no_output_____"
],
[
"Let's now redo things computing $p_0$ and $p_1$.\n\nLet's assume that the 'alphas' are actually the decision function values out of an SVM.",
"_____no_output_____"
]
],
[
[
"from CP import pValues",
"_____no_output_____"
],
[
"def pValue_hist(p_0, p_1, y, pic_title=None,\n labels_names=[\"Negative\", \"Positive\"]):\n u_l = unique_labels(y)\n if len(u_l) != 2:\n return\n\n f = Figure(figsize=(18, 6))\n ax_0 = f.add_subplot(1, 2, 1)\n\n ax_0.hist((p_0[y == u_l[0]], p_0[y == u_l[1]]),\n bins=np.linspace(0, 1, 101),\n label=labels_names);\n ax_0.set_title(\"$p_{%s}$\" % str(u_l[0]), fontsize=14)\n if not (labels_names is None):\n ax_0.legend()\n\n ax_1 = f.add_subplot(1, 2, 2)\n ax_1.hist((p_1[y == u_l[0]], p_1[y == u_l[1]]),\n bins=np.linspace(0, 1, 101),\n label=labels_names);\n ax_1.set_title(\"$p_{%s}$\" % str(u_l[1]), fontsize=14)\n if not (labels_names is None):\n ax_1.legend()\n\n if not (pic_title is None):\n f.suptitle(\"p-value histograms for %s\" % pic_title, fontsize=16)\n\n return f",
"_____no_output_____"
],
[
"\ndef ncm(scores, label):\n if label == 1:\n return -scores\n else:\n return scores\n\n\nclass MICP(param.Parameterized):\n sd = param.Parameter(precedence=-1)\n p_0_a = param.Array(precedence=-1)\n p_1_a = param.Array(precedence=-1)\n p_0_b = param.Array(precedence=-1)\n p_1_b = param.Array(precedence=-1)\n p_0_a_cal = param.Array(precedence=-1)\n p_1_a_cal = param.Array(precedence=-1)\n p_0_b_cal = param.Array(precedence=-1)\n p_1_b_cal = param.Array(precedence=-1)\n\n def __init__(self, sd, **params):\n self.sd = sd\n super(MICP, self).__init__(**params)\n self.update()\n\n def aux_update_(self, scores_cal_a, scores_cal_b, scores_pcal_a,\n scores_pcal_b, scores_test_a, scores_test_b, y_cal, y_pcal,\n y_test):\n randomize = False\n\n with param.batch_watch(self):\n self.p_0_a = pValues(\n calibrationAlphas=ncm(scores_cal_a[y_cal == 0], 0),\n testAlphas=ncm(scores_test_a, 0),\n randomized=randomize)\n self.p_1_a = pValues(\n calibrationAlphas=ncm(scores_cal_a[y_cal == 1], 1),\n testAlphas=ncm(scores_test_a, 1),\n randomized=randomize)\n\n self.p_0_b = pValues(\n calibrationAlphas=ncm(scores_cal_b[y_cal == 0], 0),\n testAlphas=ncm(scores_test_b, 0),\n randomized=randomize)\n self.p_1_b = pValues(\n calibrationAlphas=ncm(scores_cal_b[y_cal == 1], 1),\n testAlphas=ncm(scores_test_b, 1),\n randomized=randomize)\n self.p_0_a_cal = pValues(calibrationAlphas=ncm(scores_cal_a[y_cal == 0], 0),\n testAlphas=ncm(scores_pcal_a, 0),\n randomized=randomize)\n self.p_1_a_cal = pValues(calibrationAlphas=ncm(scores_cal_a[y_cal == 1], 1),\n testAlphas=ncm(scores_pcal_a, 1),\n randomized=randomize)\n\n self.p_0_b_cal = pValues(calibrationAlphas=ncm(scores_cal_b[y_cal == 0], 0),\n testAlphas=ncm(scores_pcal_b, 0),\n randomized=randomize)\n self.p_1_b_cal = pValues(calibrationAlphas=ncm(scores_cal_b[y_cal == 1], 1),\n testAlphas=ncm(scores_pcal_b, 1),\n randomized=randomize)\n\n @pn.depends(\"sd.output\", watch=True)\n def update(self):\n self.aux_update_(**self.sd.output)\n\n @pn.depends(\"p_0_a\", \"p_1_a\", \"p_0_b\", \"p_1_b\")\n def view(self):\n return pn.Column(\n pValue_hist(self.p_0_a, self.p_1_a, self.sd.output['y_test']),\n pValue_hist(self.p_0_b, self.p_1_b, self.sd.output['y_test']))\n\n @pn.depends(\"p_0_a\", \"p_1_a\", \"p_0_b\", \"p_1_b\")\n def view_tables(self):\n c_a_f = cp_cm_widget(self.p_0_a, self.p_1_a, self.sd.output['y_test'])\n c_b_f = cp_cm_widget(self.p_0_b, self.p_1_b, self.sd.output['y_test'])\n\n return pn.Row(c_a_f, c_b_f)\n\n @pn.depends(\"p_0_a\", \"p_1_a\", \"p_0_b\", \"p_1_b\")\n def view_p_plane(self):\n f = Figure(figsize=(18, 6))\n ax = f.add_subplot(1, 1, 1)\n ax.plot(self.p_0_a, self.p_0_b, \"g.\", alpha=0.05, label=\"$p_0$\")\n ax.plot(self.p_1_a, self.p_1_b, \"r.\", alpha=0.05, label=\"$p_1$\")\n ax.set_xlabel(\"p-value for set 'a'\")\n ax.set_ylabel(\"p-value for set 'b'\")\n ax.legend()\n ax.set_aspect(1.0)\n return f",
"_____no_output_____"
]
],
[
[
"Now we compute the p-values with Mondrian Inductive",
"_____no_output_____"
]
],
[
[
"micp = MICP(sd)",
"_____no_output_____"
],
[
"# micp_panel = pn.Column(pn.Row(micp.sd.param, micp.sd.view),\n# micp.view)",
"_____no_output_____"
],
[
"# micp_panel",
"_____no_output_____"
],
[
"def ECDF_comb(comb_func, ps, ps_cal):\n \"\"\"Note: ps_cal are the p-values of the calibration examples with the same label as the p-value.\n Example: ECDF_comb(minimum, ps_test_0, ps_cal_0[y_cal==0])\"\"\"\n p_comb = comb_func(ps)\n ps_cal_comb = comb_func(ps_cal)\n return ECDF_cal_p(p_comb, ps_cal_comb)",
"_____no_output_____"
],
[
"def KolmogorovAveraging(p_vals, phi, phi_inv):\n return phi_inv(np.sum(phi(p_vals), axis=1) / p_vals.shape[1])",
"_____no_output_____"
]
],
[
[
"## Arithmetic mean",
"_____no_output_____"
]
],
[
[
"def comb_arithmetic(ps, _=None):\n return np.mean(ps, axis=1)\n\ndef comb_arithmetic_ECDF(ps, ps_cal):\n return ECDF_comb(comb_arithmetic, ps, ps_cal)",
"_____no_output_____"
],
[
"# Unoptimized Irwin-Hall CDF\n# Bates is the distribution of the mean of N independent uniform RVs\n# Irwin-Hall is the distribution of the sum\nfrom scipy.special import factorial, comb\n\ndef Irwin_Hall_CDF_base(x, n):\n acc = 0\n sign = 1\n for k in np.arange(0, np.floor(x) + 1):\n acc += sign * comb(n, k) * (x - k) ** n\n sign *= -1\n return acc / factorial(n)\n\n\nIrwin_Hall_CDF = np.vectorize(Irwin_Hall_CDF_base, excluded=(1, \"n\"))",
"_____no_output_____"
],
[
"from functools import partial\n\n\ndef comb_arithmetic_q(ps, _=None):\n phi = lambda x: x.shape[1] * x\n phi_inv = partial(Irwin_Hall_CDF, n=ps.shape[1])\n return KolmogorovAveraging(ps, phi, phi_inv)",
"_____no_output_____"
]
],
[
[
"## Geometric mean",
"_____no_output_____"
]
],
[
[
"import scipy.stats as ss",
"_____no_output_____"
],
[
"def comb_geometric(ps, _=None):\n return ss.gmean(ps, axis=1)",
"_____no_output_____"
]
],
[
[
"## Fisher combination",
"_____no_output_____"
]
],
[
[
"def fisher(p, _=None):\n k = np.sum(np.log(p), axis=1).reshape(-1, 1)\n fs = -k / np.arange(1, p.shape[1]).reshape(1, -1)\n return np.sum(np.exp(\n k + np.cumsum(np.c_[np.zeros(shape=(p.shape[0])), np.log(fs)], axis=1)),\n axis=1)",
"_____no_output_____"
],
[
"\ndef comb_geometric_ECDF(ps, ps_cal):\n return ECDF_comb(comb_geometric, ps, ps_cal)",
"_____no_output_____"
]
],
[
[
"## Max p",
"_____no_output_____"
]
],
[
[
"def comb_maximum(ps, _=None):\n return np.max(ps, axis=1)",
"_____no_output_____"
],
[
"def comb_maximum_ECDF(ps, ps_cal):\n return ECDF_comb(comb_minimum, ps, ps_cal)",
"_____no_output_____"
],
[
"def comb_maximum_q(ps, _=None):\n max_ps = comb_maximum(ps)\n phi_inv = ss.beta(a=ps.shape[1], b=1).cdf\n\n return phi_inv(max_ps)",
"_____no_output_____"
]
],
[
[
"## Minimum and Bonferroni",
"_____no_output_____"
]
],
[
[
"def comb_minimum(ps, _=None):\n return np.min(ps, axis=1)",
"_____no_output_____"
],
[
"def comb_minimum_ECDF(ps, ps_cal):\n return ECDF_comb(comb_minimum, ps, ps_cal)",
"_____no_output_____"
]
],
[
[
"The k-order statistic of n uniformly distributed variates is distributed as Beta(k,n+1-k).",
"_____no_output_____"
]
],
[
[
"def comb_minimum_q(ps, _=None):\n min_ps = comb_minimum(ps)\n phi_inv = ss.beta(a=1, b=ps.shape[1]).cdf\n\n return phi_inv(min_ps)",
"_____no_output_____"
],
[
"def comb_bonferroni(ps, _=None):\n return np.clip(ps.shape[1] * np.min(ps, axis=1), 0, 1)",
"_____no_output_____"
],
[
"def comb_bonferroni_q(ps, _=None):\n b_ps= p.shape[1] * np.min(ps, axis=1)\n phi_inv = ss.beta(a=1, b=ps.shape[1]).cdf\n\n return np.where(b_ps < 1.0 / ps.shape[1],\n phi_inv(b_ps / ps.shape[1]),\n 1.0)",
"_____no_output_____"
],
[
"methodFunc = {\"Arithmetic Mean\": comb_arithmetic,\n \"Arithmetic Mean (quantile)\": comb_arithmetic_q,\n \"Arithmetic Mean (ECDF)\": comb_arithmetic_ECDF,\n \"Geometric Mean\": comb_geometric,\n \"Geometric Mean (quantile)\": fisher, # comb_geometric_q,\n \"Geometric Mean (ECDF)\": comb_geometric_ECDF,\n \"Minimum\": comb_minimum,\n \"Bonferroni\": comb_bonferroni,\n \"Minimum (quantile)\": comb_minimum_q,\n \"Minimum (ECDF)\": comb_minimum_ECDF,\n }",
"_____no_output_____"
],
[
"def cp_cm_widget(p_0, p_1, y):\n c_cm = cpConfusionMatrix_df(p_0, p_1, y).groupby('epsilon').agg('mean')\n c_cm['Actual error rate'] = c_cm[[\"Positive predicted Negative\",\"Negative predicted Positive\",\n \"Positive predicted Empty\",\"Negative predicted Empty\"]].sum(axis=1) / c_cm.sum(axis=1)\n c_cm['Avg set size'] = (c_cm[[\"Positive predicted Negative\", \"Negative predicted Positive\", \n \"Positive predicted Positive\", \"Negative predicted Negative\"]].sum(axis=1) + \\\n 2*(c_cm[[\"Positive predicted Uncertain\",\"Negative predicted Uncertain\"]].sum(axis=1))) / c_cm.sum(axis=1)\n cw = 50\n col_widths = {'epsilon': 50,\n 'Actual error rate': cw,\n 'Avg set size': cw,\n \"Positive predicted Positive\": cw,\n \"Positive predicted Negative\": cw,\n \"Negative predicted Negative\": cw,\n \"Negative predicted Positive\": cw,\n \"Positive predicted Empty\": cw,\n \"Negative predicted Empty\": cw,\n \"Positive predicted Uncertain\": cw,\n \"Negative predicted Uncertain\": cw}\n# return pn.widgets.DataFrame(c_cm, fit_columns=False, widths=col_widths,\n# disabled=True)\n return pn.Pane(c_cm.to_html(notebook=True))",
"_____no_output_____"
],
[
"class SimpleCombination(param.Parameterized):\n sd = param.Parameter(precedence=-1)\n micp = param.Parameter(precedence=-1)\n p_comb_0 = param.Array(precedence=-1)\n p_comb_1 = param.Array(precedence=-1)\n\n method = param.Selector(list(methodFunc.keys()))\n\n def __init__(self, sd, micp, **params):\n self.sd = sd\n self.micp = micp\n super(SimpleCombination, self).__init__(**params)\n self.update()\n\n @pn.depends(\"micp.p_0_a\", \"micp.p_1_a\", \"micp.p_0_b\", \"micp.p_1_b\",\n \"method\", watch=True)\n def update(self):\n comb_method = methodFunc[self.method]\n y_pcal = self.sd.output['y_pcal']\n ps_0 = np.c_[self.micp.p_0_a, self.micp.p_0_b]\n ps_pcal_0 = np.c_[self.micp.p_0_a_cal[y_pcal == 0], self.micp.p_0_b_cal[y_pcal == 0]]\n ps_1 = np.c_[self.micp.p_1_a, self.micp.p_1_b]\n ps_pcal_1 = np.c_[self.micp.p_1_a_cal[y_pcal == 1], self.micp.p_1_b_cal[y_pcal == 1]]\n\n with param.batch_watch(self):\n self.p_comb_0 = comb_method(ps_0, ps_pcal_0)\n self.p_comb_1 = comb_method(ps_1, ps_pcal_1)\n \n @pn.depends(\"p_comb_0\", \"p_comb_1\")\n def view_table(self):\n return cp_cm_widget(self.p_comb_0, self.p_comb_1,\n self.sd.output['y_test'])\n\n @pn.depends(\"p_comb_0\", \"p_comb_1\")\n def view_validity(self):\n f = Figure()\n ax = f.add_subplot(1, 1, 1)\n ax.plot(*ecdf(self.p_comb_0[self.sd.output['y_test'] == 0]))\n ax.plot(*ecdf(self.p_comb_1[self.sd.output['y_test'] == 1]))\n ax.plot((0, 1), (0, 1), \"k--\")\n ax.set_aspect(1.0)\n ax.set_xlabel(\"Target error rate\")\n ax.set_ylabel(\"Actual error rate\")\n\n return f\n\n ",
"_____no_output_____"
],
[
"sc = SimpleCombination(sd, micp)",
"_____no_output_____"
],
[
"class App(param.Parameterized):\n sd = param.Parameter()\n micp = param.Parameter()\n sc = param.Parameter()\n\n def __init__(self, sd, micp, fisher, **params):\n self.sd = sd\n self.micp = micp\n self.sc = sc\n\n @pn.depends(\"sc.p_comb_0\", \"sc.p_comb_1\", watch=True)\n def view(self):\n return pn.Column(pn.Row(sd.param, sd.view),\n pn.Row(micp.view_tables, micp.view_p_plane),\n pn.Row(sc.param, sc.view_table, sc.view_validity))",
"_____no_output_____"
],
[
"if 0:\n app = App(sd, micp, sc)\n app.view()",
"_____no_output_____"
],
[
"# ss.pearsonr(p_0_a,p_0_b),ss.pearsonr(p_1_a,p_1_b)",
"_____no_output_____"
],
[
"class MultiCombination(param.Parameterized):\n sd = param.Parameter(precedence=-1)\n micp = param.Parameter(precedence=-1)\n p_comb_0 = param.Array(precedence=-1)\n p_comb_1 = param.Array(precedence=-1)\n\n methods_names = [\"Base A\", \"Base B\",] + list(methodFunc.keys())\n methods = param.ListSelector(default=[methods_names[0]],objects=methods_names)\n\n def __init__(self, sd, micp, **params):\n self.sd = sd\n self.micp = micp\n super().__init__(**params)\n self.update()\n\n @pn.depends(\"micp.p_0_a\", \"micp.p_1_a\", \"micp.p_0_b\", \"micp.p_1_b\",\n \"methods\", watch=True)\n def update(self):\n k = len(self.methods)\n p_comb_0 = np.zeros(shape=(k, micp.p_0_a.shape[0]))\n p_comb_1 = np.zeros(shape=(k, micp.p_0_a.shape[0]))\n ps_0 = np.c_[self.micp.p_0_a, self.micp.p_0_b]\n ps_1 = np.c_[self.micp.p_1_a, self.micp.p_1_b]\n y_pcal = self.sd.output['y_pcal']\n for i,m in enumerate(self.methods):\n if m==\"Base A\":\n p_comb_0[i] = self.micp.p_0_a\n p_comb_1[i] = self.micp.p_1_a\n continue\n elif m==\"Base B\":\n p_comb_0[i] = self.micp.p_0_b\n p_comb_1[i] = self.micp.p_1_b\n continue\n # If not a base CP, do the combination\n try:\n comb_method = methodFunc[m]\n except TypeError:\n comb_method = methodFunc[m[0]]\n ps_pcal_0 = np.c_[self.micp.p_0_a_cal[y_pcal == 0], self.micp.p_0_b_cal[y_pcal == 0]]\n ps_pcal_1 = np.c_[self.micp.p_1_a_cal[y_pcal == 1], self.micp.p_1_b_cal[y_pcal == 1]]\n p_comb_0[i] = comb_method(ps_0, ps_pcal_0)\n p_comb_1[i] = comb_method(ps_1, ps_pcal_1)\n with param.batch_watch(self):\n self.p_comb_0 = p_comb_0\n self.p_comb_1 = p_comb_1\n \n @pn.depends(\"p_comb_0\", \"p_comb_1\")\n def view_table(self):\n return cp_cm_widget(self.p_comb_0, self.p_comb_1,\n self.sd.output['y_test'])\n\n @pn.depends(\"p_comb_0\", \"p_comb_1\")\n def view_validity(self):\n f = Figure(figsize=(12,12))\n ax = f.add_subplot(2, 2, 1)\n for i,m in enumerate(self.methods):\n ax.plot(*ecdf(self.p_comb_0[i][self.sd.output['y_test'] == 0]), label=m)\n ax.set_aspect(1.0)\n ax.set_xlabel(\"Target error rate\")\n ax.set_ylabel(\"Actual error rate\")\n ax.legend()\n ax.plot((0, 1), (0, 1), \"k--\")\n\n ax = f.add_subplot(2, 2, 2)\n for i,m in enumerate(self.methods):\n ax.plot(*ecdf(self.p_comb_1[i][self.sd.output['y_test'] == 1]), label=m)\n ax.set_aspect(1.0)\n ax.set_xlabel(\"Target error rate\")\n ax.set_ylabel(\"Actual error rate\")\n ax.legend()\n ax.plot((0, 1), (0, 1), \"k--\")\n \n ax = f.add_subplot(2, 2, 3)\n for i,m in enumerate(self.methods):\n ps = np.r_[self.p_comb_0[i],self.p_comb_1[i]]\n x,c = ecdf(ps)\n \n ax.plot(x,2*(1-c), label=m)\n ax.set_xlabel(\"Target error rate\")\n ax.set_xlim(0,1)\n ax.set_ylim(0,2)\n ax.set_ylabel(\"Average set size\")\n ax.legend()\n ax.plot((0, 1), (1, 0), \"k--\")\n ax.grid()\n\n \n return f\n \nmc = MultiCombination(sd, micp)",
"_____no_output_____"
],
[
"class AppMulti(param.Parameterized):\n sd = param.Parameter()\n micp = param.Parameter()\n mc = param.Parameter()\n\n def __init__(self, sd, micp, mc, **params):\n self.sd = sd\n self.micp = micp\n self.mc = mc\n\n def view(self):\n \n custom_mc_widgets = pn.Param(self.mc.param, widgets={\"methods\": pn.widgets.CheckBoxGroup})\n return pn.Column(pn.Row(self.sd.param, self.sd.view),\n pn.Row(self.micp.view_tables, self.micp.view_p_plane),\n pn.Row(custom_mc_widgets, self.mc.view_validity))\n\n",
"_____no_output_____"
],
[
"am = AppMulti(sd,micp,mc)",
"_____no_output_____"
],
[
"srv = am.view()",
"_____no_output_____"
],
[
"srv.show()",
"_____no_output_____"
],
[
"srv.stop()",
"_____no_output_____"
]
],
[
[
"# Neyman-Pearson",
"_____no_output_____"
]
],
[
[
"def BetaKDE(X, b): # Unfortunately this is too slow in this implementation\n def kde(x):\n return sum(\n ss.beta(x / b + 1, (1 - x) / b + 1).pdf(x_i) for x_i in X) / len(X)\n\n return kde",
"_____no_output_____"
]
],
[
[
"## Density estimation via histogram",
"_____no_output_____"
]
],
[
[
"def NeymanPearson(p_a, p_b, h0, test_p_a, test_p_b, pics_title_part):\n n_bins = 1000\n min_h1_lh = 0.0001\n\n f = plt.figure(figsize=(22, 5))\n\n ax = f.add_subplot(2, 4, 1)\n h, bins, _ = ax.hist([p_a[h0], p_a[~h0]],\n bins=np.linspace(0, 1, n_bins + 1))\n ax.set_title(\"Histogram of p-values (a)\")\n\n safe_h1 = np.where(h[1] == 0, min_h1_lh, h[1])\n lmbd_a = h[0] / safe_h1\n ax = f.add_subplot(2, 4, 2)\n ax.plot(bins[:-1], lmbd_a)\n ax.set_title(\"Lambda (a)\")\n\n lmbd_a_interp = UnivariateSpline(\n np.concatenate(([0], 0.5 * (bins[1] - bins[0]) + bins[:-1])),\n np.concatenate(([0], lmbd_a)), k=1, s=0, ext=3)\n ax = f.add_subplot(2, 4, 3)\n ax.plot(np.linspace(0, 0.5, 101), lmbd_a_interp(np.linspace(0, 0.5, 101)))\n ax.set_title(\"Lambda (a) for p-values in [0,0.5]\")\n\n ax = f.add_subplot(2, 4, 5)\n h, bins, _ = ax.hist([p_b[h0], p_b[~h0]],\n bins=np.linspace(0, 1, n_bins + 1))\n ax.set_title(\"Histogram of p-values (b)\")\n\n safe_h1 = np.where(h[1] == 0, min_h1_lh, h[1])\n lmbd_b = h[0] / safe_h1\n\n ax = f.add_subplot(2, 4, 6)\n ax.plot(bins[:-1], lmbd_b)\n ax.set_title(\"Lambda (b)\")\n\n lmbd_b_interp = UnivariateSpline(\n np.concatenate(([0], 0.5 * (bins[1] - bins[0]) + bins[:-1])),\n # Let's add the origin and let's assume the middle of the bin\n np.concatenate(([0], lmbd_b)), k=1, s=0, ext=3)\n ax = f.add_subplot(2, 4, 7)\n ax.plot(np.linspace(0, 0.5, 101), lmbd_b_interp(np.linspace(0, 0.5, 101)));\n ax.set_title(\"Lambda (b) for p-values in [0,0.5]\")\n\n lmbd_comb = lmbd_a_interp(p_a) * lmbd_b_interp(p_b)\n\n v, q = ecdf(lmbd_comb[h0])\n\n NP_calibr = UnivariateSpline(v, q, k=1, s=0, ext=3)\n\n lmbd_comb_test = lmbd_a_interp(test_p_a) * lmbd_b_interp(test_p_b)\n\n p_npcomb = NP_calibr(lmbd_comb_test)\n\n ax = f.add_subplot(1, 4, 4)\n xx, yy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))\n zz = NP_calibr(\n lmbd_a_interp(xx.ravel()) * lmbd_b_interp(yy.ravel())).reshape(xx.shape)\n\n ax.contourf(xx, yy, zz);\n ax.set_title(\"Combination of p-values\")\n ax.set_xlabel(\"p (a)\")\n ax.set_ylabel(\"p (b)\")\n\n f.tight_layout()\n\n f.savefig(pics_base_name + pics_title_part + \"_npcomb.png\", dpi=300)\n\n return p_npcomb",
"_____no_output_____"
],
[
"p_0_npcomb = NeymanPearson(p_0_a_cal, p_0_b_cal, y_cal == 0, p_0_a, p_0_b,\n pics_title_part=\"_0\")",
"_____no_output_____"
]
],
[
[
"## Density estimation via histogram smoothed with a spline",
"_____no_output_____"
]
],
[
[
"def splineEst(data, n_knots=20, s=0.3):\n k = np.linspace(0, 1, n_knots + 1)\n\n # UnivariateSpline() below requires that the x be strictly increasing\n # quantiles might be the same...\n\n h, bins = np.histogram(data, bins=n_knots, density=True)\n\n ss = UnivariateSpline(0.5 * (bins[:-1] + bins[1:]), h, k=3, s=s, ext=3)\n return ss",
"_____no_output_____"
],
[
"def NeymanPearsonDE(p_a, p_b, h0, p_a_test, p_b_test, pics_title_part,\n densityEstimator=splineEst):\n f = plt.figure(figsize=(22, 5))\n\n p_a_h0 = p_a[h0]\n\n kde = densityEstimator(p_a_h0)\n l_h0 = kde(p_a)\n\n p_a_h1 = p_a[~h0]\n\n kde = densityEstimator(p_a_h1)\n l_h1 = kde(p_a)\n\n lmbd_a = l_h0 / l_h1\n\n # lmbd_a = np.clip(lmbd_a,1e-10,1e+10)\n ax = f.add_subplot(2, 3, 1)\n\n ax.plot(p_a, l_h0, \"r.\", label=\"Null\")\n ax.plot(p_a, l_h1, \"b.\", label=\"Alternate\")\n ax.set_title('Likelihoods (a)')\n\n ax = f.add_subplot(2, 3, 2)\n ax.plot(p_a, lmbd_a, \"g.\")\n ax.set_title('Lambda (a)')\n\n p_a_u, i_u = np.unique(p_a, return_index=True)\n lmbd_a_int = UnivariateSpline(p_a_u, lmbd_a[i_u], k=1, s=0, ext=3)\n\n ########################################################################################\n\n # Now compute lambda for p_b\n\n p_b_h0 = p_b[h0]\n\n kde = densityEstimator(p_b_h0)\n l_h0 = kde(p_b)\n\n p_b_h1 = p_b[~h0]\n\n kde = densityEstimator(p_b_h1)\n l_h1 = kde(p_b)\n\n lmbd_b = l_h0 / l_h1\n\n # lmbd_a = np.clip(lmbd_a,1e-10,1e+10)\n ax = f.add_subplot(2, 3, 4)\n\n ax.plot(p_b, l_h0, \"r.\", label=\"Null\")\n ax.plot(p_b, l_h1, \"b.\", label=\"Alternate\")\n ax.set_xlabel(\"p value\")\n ax.set_title('Likelihoods (b)')\n\n ax = f.add_subplot(2, 3, 5)\n\n ax.plot(p_b, lmbd_b, \"g.\")\n ax.set_title('Lambda (b)')\n ax.set_xlabel(\"p value\")\n\n p_b_u, i_u = np.unique(p_b, return_index=True)\n lmbd_b_int = UnivariateSpline(p_b_u, lmbd_b[i_u], k=1, s=0, ext=3)\n\n ######################################################################\n # Combine the lambdas assuming independence\n # lmbd_comb = lmbd_a_interp(p_a)*lmbd_b_interp(p_b)\n lmbd_comb = lmbd_a * lmbd_b\n\n # lmbd_comb_interp = UnivariateSpline(eval_points,lmbd_comb,k=1,s=0,ext=3)\n\n v, q = ecdf(lmbd_comb[h0])\n\n NP_calibr = UnivariateSpline(v, q, k=1, s=0, ext=3)\n\n # This can take a while\n p_npcomb = NP_calibr(lmbd_a_int(p_a_test) * lmbd_b_int(p_b_test))\n\n ax = f.add_subplot(1, 3, 3)\n xx, yy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))\n zz = NP_calibr(lmbd_a_int(xx.ravel()) * lmbd_b_int(yy.ravel())).reshape(\n xx.shape)\n\n ax.contourf(xx, yy, zz);\n ax.set_title(\"Combination of p-values\")\n ax.set_xlabel(\"p (a)\")\n ax.set_ylabel(\"p (b)\")\n ax.set_aspect(1)\n\n f.tight_layout()\n\n f.savefig(pics_base_name + pics_title_part + \"_npde.png\", dpi=300)\n\n return p_npcomb",
"_____no_output_____"
],
[
"p_0_npde = NeymanPearsonDE(p_0_a_cal, p_0_b_cal, y_cal == 0, p_0_a, p_0_b,\n pics_title_part=\"_0\")",
"_____no_output_____"
],
[
"def plot_diag_pVals(p_vals, descs, h0, pics_title_part):\n n_bins = 200\n bins = np.linspace(0, 1, n_bins + 1)\n f, axs = plt.subplots(len(p_vals), 1, figsize=(15, 5 * len(p_vals)))\n\n for ax, p, d in zip(axs, p_vals, descs):\n ax.hist([p[h0], p[~h0]], bins=bins, density=True)\n ax.set_xlabel(\"$p_{%s}$\" % d, fontsize=14)\n\n f.suptitle(\"Histograms of p\" + pics_title_part + \" values\", fontsize=18,\n y=1.02)\n f.tight_layout()\n\n f.savefig(pics_title_part + \"_hists.png\", dpi=150);",
"_____no_output_____"
],
[
"p_1_npcomb = NeymanPearson(p_1_a_cal, p_1_b_cal, y_cal == 1, p_1_a, p_1_b,\n pics_title_part='_1')",
"_____no_output_____"
],
[
"p_1_npde = NeymanPearsonDE(p_1_a_cal, p_1_b_cal, y_cal == 1, p_1_a, p_1_b,\n pics_title_part='_1')",
"_____no_output_____"
],
[
"c_cf_npde, precision_npde = cp_statistics(p_0_npde, p_1_npde, None, None,\n y_test, \"_npde\",\n \" Neyman-Pearson (Spline) Combination\");",
"_____no_output_____"
],
[
"c_cf_npcomb, precision_npcomb = cp_statistics(p_0_npcomb, p_1_npcomb, None,\n None, y_test, \"_npcomb\",\n \" Neyman-Pearson (Hist) Combination\");",
"_____no_output_____"
]
],
[
[
"# V-Matrix Density Ratio Approach",
"_____no_output_____"
]
],
[
[
"from sklearn.externals.joblib import Memory\n\nmem = Memory(location='.', verbose=0)",
"_____no_output_____"
],
[
"mem.clear()",
"_____no_output_____"
],
[
"import sklearn\n\ncached_rbf_kernel = mem.cache(sklearn.metrics.pairwise.rbf_kernel)\n\n\nclass rbf_krnl(object):\n def __init__(self, gamma):\n self.gamma = gamma\n\n def __call__(self, X, Y=None):\n return cached_rbf_kernel(X, Y, gamma=self.gamma)\n\n def __repr__(self):\n return \"RBF Gaussian gamma: \" + str(self.gamma)\n\n\ncached_polynomial_kernel = mem.cache(sklearn.metrics.pairwise.polynomial_kernel)\n\n\nclass poly_krnl(object):\n def __init__(self, gamma):\n self.gamma = gamma\n\n def __call__(self, X, Y=None):\n return cached_polynomial_kernel(X, Y, gamma=self.gamma, coef0=1)\n\n def __repr__(self):\n return \"Polynomial deg 3 kernel gamma: \" + str(self.gamma)\n\n\nclass poly_krnl_2(object):\n def __init__(self, gamma):\n self.gamma = gamma\n\n def __call__(self, X, Y=None):\n return cached_polynomial_kernel(X, Y, degree=2, gamma=self.gamma,\n coef0=1e-6) # I use a homogeneous polynomial kernel\n\n def __repr__(self):\n return \"Polynomial deg 2 kernel gamma: \" + str(self.gamma)\n\n\nclass poly_krnl_inv(object):\n def __init__(self, gamma):\n self.gamma = gamma\n\n def __call__(self, X, Y=None):\n return cached_polynomial_kernel(X, Y, degree=-1, gamma=self.gamma,\n coef0=0.5)\n\n def __repr__(self):\n return \"Polynomial deg -1 kernel gamma: \" + str(self.gamma)",
"_____no_output_____"
],
[
"def INK_Spline_Linear(x, y, gamma):\n x = np.atleast_2d(x) * gamma\n y = np.atleast_2d(y) * gamma\n min_v = np.min(np.stack((x, y)), axis=0)\n\n k_p = 1 + x * y + 0.5 * np.abs(\n x - y) * min_v * min_v + min_v * min_v * min_v / 3\n\n return np.prod(k_p, axis=1)\n\n\ndef INK_Spline_Linear_Normed(x, y, gamma):\n \"\"\"Computes the Linear INK-Spline Kernel\n x,y: 2-d arrays, n samples by p features\n Returns: \"\"\"\n return INK_Spline_Linear(x, y, gamma) / np.sqrt(\n INK_Spline_Linear(x, x, gamma) * INK_Spline_Linear(y, y, gamma))\n\n\nfrom sklearn.metrics import pairwise_kernels\n\n\nclass ink_lin_krnl(object):\n \"\"\"\n Linear INK-Spline Kernel\n Assumes that the domain is [0,+inf]\"\"\"\n\n def __init__(self, gamma):\n self.gamma = gamma\n\n def __call__(self, X, Y=None):\n if Y is None:\n Y = X\n idxs = np.mgrid[slice(0, X.shape[0]), slice(0, Y.shape[0])]\n res = INK_Spline_Linear_Normed(X[idxs[0].ravel()],\n Y[idxs[1].ravel()], self.gamma).reshape(\n X.shape[0], Y.shape[0])\n return res\n\n def __repr__(self):\n return \"Linear INK-spline kernel (on [0,1])\"",
"_____no_output_____"
],
[
"def v_mat_star_eye(X, X_prime, dummy):\n return np.eye(X.shape[0])",
"_____no_output_____"
],
[
"from numba import jit, prange, njit",
"_____no_output_____"
]
],
[
[
"In \"V-Matrix Method of Solving Statistical Inference Problems\" (Vapnik and Izmailov), the V matrix is expressed as:\n\n$$\nV_{i,j} = \\prod_{k=1}^d \\int \\theta(x^{(k)}-X_i^{(k)})\\,\\theta(x^{(k)}-X_j^{(k)}) \\sigma_k(x^{(k)}) d\\mu(x^{(k)})\n$$\n\n\nIf $\\sigma(x^{(k)}) = 1$ and $d\\mu(x^{(k)}) = \\prod_{k=1}^d dF_\\ell(x^{(k)})$ \n$$\nV_{i,j} = \\prod_{k=1}^d \\nu\\left(X^{(k)} > \\max\\left\\lbrace X_i^{(k)},X_j^{(k)}\\right\\rbrace\\right)\n$$\n\nHowever, the following is recommended for density ratio estimation\n\n$$\n\\sigma(x_k) = \\frac{1}{F_{num}(x_k)(1-F_{num}(x_k))+\\epsilon}\n$$\n\nIt's not clear to me why we'd be looking only at the ECDF of the numerator. Why not all the data?\n\nIn any case, how do we calculate the $V_{i,j}$?",
"_____no_output_____"
],
[
"I would say that the integral can be approximated with a sum:\n\n$$\n\\frac{1}{\\ell}\\sum_{x_k > \\left\\lbrace X_i^{(k)},X_j^{(k)}\\right\\rbrace} \\sigma(x_k)\n$$\n\nwhere the $x_k$ are taken from all the data (??)",
"_____no_output_____"
]
],
[
[
"@jit('float64[:,:](float64[:,:],float64[:,:],float64[:,:])')\ndef v_mat_sigma_eye(X, X_prime, data):\n data_sorted = np.sort(data, axis=0)\n data_l = data.shape[0]\n\n v = np.zeros(shape=(X.shape[0], X_prime.shape[0]))\n for i in prange(X.shape[0]):\n for j in range(X_prime.shape[0]):\n acc = 1\n for k in range(X.shape[1]):\n # Let's compute the frequency of data with values larger than those for X_i and X^'_j\n f = (data_l - np.searchsorted(data_sorted[:, k],\n max(X[i, k], X_prime[j, k]),\n side=\"right\")) / data_l\n acc *= f\n v[i, j] = acc\n return v",
"_____no_output_____"
],
[
"@jit('float64[:,:](float64[:,:],float64[:,:],float64[:,:])', nopython=True,\n parallel=True, nogil=True)\ndef v_mat_max(X, X_prime, dummy):\n v = np.zeros(shape=(X.shape[0], X_prime.shape[0]))\n for i in prange(X.shape[0]):\n for j in range(X_prime.shape[0]):\n acc = 1\n for k in range(X.shape[1]):\n acc *= 1 - max(X[i, k], X_prime[j, k])\n v[i, j] = acc\n return v",
"_____no_output_____"
],
[
"# This takes forever... \n\n\n# @mem.cache\n@jit('float64[:,:](float64[:,:],float64[:,:],float64[:,:])', parallel=True,\n nogil=True)\ndef v_mat_sigma_ratio(X, X_prime, data):\n data_sorted = np.sort(data, 0)\n data_l = data.shape[0]\n eps = 1 / (data_l * data_l) # Just an idea...\n v = np.zeros(shape=(X.shape[0], X_prime.shape[0]))\n\n for i in prange(X.shape[0]):\n for j in range(X_prime.shape[0]):\n accu = 1\n for k in range(X.shape[1]):\n dd = data_sorted[:, k]\n s = 0\n for l in data[:, k]:\n if l > X[i, k] and l > X_prime[j, k]:\n f = (np.searchsorted(dd, l, side=\"right\")) / data_l\n s += 1 / (f * (1 - f) + eps)\n accu *= (s / data_l)\n v[i, j] = accu\n return v",
"_____no_output_____"
]
],
[
[
"### Experimental V-matrices",
"_____no_output_____"
]
],
[
[
"from statsmodels.distributions.empirical_distribution import ECDF",
"_____no_output_____"
],
[
"@jit('float64[:,:](float64[:,:],float64[:,:],float64[:,:])', parallel=True,\n nogil=True)\ndef v_mat_star_sigma_ratio_approx(X, X_prime, data):\n data_sorted = np.sort(data, 0)\n data_l = data.shape[0]\n eps = 1 / (data_l) # Just an idea...\n v = np.zeros(shape=(X.shape[0], X_prime.shape[0]))\n\n for i in prange(X.shape[0]):\n for j in range(X_prime.shape[0]):\n accu = 1\n for k in range(X.shape[1]):\n dd = data_sorted[:, k]\n f = (data_l - np.searchsorted(dd, np.maximum(X[i, k],\n X_prime[j, k]),\n side=\"right\")) / data_l\n f1 = (np.searchsorted(dd, X[i, k], side=\"right\")) / data_l\n f2 = (np.searchsorted(dd, X_prime[j, k], side=\"right\")) / data_l\n accu *= f / (f1 * f2 * (1 - f2) * (1 - f1) + eps)\n v[i, j] = accu\n return v / np.max(v)",
"_____no_output_____"
],
[
"@jit('float64[:,:](float64[:,:],float64[:,:],float64[:,:])', parallel=True,\n nogil=True)\ndef v_mat_star_sigma_oneside_approx(X, X_prime, data):\n data_sorted = np.sort(data, 0)\n data_l = data.shape[0]\n eps = 1 / (data_l) # Just an idea...\n v = np.zeros(shape=(X.shape[0], X_prime.shape[0]))\n\n for i in prange(X.shape[0]):\n for j in range(X_prime.shape[0]):\n accu = 1\n for k in range(X.shape[1]):\n dd = data_sorted[:, k]\n f = (data_l - np.searchsorted(dd, np.maximum(X[i, k],\n X_prime[j, k]),\n side=\"right\")) / data_l\n f1 = (np.searchsorted(dd, X[i, k], side=\"right\")) / data_l\n f2 = (np.searchsorted(dd, X_prime[j, k], side=\"right\")) / data_l\n accu *= f / ((1 - f1) * (1 - f2) + eps)\n v[i, j] = accu\n return v / np.max(v)",
"_____no_output_____"
],
[
"@jit('float64[:,:](float64[:,:],float64[:,:],float64[:,:])', parallel=True,\n nogil=True)\ndef v_mat_star_sigma_rev_approx(X, X_prime, data):\n data_sorted = np.sort(data, 0)\n data_l = data.shape[0]\n eps = 1 / (data_l) # Just an idea...\n v = np.zeros(shape=(X.shape[0], X_prime.shape[0]))\n\n for i in prange(X.shape[0]):\n for j in range(X_prime.shape[0]):\n accu = 1\n for k in range(X.shape[1]):\n dd = data_sorted[:, k]\n f = (data_l - np.searchsorted(dd, np.maximum(X[i, k],\n X_prime[j, k]),\n side=\"right\")) / data_l\n f1 = (np.searchsorted(dd, X[i, k], side=\"right\")) / data_l\n f2 = (np.searchsorted(dd, X_prime[j, k], side=\"right\")) / data_l\n accu *= f * f1 * (1 - f1) * f2 * (1 - f2)\n v[i, j] = accu\n return v",
"_____no_output_____"
],
[
"@mem.cache\n@jit('float64[:,:](float64[:,:],float64[:,:],float64[:,:])', parallel=True,\n nogil=True)\ndef v_mat_star_sigma_oneside(X, X_prime, X_num):\n X_num_sorted = np.sort(X_num, 0)\n X_num_l = X_num.shape[0]\n eps = 1e-6\n v = np.zeros(shape=(X.shape[0], X_prime.shape[0]))\n for i in prange(X.shape[0]):\n for j in range(X_prime.shape[0]):\n acc = 1\n for k in range(X.shape[1]):\n f = np.searchsorted(X_num_sorted[:, k],\n np.maximum(X[i, k], X_prime[j, k]),\n side=\"right\") / X_num_l\n acc *= 1 / (1 + f)\n # acc *= 1/(f*(1-f)+eps)\n v[i, j] = acc\n return v / np.max(v)",
"_____no_output_____"
],
[
"@jit('float64[:,:](float64[:,:],float64[:,:],float64[:,:])', nopython=True,\n parallel=True, nogil=True)\ndef v_mat_star_sigma_log(X, X_prime, dummy):\n v = np.zeros(shape=(X.shape[0], X_prime.shape[0]))\n for i in prange(X.shape[0]):\n for j in range(X_prime.shape[0]):\n acc = 1\n for k in range(X.shape[1]):\n acc *= -np.log(np.maximum(X[i, k], X_prime[j, k]))\n v[i, j] = acc\n return v / np.max(v)",
"_____no_output_____"
],
[
"import cvxopt\n\n\ndef DensityRatio_QP(X_den, X_num, kernel, g, v_matrix, ridge=1e-3):\n \"\"\"\n The function computes a model of the density ratio.\n The function is in the form $A^T K$\n The function returns the coefficients $\\alpha_i$ and the bias term b\n \"\"\"\n l_den, d = X_den.shape\n l_num, d_num = X_num.shape\n\n # TODO: Check d==d_num\n\n ones_num = np.matrix(np.ones(shape=(l_num, 1)))\n zeros_den = np.matrix(np.zeros(shape=(l_den, 1)))\n\n gram = kernel(X_den)\n K = np.matrix(gram + ridge * np.eye(l_den))\n # K = np.matrix(gram) # No ridge\n\n print(\"K max, min: %e, %e\" % (np.max(K), np.min(K)))\n\n data = np.concatenate((X_den, X_num))\n if callable(v_matrix):\n V = np.matrix(v_matrix(X_den, X_den, data))\n V_star = np.matrix(v_matrix(X_den, X_num, data)) # l_den by l_num\n else:\n return -1\n\n print(\"V max,min: %e, %e\" % (np.max(V), np.min(V)))\n print(\"V_star max,min: %e, %e\" % (np.max(V_star), np.min(V_star)))\n\n tgt1 = K * V * K\n print(\"K*V*K max, min: %e, %e\" % (np.max(tgt1), np.min(tgt1)))\n\n tgt2 = g * K\n print(\"g*K max, min: %e, %e\" % (np.max(tgt2), np.min(tgt2)))\n\n P = cvxopt.matrix(2 * (tgt1 + tgt2))\n\n q_ = -2 * (l_den / l_num) * (K * V_star * ones_num)\n\n print(\"q max, min: %e, %e\" % (np.max(q_), np.min(q_)))\n q = cvxopt.matrix(q_)\n\n #### Let's construct the inequality constraints\n\n # Now create G and h\n G = cvxopt.matrix(-K)\n h = cvxopt.matrix(zeros_den)\n # G = cvxopt.matrix(np.vstack((-K,-np.eye(l_den))))\n # h = cvxopt.matrix(np.vstack((zeros_den,zeros_den)))\n\n # Let's construct the equality constraints\n\n A = cvxopt.matrix((1 / l_den) * K * V_star * ones_num).T\n b = cvxopt.matrix(np.ones(1))\n\n return cvxopt.solvers.qp(P, q, G, h, A, b, options=dict(\n maxiters=50)) #### For expediency, we limit the number of iterations",
"_____no_output_____"
],
[
"def RKHS_Eval(A, X_test, X_train, kernel, c=0):\n gramTest = kernel(X_test, X_train)\n\n return np.dot(gramTest, A) + c",
"_____no_output_____"
],
[
"from sklearn.base import BaseEstimator, RegressorMixin\n\nfrom sklearn.preprocessing import StandardScaler\n\n\nclass DensityRatio_Estimator(BaseEstimator, RegressorMixin):\n \"\"\"Custom Regressor for density ratio estimation\"\"\"\n\n def __init__(self, krnl=rbf_krnl(1), g=1, v_matrix=v_mat_sigma_eye):\n self.krnl = krnl\n self.g = g\n self.v_matrix = v_matrix\n\n def fit(self, X_den, X_num):\n\n self.X_train_ = np.copy(X_den)\n\n res = DensityRatio_QP(self.X_train_,\n X_num,\n kernel=self.krnl,\n g=self.g,\n v_matrix=self.v_matrix)\n self.A_ = res['x']\n return self\n\n def predict(self, X):\n if self.A_ is None:\n return None # I should raise an exception\n\n return self.predict_proba(X)\n\n def predict_proba(self, X):\n if self.A_ is None:\n return None # I should raise an exception\n\n pred = RKHS_Eval(A=self.A_,\n X_test=X,\n X_train=self.X_train_,\n kernel=self.krnl)\n return np.clip(pred, a_min=0, a_max=None, out=pred)",
"_____no_output_____"
],
[
"def NeymanPearson_VMatrix(p_a, p_b, h0, p_a_test, p_b_test, g=0.5,\n krnl=ink_lin_krnl(1), v_matrix=v_mat_sigma_ratio,\n diag=True):\n p_h0 = np.hstack((p_a[h0].reshape(-1, 1), p_b[h0].reshape(-1, 1)))\n p_h1 = np.hstack((p_a[~h0].reshape(-1, 1), p_b[~h0].reshape(-1, 1)))\n\n dre = DensityRatio_Estimator(v_matrix=v_matrix,\n krnl=krnl,\n g=g)\n dre.fit(X_den=p_h1, X_num=p_h0)\n\n lmbd_h0 = np.clip(dre.predict(p_h0), 1e-10, None)\n v, q = ecdf(lmbd_h0)\n v = np.concatenate(([0],\n v)) # This may be OK for this application but it is not correct in general\n q = np.concatenate(([0], q))\n NP_calibr = interp1d(v, q, bounds_error=False, fill_value=\"extrapolate\")\n\n p_test = np.hstack((p_a_test.reshape(-1, 1), p_b_test.reshape(-1, 1)))\n p_npcomb = NP_calibr(dre.predict(p_test))\n\n if diag:\n f, axs = plt.subplots(1, 3, figsize=(12, 4))\n x = np.linspace(0, 1, 100)\n xx, yy = np.meshgrid(x, x)\n lambd_grid = dre.predict(\n np.hstack((xx.reshape(-1, 1), yy.reshape(-1, 1))))\n comb_p_grid = NP_calibr(lambd_grid)\n # im = axs[0].imshow(lambd_grid.reshape(100,100),interpolation=None,origin='lower') \n im = axs[0].contourf(xx, yy, lambd_grid.reshape(100, 100));\n f.colorbar(im, ax=axs[0])\n axs[0].set_title(\"Lambda\")\n\n # im = axs[2].imshow(comb_p_grid.reshape(100,100),interpolation=None,origin='lower')\n im = axs[2].contourf(xx, yy, comb_p_grid.reshape(100, 100), vmin=0,\n vmax=1);\n f.colorbar(im, ax=axs[2])\n axs[2].set_title(\"Combined p-value\")\n\n x = np.linspace(0, np.max(lmbd_h0), 100)\n axs[1].plot(x, NP_calibr(x))\n axs[1].set_title(\"Lambda to p-value\")\n print(\"Max:\", np.max(comb_p_grid))\n print(\"Min:\", np.min(comb_p_grid))\n\n f.tight_layout()\n\n return np.clip(p_npcomb.ravel(), 0, 1)",
"_____no_output_____"
],
[
"p_0_a.shape",
"_____no_output_____"
],
[
"%%time\nnp_kwargs = dict(g=1e-5, krnl=rbf_krnl(4.5),\n v_matrix=v_mat_star_sigma_rev_approx)\np_0_npcomb_vm = NeymanPearson_VMatrix(p_0_a_cal, p_0_b_cal, y_cal == 0, p_0_a,\n p_0_b, **np_kwargs)\np_1_npcomb_vm = NeymanPearson_VMatrix(p_1_a_cal, p_1_b_cal, y_cal == 1, p_1_a,\n p_1_b, **np_kwargs)\nc_cf_npcomb_vm, precision_f_npcomb_vm = cp_statistics(p_0_npcomb_vm,\n p_1_npcomb_vm, None, None,\n y_test, \"_np_v\",\n \" NP (V-Matrix)\");",
"_____no_output_____"
]
],
[
[
"+ NNP Ideal | NA \t 606\t14\t 549\t20\t0\t0\t1880\t1931\n+ g=1e-6,krnl=rbf_krnl(6),v_matrix=v_mat_star_sigma_rev_approx | 0.01\t 584\t26\t 67\t21\t0\t0\t1890\t2412\n+ g=1e-5,krnl=rbf_krnl(6),v_matrix=v_mat_star_sigma_rev_approx | 0.01\t 562\t17\t 335\t18\t0\t0\t1921\t2147 ## But better above 0.01\n+ g=1e-5,krnl=rbf_krnl(5),v_matrix=v_mat_star_sigma_rev_approx | 0.01\t 543\t15\t 549\t18\t0\t0\t1942\t1933\n+ g=1e-5,krnl=rbf_krnl(4.5),v_matrix=v_mat_star_sigma_rev_approx | 0.01\t 545\t16\t 554\t19\t0\t0\t1939\t1927",
"_____no_output_____"
]
],
[
[
"ps_0 = np.c_[p_0_a, p_0_b]\nps_1 = np.c_[p_1_a, p_1_b]\n\nps_0_cal = np.c_[p_0_a_cal, p_0_b_cal]\nps_1_cal = np.c_[p_1_a_cal, p_1_b_cal]",
"_____no_output_____"
]
],
[
[
"## Base A",
"_____no_output_____"
]
],
[
[
"c_cf_a, precision_a = cp_statistics(p_0_a, p_1_a, None, None, y_test, \"_a\",\n \" base CP a\");",
"_____no_output_____"
]
],
[
[
"## Base B",
"_____no_output_____"
]
],
[
[
"c_cf_b, precision_b = cp_statistics(p_0_b, p_1_b, None, None, y_test, \"_a\",\n \" base CP a\");",
"_____no_output_____"
],
[
"def confusion_matrices(c_cf, epsilons=(0.01, 0.05, 0.10)):\n idx = pd.IndexSlice\n\n for eps in epsilons:\n c_cf_eps = c_cf.loc[idx[:, eps], idx[:]]\n c_cf_eps.index = c_cf_eps.index.droplevel(1)\n c_cf_eps.index.name = \"$\\epsilon=%0.2f$\" % eps\n\n name_part = (\"_%0.2f\" % eps).replace('.', '_')\n with open(pics_base_name + '_cf' + name_part + '.txt', \"w\") as mf:\n print(c_cf_eps.to_latex(), file=mf)\n\n display(c_cf_eps)\n\n # %%\n\n\np_plane_plot(p_0_a, p_1_a, y_test, \"Conformal Predictor A\", \"_a\")",
"_____no_output_____"
],
[
"p_plane_plot(p_0_npde, p_1_npde, y_test, \"NNP combination\", \"_nnp\")",
"_____no_output_____"
],
[
"p_plane_plot(p_0_f, p_1_f, y_test, \"Fischer combination\", \"_f\")",
"_____no_output_____"
],
[
"p_plane_plot(p_0_npidcomb, p_1_npidcomb, y_test, \"NNP (ideal)\", \"_npid\")",
"_____no_output_____"
],
[
"p_plane_plot(p_0_npcomb_vm, p_1_npcomb_vm, y_test, \"NP (V-Matrix)\", \"_npvmat\")",
"_____no_output_____"
],
[
"cfs_to_compare = [\n (c_cf_a, \"a\"),\n (c_cf_b, \"b\"),\n\n (c_cf_npid, \"NNP Ideal\"),\n (c_cf_npcomb, \"NNP\"),\n (c_cf_npde, \"NNP (spline)\"),\n (c_cf_npcomb_vm, \"NP V-Matrix\"),\n\n (c_cf_avg, \"Arith\"),\n (c_cf_geom, \"Geom\"),\n (c_cf_max, \"Max\"),\n (c_cf_min, \"Min\"),\n (c_cf_bonf, \"Bonferroni\"),\n\n (c_cf_avg_q, \"Arithmetic (Quantile)\"),\n (c_cf_f, \"Geometric (Quantile) Fisher\"),\n (c_cf_max_q, \"Max (Quantile)\"),\n (c_cf_min_q, \"Min (Quantile)\"),\n (c_cf_bonf_q, \"Bonferroni (Quantile)\"),\n\n (c_cf_avg_ECDF, \"Arithmetic (ECDF)\"),\n (c_cf_geom_ECDF, \"Geometric (ECDF)\"),\n (c_cf_f_ECDF, \"Fisher (ECDF)\"),\n (c_cf_max_ECDF, \"Max (ECDF)\"),\n]\n\ncfs, method_names = zip(*cfs_to_compare)\n\nc_cf = pd.concat(cfs,\n keys=method_names, names=(\"p-values\", \"epsilon\"))\nconfusion_matrices(c_cf)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbccdfc0a03214bb173b4e336ad2da650523389 | 11,920 | ipynb | Jupyter Notebook | BCM/index.ipynb | vbartle/resources | 3a698c4fd948384daca14c25e5a5dc9d270ffd3c | [
"MIT"
] | 2 | 2019-08-11T20:44:30.000Z | 2019-08-19T12:18:57.000Z | BCM/index.ipynb | NoahChristiansen/resources | 38a0fb75b4e5531a44db325e250815533592ff88 | [
"MIT"
] | 1 | 2019-07-16T13:03:52.000Z | 2021-04-13T07:43:17.000Z | BCM/index.ipynb | test-v1/resources | 601d944f7490890dcd6ca6695dede4947501b10a | [
"MIT"
] | 1 | 2020-01-31T22:10:44.000Z | 2020-01-31T22:10:44.000Z | 60.507614 | 400 | 0.683893 | [
[
[
"# Bayesian Cognitive Modeling in PyMC3\nPyMC3 port of Lee and Wagenmakers' [Bayesian Cognitive Modeling - A Practical Course](http://bayesmodels.com)\n\nAll the codes are in jupyter notebook with the model explain in distributions (as in the book). Background information of the models please consult the book. You can also compare the result with the original code associated with the book ([WinBUGS and JAGS](https://webfiles.uci.edu/mdlee/Code.zip); [Stan](https://github.com/stan-dev/example-models/tree/master/Bayesian_Cognitive_Modeling))\n\n_All the codes are currently tested under PyMC3 v3.3 master with theano 1.0_",
"_____no_output_____"
],
[
"## Part II - PARAMETER ESTIMATION\n\n### [Chapter 3: Inferences with binomials](./ParameterEstimation/Binomial.ipynb) \n [3.1 Inferring a rate](./ParameterEstimation/Binomial.ipynb#3.1-Inferring-a-rate) \n [3.2 Difference between two rates](./ParameterEstimation/Binomial.ipynb#3.2-Difference-between-two-rates) \n [3.3 Inferring a common rate](./ParameterEstimation/Binomial.ipynb#3.3-Inferring-a-common-rate) \n [3.4 Prior and posterior prediction](./ParameterEstimation/Binomial.ipynb#3.4-Prior-and-posterior-prediction) \n [3.5 Posterior prediction](./ParameterEstimation/Binomial.ipynb#3.5-Posterior-Predictive) \n [3.6 Joint distributions](./ParameterEstimation/Binomial.ipynb#3.6-Joint-distributions) \n \n### [Chapter 4: Inferences with Gaussians](./ParameterEstimation/Gaussian.ipynb) \n [4.1 Inferring a mean and standard deviation](./ParameterEstimation/Gaussian.ipynb#4.1-Inferring-a-mean-and-standard-deviation) \n [4.2 The seven scientists](./ParameterEstimation/Gaussian.ipynb#4.2-The-seven-scientists) \n [4.3 Repeated measurement of IQ](./ParameterEstimation/Gaussian.ipynb#4.3-Repeated-measurement-of-IQ) \n \n### [Chapter 5: Some examples of data analysis](./ParameterEstimation/DataAnalysis.ipynb) \n [5.1 Pearson correlation](./ParameterEstimation/DataAnalysis.ipynb#5.1-Pearson-correlation) \n [5.2 Pearson correlation with uncertainty](./ParameterEstimation/DataAnalysis.ipynb#5.2-Pearson-correlation-with-uncertainty) \n [5.3 The kappa coefficient of agreement](./ParameterEstimation/DataAnalysis.ipynb#5.3-The-kappa-coefficient-of-agreement) \n [5.4 Change detection in time series data](./ParameterEstimation/DataAnalysis.ipynb#5.4-Change-detection-in-time-series-data) \n [5.5 Censored data](./ParameterEstimation/DataAnalysis.ipynb#5.5-Censored-data) \n [5.6 Recapturing planes](./ParameterEstimation/DataAnalysis.ipynb#5.6-Recapturing-planes) \n \n### [Chapter 6: Latent-mixture models](./ParameterEstimation/Latent-mixtureModels.ipynb) \n [6.1 Exam scores](./ParameterEstimation/Latent-mixtureModels.ipynb#6.1-Exam-scores) \n [6.2 Exam scores with individual differences](./ParameterEstimation/Latent-mixtureModels.ipynb#6.2-Exam-scores-with-individual-differences) \n [6.3 Twenty questions](./ParameterEstimation/Latent-mixtureModels.ipynb#6.3-Twenty-questions) \n [6.4 The two-country quiz](./ParameterEstimation/Latent-mixtureModels.ipynb#6.4-The-two-country-quiz) \n [6.5 Assessment of malingering](./ParameterEstimation/Latent-mixtureModels.ipynb#6.5-Assessment-of-malingering) \n [6.6 Individual differences in malingering](./ParameterEstimation/Latent-mixtureModels.ipynb#6.6-Individual-differences-in-malingering) \n [6.7 Alzheimer’s recall test cheating](./ParameterEstimation/Latent-mixtureModels.ipynb#6.7-Alzheimer's-recall-test-cheating) ",
"_____no_output_____"
],
[
"## Part III - MODEL SELECTION\n\n### [Chapter 8: Comparing Gaussian means](./ModelSelection/ComparingGaussianMeans.ipynb) \n [8.1 One-sample comparison](./ModelSelection/ComparingGaussianMeans.ipynb#8.1-One-sample-comparison) \n [8.2 Order-restricted one-sample comparison](./ModelSelection/ComparingGaussianMeans.ipynb#8.2-Order-restricted-one-sample-comparison) \n [8.3 Two-sample comparison](./ModelSelection/ComparingGaussianMeans.ipynb#8.3-Two-sample-comparison)\n \n### [Chapter 9: Comparing binomial rates](./ModelSelection/ComparingBinomialRates.ipynb) \n [9.1 Equality of proportions](./ModelSelection/ComparingBinomialRates.ipynb#9.1-Equality-of-proportions) \n [9.2 Order-restricted equality of proportions](./ModelSelection/ComparingBinomialRates.ipynb#9.2-Order-restricted-equality-of-proportions) \n [9.3 Comparing within-subject proportions](./ModelSelection/ComparingBinomialRates.ipynb#9.3-Comparing-within-subject-proportions) \n [9.4 Comparing between-subject proportions](./ModelSelection/ComparingBinomialRates.ipynb#9.4-Comparing-between-subject-proportions) \n [9.5 Order-restricted between-subjects comparison](./ModelSelection/ComparingBinomialRates.ipynb#9.5-Order-restricted-between-subject-proportions) ",
"_____no_output_____"
],
[
"## Part IV - CASE STUDIES\n\n### [Chapter 10: Memory retention](./CaseStudies/MemoryRetention.ipynb) \n [10.1 No individual differences](./CaseStudies/MemoryRetention.ipynb#10.1-No-individual-differences) \n [10.2 Full individual differences](./CaseStudies/MemoryRetention.ipynb#10.2-Full-individual-differences) \n [10.3 Structured individual differences](./CaseStudies/MemoryRetention.ipynb#10.3-Structured-individual-differences) \n \n### [Chapter 11: Signal detection theory](./CaseStudies/SignalDetectionTheory.ipynb) \n [11.1 Signal detection theory](./CaseStudies/SignalDetectionTheory.ipynb#11.1-Signal-detection-theory) \n [11.2 Hierarchical signal detection theory](./CaseStudies/SignalDetectionTheory.ipynb#11.2-Hierarchical-signal-detection-theory) \n [11.3 Parameter expansion](./CaseStudies/SignalDetectionTheory.ipynb#11.3-Parameter-expansion) \n \n### [Chapter 12: Psychophysical functions](./CaseStudies/PsychophysicalFunctions.ipynb) \n [12.1 Psychophysical functions](./CaseStudies/PsychophysicalFunctions.ipynb#12.1-Psychophysical-functions) \n [12.2 Psychophysical functions under contamination](./CaseStudies/PsychophysicalFunctions.ipynb#12.2-Psychophysical-functions-under-contamination) \n \n### [Chapter 13: Extrasensory perception](./CaseStudies/ExtrasensoryPerception.ipynb) \n [13.1 Evidence for optional stopping](./CaseStudies/ExtrasensoryPerception.ipynb#13.1-Evidence-for-optional-stopping) \n [13.2 Evidence for differences in ability](./CaseStudies/ExtrasensoryPerception.ipynb#13.2-Evidence-for-differences-in-ability) \n [13.3 Evidence for the impact of extraversion](./CaseStudies/ExtrasensoryPerception.ipynb#13.3-Evidence-for-the-impact-of-extraversion) \n \n### [Chapter 14: Multinomial processing trees](./CaseStudies/MultinomialProcessingTrees.ipynb) \n [14.1 Multinomial processing model of pair-clustering](./CaseStudies/MultinomialProcessingTrees.ipynb#14.1-Multinomial-processing-model-of-pair-clustering) \n [14.2 Latent-trait MPT model](./CaseStudies/MultinomialProcessingTrees.ipynb#14.2-Latent-trait-MPT-model) \n \n### [Chapter 15: The SIMPLE model of memory](./CaseStudies/TheSIMPLEModelofMemory.ipynb) \n [15.1 The SIMPLE model](./CaseStudies/TheSIMPLEModelofMemory.ipynb#15.1-The-SIMPLE-model) \n [15.2 A hierarchical extension of SIMPLE](./CaseStudies/TheSIMPLEModelofMemory.ipynb#15.2-A-hierarchical-extension-of-SIMPLE) \n \n### [Chapter 16: The BART model of risk taking](./CaseStudies/TheBARTModelofRiskTaking.ipynb) \n [16.1 The BART model](./CaseStudies/TheBARTModelofRiskTaking.ipynb#16.1-The-BART-model) \n [16.2 A hierarchical extension of the BART model](./CaseStudies/TheBARTModelofRiskTaking.ipynb#16.2-A-hierarchical-extension-of-the-BART-model) \n\n### [Chapter 17: The GCM model of categorization](./CaseStudies/TheGCMModelofCategorization.ipynb) \n [17.1 The GCM model](./CaseStudies/TheGCMModelofCategorization.ipynb#17.1-The-GCM-model) \n [17.2 Individual differences in the GCM](./CaseStudies/TheGCMModelofCategorization.ipynb#17.2-Individual-differences-in-the-GCM) \n [17.3 Latent groups in the GCM](./CaseStudies/TheGCMModelofCategorization.ipynb#17.3-Latent-groups-in-the-GCM) \n \n### [Chapter 18: Heuristic decision-making](./CaseStudies/HeuristicDecisionMaking.ipynb) \n [18.1 Take-the-best](./CaseStudies/HeuristicDecisionMaking.ipynb#18.1-Take-the-best) \n [18.2 Stopping](./CaseStudies/HeuristicDecisionMaking.ipynb#18.2-Stopping) \n [18.3 Searching](./CaseStudies/HeuristicDecisionMaking.ipynb#18.3-Searching) \n [18.4 Searching and 
stopping](./CaseStudies/HeuristicDecisionMaking.ipynb#18.4-Searching-and-stopping) \n \n### [Chapter 19: Number concept development](./CaseStudies/NumberConceptDevelopment.ipynb) \n [19.1 Knower-level model for Give-N](./CaseStudies/NumberConceptDevelopment.ipynb#19.1-Knower-level-model-for-Give-N) \n [19.2 Knower-level model for Fast-Cards](./CaseStudies/NumberConceptDevelopment.ipynb#19.2-Knower-level-model-for-Fast-Cards) \n [19.3 Knower-level model for Give-N and Fast-Cards](./CaseStudies/NumberConceptDevelopment.ipynb#19.3-Knower-level-model-for-Give-N-and-Fast-Cards) ",
"_____no_output_____"
]
],
[
[
"# Python Environment and library version\n%load_ext watermark\n%watermark -v -m -p pymc3,theano,scipy,numpy,pandas,matplotlib,seaborn",
"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
ecbcd87ab49227dd8e8a91504f67d64dc31e2a25 | 36,479 | ipynb | Jupyter Notebook | Emotions Train.ipynb | Vineethpaul09/Facial-Emotions-Detection-Training | 23a78dcee3380b34e302d593c0dc9ea28dc1482d | [
"MIT"
] | null | null | null | Emotions Train.ipynb | Vineethpaul09/Facial-Emotions-Detection-Training | 23a78dcee3380b34e302d593c0dc9ea28dc1482d | [
"MIT"
] | null | null | null | Emotions Train.ipynb | Vineethpaul09/Facial-Emotions-Detection-Training | 23a78dcee3380b34e302d593c0dc9ea28dc1482d | [
"MIT"
] | null | null | null | 49.163073 | 8,436 | 0.637134 | [
[
[
"from project_libraries import *\nfrom keras.models import model_from_json\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.optimizers import *\nfrom keras.layers.normalization import BatchNormalization\nimport seaborn as sns\nimport timeit\nfrom sklearn.model_selection import KFold",
"Using TensorFlow backend.\n"
]
],
[
[
"## Loading the data\n\nThe data consists of 48x48 pixel grayscale images of faces. \n\n\nGenerally facial expression into one of seven categories that are : 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral\n \n\nThe training set consists of 28,709 examples and the public test set consists of 3,589 examples.\n",
"_____no_output_____"
]
],
[
[
"# loading the data\ndata = './sa.csv'\n# In the dataset lableling map is 0 as anger, 1=Disgust, 2=Fear.. etc.\nlabel_map = ['Anger', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']\n# the dataset has headers like emotion, pixels and useage\nheaders =['emotion','pixels','usage']\ndf=pd.read_csv(data ,names=headers, na_filter=False)\nim=df['pixels']\n# head data \ndf.head(10)",
"_____no_output_____"
]
],
[
[
"## Data visualization\nBy this can able understand the dataset, number of data for each class",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"./sa.csv\")\ndf.head()",
"_____no_output_____"
],
[
"plt.figure(figsize=(9,4))\nsns.countplot(x='emotion', data=df)",
"_____no_output_____"
]
],
[
[
"## Analyze the data\nLets now analyze how images in the dataset look like how much data is present for each class and how many number of classes are present.",
"_____no_output_____"
]
],
[
[
"# Function dataset file is for defining the number of classes\ndef Datasetfile(filname):\n # images are 48x48\n # N = 35887\n Y = [] # y is used name classes\n X = [] # x is used for remaining data\n first = True\n for line in open(filname):\n if first:\n first = False\n else:\n row = line.split(',')\n Y.append(int(row[0]))\n X.append([int(p) for p in row[1].split()])\n X, Y = np.array(X) / 255.0, np.array(Y)\n return X, Y",
"_____no_output_____"
],
[
"X, Y = Datasetfile(data)\nname_class = set(Y)\nprint(\"The name of classes\")\nif 0 in name_class:\n print(\"Anger\")\n if 1 in name_class:\n print(\"Disgust\")\n if 2 in name_class:\n print(\"Fear\")\n if 3 in name_class:\n print(\"Happy\")\n if 4 in name_class:\n print(\"Sad\")\n if 5 in name_class:\n print(\"Suprise\")\n if 6 in name_class:\n print(\"Neutral\")\nnum_class = len(name_class)\nprint(\"The number of classes are:\",num_class)",
"The name of classes\nAnger\nDisgust\nFear\nHappy\nSad\nSuprise\nNeutral\nThe number of classes are: 7\n"
],
[
"# keras with tensorflow backend\nN, D = X.shape\nX = X.reshape(N, 48, 48, 1)",
"_____no_output_____"
]
],
[
[
"splitting the data into train data and test data.\nUsing the 90% data for the training and remaining for testing.",
"_____no_output_____"
]
],
[
[
"X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.1, random_state=0)\ny_train = (np.arange(num_class) == y_train[:, None]).astype(np.float32)\ny_test = (np.arange(num_class) == y_test[:, None]).astype(np.float32)",
"_____no_output_____"
]
],
[
[
"## The model:\nWith the help of CNN using 6 convolutional layers including batch normalization, activation, max pooling, dropout layers and flatten layyers. 2 full connected dense layers and finally dense layer.",
"_____no_output_____"
]
],
[
[
"def my_model():\n \n # Sequential is allow the build the model layers by layers\n model = Sequential()\n # input shape \n input_shape = (48,48,1)\n # For the output volume size matches the input volume size, by setting the value to the “same”.\n padding = 'same'\n # activation Relu will help to bring the non linearity in the model\n activation = 'relu'\n #technique to coordinate the update of multiple layers in the modeland also accelerate learning process\n #Normalization = BatchNormalization()\n model.add(Conv2D(64, (5, 5), input_shape=input_shape,activation=activation, padding=padding))\n model.add(Conv2D(64, (5, 5), activation=activation, padding=padding))\n model.add(BatchNormalization())\n model.add(MaxPooling2D(pool_size=(2, 2)))\n \n model.add(Conv2D(128, (5, 5),activation=activation, padding=padding))\n model.add(Conv2D(128, (5, 5),activation=activation, padding=padding))\n model.add(BatchNormalization())\n model.add(MaxPooling2D(pool_size=(2, 2)))\n \n model.add(Conv2D(256, (3, 3),activation=activation, padding=padding))\n model.add(Conv2D(256, (3, 3),activation=activation, padding=padding))\n model.add(BatchNormalization())\n model.add(MaxPooling2D(pool_size=(2, 2)))\n \n # Flatten helps converting the data into 1-dimension array. for inputting full connected dense layer\n model.add(Flatten())\n model.add(Dense(128))\n model.add(BatchNormalization())\n model.add(Activation('relu')) # activation function\n # The dropped while training this effect makes network less sensitive, also reduce the problem - less overfit\n model.add(Dropout(0.25))\n \n model.add(Dense(7))\n model.add(Activation('softmax'))\n # compile the model with the parameters.\n model.compile(loss='categorical_crossentropy', metrics=['accuracy'],optimizer='adam')\n\n return model\n# renaming the my_model as model\nmodel=my_model()\n# model summary\nmodel.summary()",
"WARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:186: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:133: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. 
Rate should be set to `rate = 1 - keep_prob`.\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\nWARNING:tensorflow:From c:\\users\\vineethpaulpr\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:3295: The name tf.log is deprecated. Please use tf.math.log instead.\n\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 48, 48, 64) 1664 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 48, 48, 64) 102464 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 48, 48, 64) 256 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 24, 24, 64) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 24, 24, 128) 204928 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 24, 24, 128) 409728 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 24, 24, 128) 512 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 12, 12, 128) 0 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 12, 12, 256) 295168 \n_________________________________________________________________\nconv2d_6 (Conv2D) (None, 12, 12, 256) 590080 \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, 12, 12, 256) 1024 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 6, 6, 256) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 9216) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 128) 1179776 \n_________________________________________________________________\nbatch_normalization_4 (Batch (None, 128) 512 \n_________________________________________________________________\nactivation_1 (Activation) (None, 128) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 7) 903 \n_________________________________________________________________\nactivation_2 (Activation) (None, 7) 0 \n=================================================================\nTotal params: 2,787,015\nTrainable params: 2,785,863\nNon-trainable params: 1,152\n_________________________________________________________________\n"
],
[
"#Define the K-fold Cross Validator\nnum_folds = 3\nacc_per_fold = 0;\nloss_per_fold = 0;\n\npath_model = 'model_filter.h5'\nkfold = KFold(n_splits=num_folds, shuffle=True)\n\n# K-fold Cross Validation model evaluation\nfold_no = 1\nfor train, test in kfold.split(X_train, y_train):\n\n # Define the model architecture\n model = my_model()\n \n # Generate a print\n print('------------------------------------------------------------------------')\n print(f'Training for fold {fold_no} ...')\n # Fit data to model\n history = model.fit(x=X_train, y=y_train,epochs=1,batch_size=64,verbose=1,validation_data=(X_test,y_test),shuffle=True, callbacks=[ModelCheckpoint(filepath=path_model, verbose=1, save_best_only=True),])\n\n\n # Generate generalization metrics\n scores = model.evaluate(X_test, y_test, verbose=0)\n print(f'Score for fold {fold_no}: {model.metrics_names[0]} of {scores[0]}; {model.metrics_names[1]} of {scores[1]*100}%')\n acc_per_fold.append(scores[1] * 100)\n loss_per_fold.append(scores[0])\n\n # Increase fold number\n fold_no = fold_no + 1",
"------------------------------------------------------------------------\nTraining for fold 1 ...\nTrain on 64596 samples, validate on 7178 samples\nEpoch 1/1\n17344/64596 [=======>......................] - ETA: 33:37 - loss: 1.8199 - acc: 0.2858"
],
[
"# for understanding the time consuming of the model, \n# starting the time\nstart = timeit.default_timer()\npath_model='model_filter.h5' # saving model\nK.tensorflow_backend.clear_session() # destroys the current graph and builds a new one\nmodel=my_model() # create the model\nK.set_value(model.optimizer.lr,1e-3) # set the learning rate at 0.001\n# fit the model\n# parameter are x as x train data\n# y as y train data\n# batch size is 64, which means it take 64 samples from the dataset and train network. the defult value is 32, if its 64,128,256 are good for the model\n# epoches is 20\n# for validation data we are using X_test and Y_test.\nh=model.fit(x=X_train, y=y_train,epochs=20,batch_size=64,verbose=1,validation_data=(X_test,y_test),shuffle=True, callbacks=[ModelCheckpoint(filepath=path_model, verbose=1, save_best_only=True),])\n# time is stop at the end of all the epoches\nstop = timeit.default_timer()\n# printing the time \nprint('Time: ', stop - start) ",
"_____no_output_____"
]
],
[
[
"## Model evaluation",
"_____no_output_____"
]
],
[
[
"\n# evaluting the model with x_test and y_test.\ntest_eval = model.evaluate(X_test, y_test, verbose=0)\nprint('Test loss:', test_eval[0])\nprint('Test accuracy:', test_eval[1])",
"_____no_output_____"
]
],
[
[
"Graphical Representation of model accuracy and model loss for both training and testing",
"_____no_output_____"
]
],
[
[
"plt.plot(h.history['acc'])\nplt.plot(h.history['val_acc'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n# summarize history for loss\nplt.plot(h.history['loss'])\nplt.plot(h.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbcdaa02ee2ad8e31722e63dbdf0de1c36676f4 | 42,509 | ipynb | Jupyter Notebook | continuous-variables/LabMate-AI-one-hot+continuous.ipynb | YANGZ001/OrganicChem-LabMate-AI | fb826d85dd852aab987b9bef6856d8da6a4bd9be | [
"MIT"
] | null | null | null | continuous-variables/LabMate-AI-one-hot+continuous.ipynb | YANGZ001/OrganicChem-LabMate-AI | fb826d85dd852aab987b9bef6856d8da6a4bd9be | [
"MIT"
] | null | null | null | continuous-variables/LabMate-AI-one-hot+continuous.ipynb | YANGZ001/OrganicChem-LabMate-AI | fb826d85dd852aab987b9bef6856d8da6a4bd9be | [
"MIT"
] | null | null | null | 65.803406 | 9,968 | 0.739067 | [
[
[
"# Please read me first\n- Make sure 'train_data.xlsx' and 'all_combos.xlsx' are in the folder 'LabMate-AI'",
"_____no_output_____"
]
],
[
[
"# ALL CODE:\nimport numpy as np \nimport pandas as pd \nfrom sklearn.model_selection import KFold\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.model_selection import GridSearchCV\n# from sklearn.externals.joblib import dump\nimport seaborn as sns\nimport time\nfrom treeinterpreter import treeinterpreter as ti\nimport matplotlib.pyplot as plt\n\ndef get_colu_num_numerical(filename1_df):#X with no label\n colu = filename1_df.iloc[:,:-1].select_dtypes(exclude=['object']) #extract columns\n num = len(colu.columns) #get the number of columns\n return colu, num\n\ndef get_colu_num_catagorical(filename1_df):#X with no label\n colu = filename1_df.iloc[:,:-1].select_dtypes(include=['object']) #extract columns\n num = len(colu.columns) #get the number of columns\n return colu, num\n\n# find index in all_combos of train_data \n#先在all里面加入索引,然后merge 去交际,留下交际,得到index\ndef get_train_OHE(filename1_df, all_combos_df):\n All_OHE_df = pd.get_dummies(all_combos_df) # OHE for all data\n copy_df = all_combos_df.copy()\n copy_df['index'] = np.arange(len(all_combos_df.iloc[:,1]))# Creat a new colum for index in all\n X = filename1_df.iloc[:,:-1] #X\n merge_df = pd.merge(copy_df, X, how='inner') #Merge data from unconverted formation\n index_lst = merge_df['index'].tolist()\n X_OHE_df = All_OHE_df.iloc[index_lst,:]\n return X_OHE_df\n\n# Get the index of each catagorical data\ndef get_OHE_index(filename1_df): #from your X\n #get unique num:\n Numerical_colu, Numerical_num = get_colu_num_numerical(filename1_df)#get the number of numerical columns, one-hot-encoded columns should appear behind them.\n Discrete_colu, Discrete_num = get_colu_num_catagorical(filename1_df)\n unique_num = []\n for i in Discrete_colu.columns:\n unique_num.append(len(Discrete_colu[i].unique()))\n index = [Numerical_num]\n count = Numerical_num\n for i in unique_num[:-1]:\n count = count + int(i)\n index.append(count)\n print('Index of OHE:',index)\n return index\n\n#tree interpreter for feature importance\n#以下是tree interpreter的集成代码\ndef plot_tree(model,filename1_df, all_combos_df): #model, filename1_df is dataframe of train data. All is dataframe of allcombos.\n All_OHE_df = pd.get_dummies(all_combos_df)\n prediction, bias, contributions = ti.predict(model, All_OHE_df)\n contributions = pd.DataFrame(contributions, columns=All_OHE_df.columns) #get all the contributions\n X = filename1_df.iloc[:,:-1]\n index = get_OHE_index(X)\n Numerical_colu, Numerical_num = get_colu_num_numerical(filename1_df)\n Discrete_colu, Discrete_num = get_colu_num_catagorical(filename1_df)\n Discrete_colu_name = Discrete_colu.columns\n unique_num = []\n for i in Discrete_colu.columns:\n unique_num.append(len(Discrete_colu[i].unique()))\n numerical_contri_df = contributions.iloc[:,:Numerical_num]\n discrete_contri_df = pd.DataFrame()\n for i in range(Discrete_num):\n posi = index[i]\n increment = unique_num[i]\n discrete_contri_df[Discrete_colu_name[i]] = contributions.iloc[:,posi:posi+increment].sum(1)\n contri_all_df = pd.concat([numerical_contri_df, discrete_contri_df], axis=1)\n plt.figure(figsize=(12, 20))\n contri_all_df.plot.box()\n plt.title('Feature importance')\n plt.savefig('Feature importance.png')\n plt.show()\n\n#load data\nprint('Welcome! 
Let me work out what is the best experiment for you to run...')\n#filename1 = input(\"Please type in your train data file name: \")\n#filename2 = input('Please type in your chemical space file name: ')\nprint('\\nStart:', time.strftime(\"%Y/%m/%d %H:%M:%S\"))\nfilename1 = 'train_data.xlsx'\nfilename2 = 'all_combos.xlsx'\nprint('\\nLoading: ', filename1)\n\nfilename1_df = pd.read_excel(filename1)\nall_combos_df = pd.read_excel(filename2)\ny = filename1_df.iloc[:,-1]\nX = filename1_df.iloc[:,:-1]\nAll_OHE_df = pd.get_dummies(all_combos_df)\n# df_train_corrected = train.iloc[:,:-1]\nX_OHE_df = get_train_OHE(filename1_df, all_combos_df)\nunseen = pd.concat([All_OHE_df, X_OHE_df]).drop_duplicates(keep=False) # drop when you get index\n#process\n#process\n\nprint('\\nAll good till now. I am figuring out the best method to analyze your data. Bear with me...')\n#General stuff\nseed = 1234\nkfold = KFold(n_splits = 2, random_state = seed)#\nscoring = 'neg_mean_absolute_error'\nmodel = RandomForestRegressor(random_state=seed)\n\n#Parameters to tune\n# estimators = np.arange(100, 1050, 50) \n# estimators_int = np.ndarray.tolist(estimators)\nestimators_int = [200, 500]\nparam_grid = {'n_estimators':estimators_int, 'max_features':['auto', 'sqrt'],\n 'max_depth':[None ]}#\n#search best parameters and train\ngrid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold, n_jobs=-1)\ngrid_result = grid.fit(X_OHE_df, y)\n\n#print the best data cranked out from the grid search\nnp.savetxt('Model_best_score_MAE.txt', [\"best_score: %s\" % grid.best_score_], fmt ='%s')\nbest_params = pd.DataFrame([grid.best_params_], columns=grid.best_params_.keys())\nbest_params.to_csv('best_parameters.txt', sep= '\\t')\n\nprint('\\n... done! It is going to be lightspeed from here on out! :)')\n#predict future data\n\nmodel2 = grid.best_estimator_\n# model2 = RandomForestRegressor(n_estimators = grid.best_params_['n_estimators'], max_features = grid.best_params_['max_features'], max_depth = grid.best_params_['max_depth'], random_state = seed)\nRF_fit = model2.fit(X_OHE_df, y)\npredictions = model2.predict(unseen)\npredictions_df = pd.DataFrame(data=predictions, columns=['Prediction'])\n\n# #feature importance\n# feat_imp = pd.DataFrame(model2.feature_importances_,\n# index=train.iloc[:,1:-1].columns,\n# columns=['Feature_importances'])\n# feat_imp = feat_imp.sort_values(by=['Feature_importances'], ascending = False)\n\nall_predictions = []\nfor e in model2.estimators_:\n all_predictions += [e.predict(unseen)]\n\nvariance = np.var(all_predictions, axis=0)\nvariance_df = pd.DataFrame(data=variance, columns=['Variance'])\n\nassert len(variance) == len(predictions)\nunseen_combos_df = pd.concat([all_combos_df, X]).drop_duplicates(keep=False)\ndf = pd.concat([unseen_combos_df, predictions_df, variance_df], axis=1)\n\nprint('Analysing feature importance for you...')\nplot_tree(model2, filename1_df, all_combos_df)\n#save data\ndf.to_csv('predictions.csv',index=False)\nprint('\\nYou are all set! Have a good one, mate!')#all_comboss.csv\nprint('\\nEnd:', time.strftime(\"%Y/%m/%d %H:%M:%S\"))#train_data.csv",
"Welcome! Let me work out what is the best experiment for you to run...\n\nStart: 2019/09/18 14:33:38\n\nLoading: train_data.xlsx\n\nAll good till now. I am figuring out the best method to analyze your data. Bear with me...\n\n... done! It is going to be lightspeed from here on out! :)\nAnalysing feature importance for you...\nIndex of OHE: [3, 6, 9]\n"
],
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import KFold\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.model_selection import GridSearchCV\n# from sklearn.externals.joblib import dump\nimport seaborn as sns\nimport time\nfrom treeinterpreter import treeinterpreter as ti\nimport matplotlib.pyplot as plt\n\n#load data\nprint('Welcome! Let me work out what is the best experiment for you to run...')\n#filename1 = input(\"Please type in your train data file name: \")\n#filename2 = input('Please type in your chemical space file name: ')\nprint('\\nStart:', time.strftime(\"%Y/%m/%d %H:%M:%S\"))\nfilename1 = 'train_data.xlsx'\nfilename2 = 'all_combos.xlsx'\nprint('\\nLoading: ', filename1)\n\nfilename1_df = pd.read_excel(filename1)\nall_combos_df = pd.read_excel(filename2)\ny = filename1_df.iloc[:,-1]\nX = filename1_df.iloc[:,:-1]\n# all_combos_df['index'] = np.arange(len(all_combos_df.iloc[:,1]))\nAll_OHE_df = pd.get_dummies(all_combos_df)\n# df_train_corrected = train.iloc[:,:-1]\nX_OHE_df = get_train_OHE(filename1_df, all_combos_df)\nunseen = pd.concat([All_OHE_df, X_OHE_df]).drop_duplicates(keep=False) # drop when you get index\n#process\n\nNumerical_colu, Numerical_num = get_colu_num_numerical(filename1_df)#get the number of numerical columns, one-hot-encoded columns should appear behind them.\nDiscrete_colu, Discrete_num = get_colu_num_catagorical(filename1_df)",
"Welcome! Let me work out what is the best experiment for you to run...\n\nStart: 2019/09/18 14:24:56\n\nLoading: train_data.xlsx\n"
],
[
"# find index in all_combos of train_data \n#先在all里面加入索引,然后merge 去交际,留下交际,得到index\ndef get_train_OHE(filename1_df, all_combos_df):\n All_OHE_df = pd.get_dummies(all_combos_df) # OHE for all data\n all_combos_df['index'] = np.arange(len(all_combos_df.iloc[:,1]))# Creat a new colum for index in all\n X = filename1_df.iloc[:,:-1] #X\n merge_df = pd.merge(all_combos_df, X, how='inner') #Merge data from unconverted formation\n index_lst = merge_df['index'].tolist()\n X_OHE_df = All_OHE_df.iloc[index_lst,:]\n return X_OHE_df\n\nX_OHE_df = get_train_OHE(filename1_df, all_combos_df)",
"_____no_output_____"
],
[
"def get_colu_num_numerical(filename1_df):#X with no label\n colu = filename1_df.iloc[:,:-1].select_dtypes(exclude=['object']) #extract columns\n num = len(colu.columns) #get the number of columns\n return colu, num\n\ndef get_colu_num_catagorical(filename1_df):#X with no label\n colu = filename1_df.iloc[:,:-1].select_dtypes(include=['object']) #extract columns\n num = len(colu.columns) #get the number of columns\n return colu, num\n\nNumerical_colu, Numerical_num = get_colu_num_numerical(filename1_df)#get the number of numerical columns, one-hot-encoded columns should appear behind them.\nDiscrete_colu, Discrete_num = get_colu_num_catagorical(filename1_df)",
"_____no_output_____"
],
[
"# Get the index of each catagorical data\ndef get_OHE_index(filename1_df): #from your X\n #get unique num:\n Numerical_colu, Numerical_num = get_colu_num_numerical(filename1_df)#get the number of numerical columns, one-hot-encoded columns should appear behind them.\n Discrete_colu, Discrete_num = get_colu_num_catagorical(filename1_df)\n unique_num = []\n for i in Discrete_colu.columns:\n unique_num.append(len(Discrete_colu[i].unique()))\n index = [Numerical_num]\n count = Numerical_num\n for i in unique_num[:-1]:\n count = count + int(i)\n index.append(count)\n print('Index of OHE:',index)\n return index",
"_____no_output_____"
],
[
"print('\\nAll good till now. I am figuring out the best method to analyze your data. Bear with me...')\n#General stuff\nseed = 1234\nkfold = KFold(n_splits = 2, random_state = seed)#\nscoring = 'neg_mean_absolute_error'\nmodel = RandomForestRegressor(random_state=seed)\n\n#Parameters to tune\n# estimators = np.arange(100, 1050, 50) \n# estimators_int = np.ndarray.tolist(estimators)\nestimators_int = [200, 500]\nparam_grid = {'n_estimators':estimators_int, 'max_features':['auto', 'sqrt'],\n 'max_depth':[None ]}#\n#search best parameters and train\ngrid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold, n_jobs=-1)\ngrid_result = grid.fit(X_OHE_df, y)\n\n#print the best data cranked out from the grid search\nnp.savetxt('Model_best_score_MAE.txt', [\"best_score: %s\" % grid.best_score_], fmt ='%s')\nbest_params = pd.DataFrame([grid.best_params_], columns=grid.best_params_.keys())\nbest_params.to_csv('best_parameters.txt', sep= '\\t')\n",
"\nAll good till now. I am figuring out the best method to analyze your data. Bear with me...\n"
],
[
"print('\\n... done! It is going to be lightspeed from here on out! :)')\n#predict future data\n\nmodel2 = grid.best_estimator_\n# model2 = RandomForestRegressor(n_estimators = grid.best_params_['n_estimators'], max_features = grid.best_params_['max_features'], max_depth = grid.best_params_['max_depth'], random_state = seed)\nRF_fit = model2.fit(X_OHE_df, y)\npredictions = model2.predict(unseen)\npredictions_df = pd.DataFrame(data=predictions, columns=['Prediction'])\n\n# #feature importance\n# feat_imp = pd.DataFrame(model2.feature_importances_,\n# index=train.iloc[:,1:-1].columns,\n# columns=['Feature_importances'])\n# feat_imp = feat_imp.sort_values(by=['Feature_importances'], ascending = False)\n\nall_predictions = []\nfor e in model2.estimators_:\n all_predictions += [e.predict(X2)]\n\nvariance = np.var(all_predictions, axis=0)\nvariance_df = pd.DataFrame(data=variance, columns=['Variance'])\n\nassert len(variance) == len(predictions)\nunseen_combos_df = pd.concat([all_combos_df, X]).drop_duplicates(keep=False)\ndf = pd.concat([unseen_combos_df, predictions_df, variance_df], axis=1)",
"_____no_output_____"
],
[
"#tree interpreter for feature importance\n#以下是tree interpreter的集成代码\ndef plot_tree(model,filename1_df, all_combos_df): #model, filename1_df is dataframe of train data. All is dataframe of allcombos.\n All_OHE_df = pd.get_dummies(all_combos_df)\n prediction, bias, contributions = ti.predict(model, All_OHE_df)\n contributions = pd.DataFrame(contributions, columns=All_OHE_df.columns) #get all the contributions\n X = filename1_df.iloc[:,:-1]\n index = get_OHE_index(X)\n Numerical_colu, Numerical_num = get_colu_num_numerical(filename1_df)\n Discrete_colu, Discrete_num = get_colu_num_catagorical(filename1_df)\n Discrete_colu_name = Discrete_colu.columns\n unique_num = []\n \n for i in Discrete_colu.columns:\n unique_num.append(len(Discrete_colu[i].unique()))\n numerical_contri_df = contributions.iloc[:,:Numerical_num]\n discrete_contri_df = pd.DataFrame()\n for i in range(Discrete_num):\n posi = index[i]\n increment = unique_num[i]\n discrete_contri_df[Discrete_colu_name[i]] = contributions.iloc[:,posi:posi+increment].sum(1)\n contri_all_df = pd.concat([numerical_contri_df, discrete_contri_df], axis=1)\n plt.figure(figsize=(12, 20))\n contri_all_df.plot.box()\n plt.title('Feature importance')\n plt.savefig('Feature importance.png')\n plt.show()",
"_____no_output_____"
],
[
"print('Analysing feature importance for you...')\nplot_tree(model2, filename1_df, all_combos_df)\n#save data\ndf.to_csv('predictions.csv',index=False)\nprint('\\nYou are all set! Have a good one, mate!')#all_comboss.csv\nprint('\\nEnd:', time.strftime(\"%Y/%m/%d %H:%M:%S\"))#train_data.csv",
"Analysing feature importance for you...\nIndex of OHE: [3, 7]\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbcdad9751e13f2b8a113c19544d18104677d07 | 772,501 | ipynb | Jupyter Notebook | codes/.ipynb_checkpoints/P5.1 Model Prep, Modelling, Evaluation & Recommendations-checkpoint.ipynb | AngShengJun/dsiCapstone | 8b2e11e6c3860205687c542106d68ec9c98d1a9b | [
"MIT"
] | null | null | null | codes/.ipynb_checkpoints/P5.1 Model Prep, Modelling, Evaluation & Recommendations-checkpoint.ipynb | AngShengJun/dsiCapstone | 8b2e11e6c3860205687c542106d68ec9c98d1a9b | [
"MIT"
] | null | null | null | codes/.ipynb_checkpoints/P5.1 Model Prep, Modelling, Evaluation & Recommendations-checkpoint.ipynb | AngShengJun/dsiCapstone | 8b2e11e6c3860205687c542106d68ec9c98d1a9b | [
"MIT"
] | null | null | null | 173.556729 | 97,140 | 0.857912 | [
[
[
"## P5.1 Model Prep, Modelling, Evaluation & Recommendations",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"### Content\n- [Model Prep](#Model-Prep)\n- [Baseline accuracy](#Baseline-accuracy)\n- [Logistic Regression Model](#Logistic-Regression-Model)\n- [Naive Bayes Model](#Naive-Bayes-Model)\n- [Deeper Look at Production Model (LR model)](#Deeper-Look-at-Production-Model-(LR-model))\n- [Review Misclassified samples](#Review-Misclassified-samples)\n- [Recommendations (Part1)](#Recommendations-(Part1))",
"_____no_output_____"
],
[
"### Model Prep\n---\nIn model learning, data is usually segregated into\n- training set; where model learns from the pattern within this set of data,\n- validate set; where model or group of models are evaluated. This is the performance the model is expected to have on unseen data.\n- test set; the best performing model is then shortlisted and tested on the test set.\n \nModel workflow steps:\n- first split data into i) train and test set, then split the train set into train subsets and validate subsets.\n- only fit on the train set then score on validate set. (Similar principle applies on test set). \n- evaluate model\nThis would prevent data leakage (inadvertent count vectorize (a transformer) the entire data before doing train-test-split).",
"_____no_output_____"
]
],
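A minimal sketch of the leak-free workflow described above. The toy `docs`/`labels` data and the split sizes are illustrative assumptions, not values from this project; the point is that the CountVectorizer sits inside a Pipeline and is therefore fit only on the training subset.

```python
# Minimal sketch of a leak-free split + pipeline (toy data; names are illustrative only).
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["bomb threat at market", "armed assault on patrol", "car bomb near checkpoint",
        "hostage taking incident", "ied detonated on road", "facility attacked with rifles",
        "suicide bombing at mosque", "armed ambush on convoy"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Step 1: hold out a test set, then carve a validation set out of the remaining training data.
X_tr, X_te, y_tr, y_te = train_test_split(docs, labels, test_size=0.25, stratify=labels, random_state=42)
X_trs, X_val, y_trs, y_val = train_test_split(X_tr, y_tr, test_size=0.33, stratify=y_tr, random_state=42)

# Step 2: fit the transformer and estimator on the train subset only, so the
# validation/test rows never influence the learned vocabulary (no leakage).
pipe = Pipeline([("cvec", CountVectorizer()), ("logreg", LogisticRegression())])
pipe.fit(X_trs, y_trs)

# Step 3: score on the held-out validation rows; the test set is reserved for the final model.
print(pipe.score(X_val, y_val))
```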
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n# increase default figure and font sizes for easier viewing\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.rcParams['font.size'] = 14\n# always be stylish\nplt.style.use('ggplot')\n\nimport re\nfrom nltk.corpus import stopwords\nfrom nltk.stem.porter import PorterStemmer\n\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import MultinomialNB\n\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import roc_auc_score\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
],
[
"# Setting - display all columns\npd.set_option('display.max_columns', None)",
"_____no_output_____"
],
[
"# Read in cleaned featured engineered data\ndframe = pd.read_csv('../assets/wordok.csv',encoding=\"ISO-8859-1\",index_col=0)",
"_____no_output_____"
],
[
"dframe.shape",
"_____no_output_____"
],
[
"dframe.head(2)",
"_____no_output_____"
],
[
"# Create Train-Test split (80-20 split)\n# X is motive. y is bomb.\nX_train,X_test,y_train,y_test = train_test_split(dframe[['motive']],dframe['bomb'],test_size=0.20,\\\n stratify=dframe['bomb'],\\\n random_state=42)",
"_____no_output_____"
],
[
"# Equal proportion of classes split across train and test set\nprint(y_train.value_counts(normalize=True))\ny_test.value_counts(normalize=True)",
"0 0.523524\n1 0.476476\nName: bomb, dtype: float64\n"
],
[
"# Create Train-Validate subsets (80-20 split) from the parent Train set\n# X is motive. y is bomb.\nX_trainsub,X_validate,y_trainsub,y_validate = train_test_split(X_train[['motive']],y_train,test_size=0.20,\\\n stratify=y_train,random_state=42)",
"_____no_output_____"
],
[
"# Lines of text in train set, validate set and test set\nrow_trainsub = X_trainsub.shape[0]\nrow_validate = X_validate.shape[0]\nrow_test = X_test.shape[0]\nprint(f\"Lines in train set: {row_trainsub}\")\nprint(f\"Lines in validate set: {row_validate}\")\nprint(f\"Lines in test set: {row_test}\")",
"Lines in train set: 20812\nLines in validate set: 5204\nLines in test set: 6505\n"
],
[
"# Instantiate porterstemmer\np_stemmer = PorterStemmer()",
"_____no_output_____"
],
[
"# Create the list of custom stopwords\n# Built upon nltk stopwords (english)\n\"\"\"Define the stopword lists\"\"\"\ns_words = stopwords.words('english')\n# Instantiate the custom list of stopwords for modelling from P5_01\nown_stop = ['motive','specific','unknown','attack','sources','noted', 'claimed','stated','incident','targeted',\\\n 'responsibility','violence','carried','government','suspected','trend','speculated','al','sectarian',\\\n 'retaliation','group','related','security','forces','people','bomb','bombing','bombings']",
"_____no_output_____"
],
[
"# Extend the Stop words\ns_words.extend(own_stop)\n# Check the addition of firstset_words\ns_words[-5:]",
"_____no_output_____"
],
[
"# Define function to convert a raw selftext to a string of words\n# The input is a single string (a raw selftext), and \n# The output is a single string (a preprocessed selftext)\n# For Stop words with firstset_words and secondset_words\n\ndef selftext_to_words(motive_text):\n \n # 1. Remove non-letters.\n letters_only = re.sub(\"[^a-zA-Z]\", \" \", motive_text)\n \n # 2. Split into individual words\n words = letters_only.split()\n \n # 3. In Python, searching a set is much faster than searching\n # a list, so convert the stopwords to a set.\n stops = set(s_words)\n\n # 5. Remove stopwords.\n meaningful_words = [w for w in words if w not in stops]\n \n # 5.5 Stemming of words\n meaningful_words = [p_stemmer.stem(w) for w in words]\n \n # 6. Join the words back into one string separated by space, \n # and return the result\n return(\" \".join(meaningful_words))",
"_____no_output_____"
],
[
"# Initialize an empty list to hold the cleaned texts.\n\nX_trainsub_clean = []\nX_validate_clean = []\nX_test_clean = []\n\n#For trainsub set\n# Instantiate counter.\nj = 0\nfor text in X_trainsub['motive']:\n \"\"\"Convert text to words, then append to X_trainf_clean.\"\"\"\n X_trainsub_clean.append(selftext_to_words(text))\n \n # If the index is divisible by 1000, print a message.\n if (j + 1) % 1000 == 0:\n print(f'Clean & parse {j + 1} of {row_trainsub+row_validate+row_test}.')\n \n j += 1\n \n# For validate set\nfor text in X_validate['motive']:\n \"\"\"Convert text to words, then append to X_validate_clean.\"\"\"\n X_validate_clean.append(selftext_to_words(text))\n \n # If the index is divisible by 1000, print a message.\n if (j + 1) % 1000 == 0:\n print(f'Clean & parse {j + 1} of {row_trainsub+row_validate+row_test}.')\n \n j += 1\n \n# For test set\nfor text in X_test['motive']:\n \"\"\"Convert text to words, then append to X_test_clean.\"\"\"\n X_test_clean.append(selftext_to_words(text))\n \n # If the index is divisible by 1000, print a message.\n if (j + 1) % (row_trainsub+row_validate+row_test) == 0:\n print(f'Clean & parse {j + 1} of {row_trainsub+row_validate+row_test}.')\n \n j += 1",
"Clean & parse 1000 of 32521.\nClean & parse 2000 of 32521.\nClean & parse 3000 of 32521.\nClean & parse 4000 of 32521.\nClean & parse 5000 of 32521.\nClean & parse 6000 of 32521.\nClean & parse 7000 of 32521.\nClean & parse 8000 of 32521.\nClean & parse 9000 of 32521.\nClean & parse 10000 of 32521.\nClean & parse 11000 of 32521.\nClean & parse 12000 of 32521.\nClean & parse 13000 of 32521.\nClean & parse 14000 of 32521.\nClean & parse 15000 of 32521.\nClean & parse 16000 of 32521.\nClean & parse 17000 of 32521.\nClean & parse 18000 of 32521.\nClean & parse 19000 of 32521.\nClean & parse 20000 of 32521.\nClean & parse 21000 of 32521.\nClean & parse 22000 of 32521.\nClean & parse 23000 of 32521.\nClean & parse 24000 of 32521.\nClean & parse 25000 of 32521.\nClean & parse 26000 of 32521.\nClean & parse 32521 of 32521.\n"
]
],
[
[
"### Baseline accuracy\n---\nDerive the baseline accuracy so as to be able to determine if the subsequent models are better than the baseline (null) model (predicting the plurality class).",
"_____no_output_____"
]
],
[
[
"y_test.value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"The Baseline accuracy is the percentage of the majority class. In this case, the baseline accuracy is 0.52. \nThis serves as benchmark for measuring model performance (i.e. model accuracy should be higher than this baseline).\n\nThe cost of a false negative is **higher** than false positive (potentially higher casualties); it is better that CT ops be more prepared in event of an actual bombing incident than be under-prepared. Therefore, the priority is to **minimize false negatives**.\n\nModel metrics for evaluation:\n- `sensitivity` (reduce false negatives) AND\n- `ROC-AUC` (measures model's skill in classification)",
"_____no_output_____"
],
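A hedged sketch of how these two metrics can be computed; the small label, prediction and probability arrays below are illustrative stand-ins rather than this notebook's validation output.

```python
# Sketch: sensitivity (recall of the positive class) and ROC-AUC on illustrative arrays.
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                      # actual classes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])                      # hard 0/1 predictions
y_prob = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1])      # predicted P(class 1)

sensitivity = recall_score(y_true, y_pred, pos_label=1)  # TP / (TP + FN): penalizes false negatives
auc = roc_auc_score(y_true, y_prob)                      # threshold-independent ranking skill

print(f"Sensitivity: {sensitivity:.4f}")   # 3 of the 4 actual positives recovered -> 0.75
print(f"ROC-AUC:     {auc:.4f}")
```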
[
"### Logistic Regression Model\n---\nLogistic Regression model is explored in this section.",
"_____no_output_____"
]
],
[
[
"# Set up a pipeline with two stages\n# 1.CountVectorizer (transformer)\n# 2.LogisticRegression (estimator)\n\npipe1 = Pipeline([('cvec',CountVectorizer()),\\\n ('logreg',LogisticRegression(random_state=42))\\\n ])",
"_____no_output_____"
],
[
"# Parameters of pipeline object\npipe1.get_params()",
"_____no_output_____"
],
[
"# Load pipeline object into GridSearchCV to tune CountVectorizer\n# Search over the following values of hyperparameters:\n# Max. no. of features fit (top no. frequent occur words): 2000, 3000, 4000, 5000, 10000\n# Min. no. of documents (collection of text) needed to include token: 2, 3\n# Max. no. of documents needed to include token: 90%, 95%\n# n-gram: 1 token, 1-gram, or 1 token, 2-gram.\n# class weight \"balanced\"; adjust class weights inversely proportional to class frequencies in the input data\npipe1_params = {\n 'cvec__max_features': [2_000,3_000,4_000,5_000,10_000],\\\n 'cvec__min_df': [2,3],\\\n 'cvec__max_df': [0.9,0.95],\\\n 'cvec__ngram_range': [(1, 1), (1,2)],\\\n 'logreg__max_iter': [500],\\\n 'logreg__solver':['lbfgs'],\\\n 'logreg__verbose': [1]\n}",
"_____no_output_____"
],
[
"# Instantiate GridSearchCV.\n\"\"\"pipe1 refers to the object to optimize.\"\"\"\n\"\"\"param_grid refer to parameter values to search.\"\"\"\n\"\"\"10 folds for cross-validate.\"\"\"\n\n# New line: scoring=scoring, refit='AUC', return_train_score=True\n\ngs1 = GridSearchCV(pipe1,\\\n param_grid=pipe1_params,\\\n cv=10)",
"_____no_output_____"
],
[
"# Fit GridSearch to the cleaned training data.\ngs1.fit(X_trainsub_clean,y_trainsub)",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend 
SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.7s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.4s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent 
workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.5s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.8s finished\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n"
],
[
"# Check the results of the grid search\n\nprint(f\"Best parameters: {gs1.best_params_}\")\nprint(f\"Best score: {gs1.best_score_}\")",
"Best parameters: {'cvec__max_df': 0.9, 'cvec__max_features': 10000, 'cvec__min_df': 3, 'cvec__ngram_range': (1, 2), 'logreg__max_iter': 500, 'logreg__solver': 'lbfgs', 'logreg__verbose': 1}\nBest score: 0.6750910183670841\n"
],
[
"# Save best model as model_1\n\nmodel_1 = gs1.best_estimator_",
"_____no_output_____"
],
[
"# Score model on train set and validate set\nprint(f\"Accuracy on train set: {model_1.score(X_trainsub_clean, y_trainsub)}\")\nprint(f\"Accuracy on validate set: {model_1.score(X_validate_clean, y_validate)}\")",
"Accuracy on train set: 0.7604266769171631\nAccuracy on validate set: 0.6781322059953881\n"
]
],
[
[
"Modelling helps with classification as the model accuracy is higher than the baseline accuracy (0.48). The model is overfitted with about 10% drop in test accuracy compared to train accuracy.",
"_____no_output_____"
]
],
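[
[
"The baseline quoted above can be re-checked from the class balance (illustrative sketch, not part of the original run; it assumes `y_trainsub` and `y_validate` are the target Series used in the cells above):\n\n```python\n# Class balance of the train subset and the validate set; the no-skill baseline\n# accuracy corresponds to always predicting the majority class.\nprint(y_trainsub.value_counts(normalize=True))\nprint(y_validate.value_counts(normalize=True))\n```",
"_____no_output_____"
]
],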
[
[
"# Confusion matrix on the first log reg model\n# Pass in true values, predicted values to confusion matrix\n# Convert confusion matrix into dataframe\n# Positive class (class 1) is bomb\npreds = gs1.predict(X_validate_clean)\ncm = confusion_matrix(y_validate, preds)\ncm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])\ncm_df",
"_____no_output_____"
]
],
[
[
"The positive class (class 1) refers to `bomb`. False positive means the observed incident is classified as `bomb` when it is actually `non-bomb` event. False negative means the observation is classified as `non-bomb` when it is actually `bomb`.\n",
"_____no_output_____"
]
],
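[
[
"A tiny worked example of scikit-learn's confusion-matrix layout (toy labels, purely illustrative): rows are the actual classes and columns the predicted classes, so for binary 0/1 labels `.ravel()` returns the counts in the order tn, fp, fn, tp, which is the ordering relied on below.\n\n```python\nfrom sklearn.metrics import confusion_matrix\n\ny_true = [0, 0, 1, 1, 1, 0]\ny_pred = [0, 1, 1, 0, 1, 0]\n\nprint(confusion_matrix(y_true, y_pred))   # [[2 1]\n                                          #  [1 2]]\ntn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()\nprint(tn, fp, fn, tp)                     # 2 1 1 2\n```",
"_____no_output_____"
]
],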
[
[
"# return nparray as a 1-D array.\nconfusion_matrix(y_validate, preds).ravel()\n# Save TN/FP/FN/TP values.\ntn, fp, fn, tp = confusion_matrix(y_validate,preds).ravel()",
"_____no_output_____"
],
[
"# Summary of metrics for log reg model\nspec = tn/(tn+fp)\nsens = tp/(tp+fn)\nprint(f\"Specificity: {round(spec,4)}\")\nprint(f\"Sensitivity: {round(sens,4)}\")",
"Specificity: 0.5187\nSensitivity: 0.8532\n"
]
],
[
[
"The Receiver Operating Characteristic (ROC) curve visualizes the overlap between the positive class and negative class as the classification threshold moves from 0 to 1 (i.e. trade-off between sensitivity (or TruePositiveRate) and specificity (1 – FalsePositiveRate). \nAs a baseline, a no-skill classifier is expected to give points lying along the diagonal (FPRate = TPRate); (i.e. the closer the curve is to the diagonal line, the less accurate the classifier). \n\n- Better performing classifiers give curves closer to the top-left corner. \n- The more area under the blue curve, the better separated the class distributions are. \n- Best trade-off between sensitivity and specificity is the top-left point along the ROC curve.",
"_____no_output_____"
]
],
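[
[
"For reference, scikit-learn can produce the same curve directly, rather than via the manual threshold sweep implemented below (hedged sketch, not part of the original workflow; `pred_proba` is the list of positive-class probabilities built in the next cell):\n\n```python\nfrom sklearn.metrics import roc_curve, roc_auc_score\n\nfpr, tpr, thresholds = roc_curve(y_validate, pred_proba)\nprint(roc_auc_score(y_validate, pred_proba))\n```",
"_____no_output_____"
]
],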
[
[
"# To visualize the ROC AUC curve, first\n# Create a dataframe called pred_df that contains:\n# 1. The list of true values of our validate set.\n# 2. The list of predicted probabilities based on our model.\n\npred_proba = [i[1] for i in gs1.predict_proba(X_validate_clean)]\n\npred_df = pd.DataFrame({'validate_values': y_validate,\n 'pred_probs':pred_proba})\npred_df.head()",
"_____no_output_____"
],
[
"# Calculate ROC AUC.\nroc_auc_score(pred_df['validate_values'],pred_df['pred_probs'])",
"_____no_output_____"
],
[
"#Create figure\nplt.figure(figsize = (10,7))\n\n# Create threshold values. (Dashed orange line in plot.)\nthresholds = np.linspace(0, 1, 200)\n\n# Define function to calculate sensitivity. (True positive rate.)\ndef TPR(df, true_col, pred_prob_col, threshold):\n true_positive = df[(df[true_col] == 1) & (df[pred_prob_col] >= threshold)].shape[0]\n false_negative = df[(df[true_col] == 1) & (df[pred_prob_col] < threshold)].shape[0]\n return true_positive / (true_positive + false_negative)\n \n# Define function to calculate 1 - specificity. (False positive rate.)\ndef FPR(df, true_col, pred_prob_col, threshold):\n true_negative = df[(df[true_col] == 0) & (df[pred_prob_col] <= threshold)].shape[0]\n false_positive = df[(df[true_col] == 0) & (df[pred_prob_col] > threshold)].shape[0]\n return 1 - (true_negative / (true_negative + false_positive))\n \n# Calculate sensitivity & 1-specificity for each threshold between 0 and 1.\ntpr_values = [TPR(pred_df, 'validate_values', 'pred_probs', prob) for prob in thresholds]\nfpr_values = [FPR(pred_df, 'validate_values', 'pred_probs', prob) for prob in thresholds]\n\n# Plot ROC curve.\nplt.plot(fpr_values, # False Positive Rate on X-axis\n tpr_values, # True Positive Rate on Y-axis\n label='ROC Curve')\n\n# Plot baseline. (Perfect overlap between the two populations.)\nplt.plot(np.linspace(0, 1, 200),\n np.linspace(0, 1, 200),\n label='baseline',\n linestyle='--')\n\n# Label axes.\nplt.title(f'ROC Curve with AUC = {round(roc_auc_score(pred_df[\"validate_values\"], pred_df[\"pred_probs\"]),4)}', fontsize=22)\nplt.ylabel('Sensitivity', fontsize=18)\nplt.xlabel('1 - Specificity', fontsize=18)\n\n# Create legend.\nplt.legend(fontsize=16);",
"_____no_output_____"
]
],
[
[
"An ROC AUC of 1 means the positive and negative populations are perfectly separated and that the model is as good as it can get. The closer the ROC AUC is to 1, the better. (1 is the maximum score.)\nBefore exploring the stopwords to further potentially improve the classification ability of a model, examine the performance of a Naive Bayes model.",
"_____no_output_____"
],
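[
"A one-line illustration of that statement (toy scores, not from this dataset): when every positive example receives a higher predicted probability than every negative one, the AUC is exactly 1; with no separation it falls to 0.5.\n\n```python\nfrom sklearn.metrics import roc_auc_score\n\nprint(roc_auc_score([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0 - perfect separation\nprint(roc_auc_score([0, 1, 0, 1], [0.5, 0.5, 0.5, 0.5]))  # 0.5 - no separation\n```",
"_____no_output_____"
],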
[
"### Naive Bayes Model\n---\nWe explore the Naive Bayes Model, and apply the removal of identifed common overlapped words identified earlier. ",
"_____no_output_____"
]
],
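[
[
"Background sketch of what the estimator in the next pipeline does (toy data, purely illustrative, not part of the original analysis): `MultinomialNB` models each class by its per-token count frequencies and applies Bayes' rule, which is why it pairs naturally with `CountVectorizer` output.\n\n```python\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.naive_bayes import MultinomialNB\n\ndocs = ['car bomb exploded near market', 'gunmen opened fire on convoy']\nlabels = [1, 0]  # 1 = bomb, 0 = non-bomb\n\nvec = CountVectorizer()\nX = vec.fit_transform(docs)\nclf = MultinomialNB().fit(X, labels)\nprint(clf.predict_proba(vec.transform(['a bomb was planted'])))\n```",
"_____no_output_____"
]
],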
[
[
"# Set up a pipeline, p2 with two stages\n# 1.CountVectorizer (transformer)\n# 2.Naive Bayes(multinomial) (estimator)\npipe2 = Pipeline([('cvec',CountVectorizer()),\\\n ('nb',MultinomialNB())\\\n ])",
"_____no_output_____"
],
[
"# Parameters of pipeline object\npipe2.get_params()",
"_____no_output_____"
],
[
"# Load pipeline object into GridSearchCV to tune CountVectorizer\n# Search over the following values of hyperparameters:\n# Maximum number of features fit (top no. frequent occur words): 2000, 3000, 4000, 5000, 10000\n# Minimum number of documents (collection of text) needed to include token: 2, 3\n# Maximum number of documents needed to include token: 90%, 95%\n# Check (individual tokens) and also check (individual tokens and 2-grams).\n\n# n-gram: 1 token, 1-gram, or 1 token, 2-gram.\npipe2_params = {\n 'cvec__max_features': [2_000,3_000,4_000,5_000,10_000],\\\n 'cvec__min_df': [2,3],\\\n 'cvec__max_df': [0.9,0.95],\\\n 'cvec__ngram_range': [(1, 1), (1,2)]\\\n}",
"_____no_output_____"
],
[
"# Instantiate GridSearchCV.\n\"\"\"pipe refers to the object to optimize.\"\"\"\n\"\"\"param_grid refer to parameter values to search.\"\"\"\n\"\"\"cv refers to number of cross-validate fold.\"\"\"\ngs2 = GridSearchCV(pipe2,\\\n param_grid=pipe2_params,\\\n cv=10)",
"_____no_output_____"
],
[
"# Fit GridSearch to the cleaned training data.\n\ngs2.fit(X_trainsub_clean,y_trainsub)",
"_____no_output_____"
],
[
"# Check the results of the grid search\nprint(f\"Best parameters: {gs2.best_params_}\")\nprint(f\"Best score: {gs2.best_score_}\")",
"Best parameters: {'cvec__max_df': 0.9, 'cvec__max_features': 10000, 'cvec__min_df': 3, 'cvec__ngram_range': (1, 2)}\nBest score: 0.6577453433724734\n"
],
[
"# Save best model as model_2\n\nmodel_2 = gs2.best_estimator_",
"_____no_output_____"
],
[
"# Score model on training set & validate set\nprint(f\"Logistic Regression Model\")\nprint(f\"Accuracy on train set: {model_1.score(X_trainsub_clean, y_trainsub)}\")\nprint(f\"Accuracy on validate set: {model_1.score(X_validate_clean, y_validate)}\")\nprint()\nprint(f\"Naive Bayes Model\")\nprint(f\"Accuracy on train set: {model_2.score(X_trainsub_clean, y_trainsub)}\")\nprint(f\"Accuracy on validate set: {model_2.score(X_validate_clean, y_validate)}\")",
"Logistic Regression Model\nAccuracy on train set: 0.7604266769171631\nAccuracy on validate set: 0.6781322059953881\n\nNaive Bayes Model\nAccuracy on train set: 0.6883528733423026\nAccuracy on validate set: 0.6593005380476556\n"
]
],
[
[
"The Naive Bayes model is overfitted on the train data, with lower validate accuracy compared to the train accuracy. It doesn't appear to be performing better than the logistic regression model, with a lower validate accuracy. However, compared to the logistic regression model, it has a smaller overfit (approx. 3% overfit). \n\nWe review the sensitivity and specificity, and roc_auc scores next.",
"_____no_output_____"
]
],
[
[
"# Confusion matrix on the naive bayes model\n# Pass in true values, predicted values to confusion matrix\n# Convert Confusion matrix into dataframe\n# Positive class (class 1) is bomb\npreds2 = gs2.predict(X_validate_clean)\ncm = confusion_matrix(y_validate, preds2)\ncm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])\ncm_df",
"_____no_output_____"
],
[
"# return nparray as a 1-D array.\nconfusion_matrix(y_validate, preds2).ravel()\n# Save TN/FP/FN/TP values.\ntn, fp, fn, tp = confusion_matrix(y_validate,preds2).ravel()",
"_____no_output_____"
],
[
"# Summary of metrics for naive bayes model\nspec = tn/(tn+fp)\nsens = tp/(tp+fn)\nprint(f\"Specificity: {round(spec,4)}\")\nprint(f\"Sensitivity: {round(sens,4)}\")",
"Specificity: 0.5088\nSensitivity: 0.8246\n"
],
[
"# To visualize the ROC AUC curve, first\n# Create a dataframe called pred_df that contains:\n# 1. The list of true values of our test set.\n# 2. The list of predicted probabilities based on our model.\n\npred_proba = [i[1] for i in gs2.predict_proba(X_validate_clean)]\n\npred_df = pd.DataFrame({'validate_values': y_validate,\n 'pred_probs':pred_proba})\npred_df.head()",
"_____no_output_____"
],
[
"# Calculate ROC AUC.\nroc_auc_score(pred_df['validate_values'],pred_df['pred_probs'])",
"_____no_output_____"
],
[
"#Create figure\nplt.figure(figsize = (10,7))\n\n# Create threshold values. (Dashed blue line in plot.)\nthresholds = np.linspace(0, 1, 200)\n\n# Define function to calculate sensitivity. (True positive rate.)\ndef TPR(df, true_col, pred_prob_col, threshold):\n true_positive = df[(df[true_col] == 1) & (df[pred_prob_col] >= threshold)].shape[0]\n false_negative = df[(df[true_col] == 1) & (df[pred_prob_col] < threshold)].shape[0]\n return true_positive / (true_positive + false_negative)\n \n# Define function to calculate 1 - specificity. (False positive rate.)\ndef FPR(df, true_col, pred_prob_col, threshold):\n true_negative = df[(df[true_col] == 0) & (df[pred_prob_col] <= threshold)].shape[0]\n false_positive = df[(df[true_col] == 0) & (df[pred_prob_col] > threshold)].shape[0]\n return 1 - (true_negative / (true_negative + false_positive))\n \n# Calculate sensitivity & 1-specificity for each threshold between 0 and 1.\ntpr_values = [TPR(pred_df, 'validate_values', 'pred_probs', prob) for prob in thresholds]\nfpr_values = [FPR(pred_df, 'validate_values', 'pred_probs', prob) for prob in thresholds]\n\n# Plot ROC curve.\nplt.plot(fpr_values, # False Positive Rate on X-axis\n tpr_values, # True Positive Rate on Y-axis\n label='ROC Curve')\n\n# Plot baseline. (Perfect overlap between the two populations.)\nplt.plot(np.linspace(0, 1, 200),\n np.linspace(0, 1, 200),\n label='baseline',\n linestyle='--')\n\n# Label axes.\nplt.title(f'ROC Curve with AUC = {round(roc_auc_score(pred_df[\"validate_values\"], pred_df[\"pred_probs\"]),4)}', fontsize=22)\nplt.ylabel('Sensitivity', fontsize=18)\nplt.xlabel('1 - Specificity', fontsize=18)\n\n# Create legend.\nplt.legend(fontsize=16);",
"_____no_output_____"
]
],
[
[
"The roc_auc of the Naive Bayes model is slightly lower (approx. 2%) than that of logistic regression model.",
"_____no_output_____"
]
],
[
[
"# Summary of Model scores in Dataframe\nsummary_df = pd.DataFrame({'accuracy' : [0.6781, 0.6593],\n 'specificity' : [0.5187, 0.5088],\n 'sensitivity' : [0.8512, 0.8246],\n 'roc_auc' : [0.7460, 0.7272]})\n# Transpose dataframe\nsummary_dft = summary_df.T\n# Rename columns\nsummary_dft.columns = ['LogReg','NB']\nsummary_dft",
"_____no_output_____"
]
],
[
[
"Prioritization is on correct classification of bomb attack mode as a misclassified actual bomb attack mode could lead to relatively more dire consequences (more casualties). In this regard, we would want to pick the model with highest True Positive Rate (sensitivity), for as much correct classification of the bomb attack modes as possible. Therefore, we pick the Logistic Regression model as the production model.\n\nFit LR model on full train set, test on test set, review misclassified samples and then explore tuning the production model.",
"_____no_output_____"
],
[
"### Deeper Look at Production Model (LR model)\n---\nFor the test accuracy and roc_auc scores, this section examines \n- model performance on test data\n- the features that helps with negative (non-bomb) and positive (bomb) classifications,\n- what could be the features that lead to misclassifications.",
"_____no_output_____"
]
],
[
[
"#Initialize an empty list to hold the clean test text.\nX_train_clean = []\n\n# Using whole of train set (i.e. trainsub and validate set)\n# Instantiate counter.\nj = 0\nfor text in X_train['motive']:\n \"\"\"Convert text to words, then append to X_trainf_clean.\"\"\"\n X_train_clean.append(selftext_to_words(text))",
"_____no_output_____"
],
[
"# using best parameters discovered above in gs2\n\n# Instantiate our CountVectorizer\ncv = CountVectorizer(ngram_range=(1,2),max_df=0.9,min_df=3,max_features=10000)\n\n# Fit and transform on whole training data\nX_train_cleancv = cv.fit_transform(X_train_clean)\n\n# Transform test data\nX_test_cleancv = cv.transform(X_test_clean)",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
]
],
[
[
"# Instantiate model\nlr_prod = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)\n\n# Fit model on whole training data (without addn set of stopwords removed in NB model)\nmodel_lrprod = lr_prod.fit(X_train_cleancv,y_train)\n\n# Generate predictions from test set\npred_lrprod = lr_prod.predict(X_test_cleancv)\nprint(f\"Accuracy on whole test set: {round(model_lrprod.score(X_test_cleancv, y_test),4)}\")",
"Accuracy on whole test set: 0.6898\n"
],
[
"# Confusion matrix for test set using NB model\n# Pass in true values, predicted values to confusion matrix\n# Convert Confusion matrix into dataframe\n# Positive class (class 1) is bomb\ncm = confusion_matrix(y_test, pred_lrprod)\ncm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])\ncm_df",
"_____no_output_____"
],
[
"# return nparray as a 1-D array.\nconfusion_matrix(y_test, pred_lrprod).ravel()\n\n# Save TN/FP/FN/TP values.\ntn, fp, fn, tp = confusion_matrix(y_test, pred_lrprod).ravel()\n\n# Summary of metrics for LR model\nspec = tn/(tn+fp)\nsens = tp/(tp+fn)\nprint(f\"Specificity: {round(spec,4)}\")\nprint(f\"Sensitivity: {round(sens,4)}\")\n\n# To compute the ROC AUC curve, first\n# Create a dataframe called pred_df that contains:\n# 1. The list of true values of our test set.\n# 2. The list of predicted probabilities based on our model.\n\npred_proba = [i[1] for i in lr_prod.predict_proba(X_test_cleancv)]\n\npred_df = pd.DataFrame({'test_values': y_test,\n 'pred_probs':pred_proba})\n\n# Calculate ROC AUC.\nprint(f\"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}\")",
"Specificity: 0.5363\nSensitivity: 0.8584\nroc_auc: 0.762\n"
],
[
"#Create figure\nplt.figure(figsize = (10,7))\n\n# Create threshold values. (Dashed blue line in plot.)\nthresholds = np.linspace(0, 1, 200)\n\n# Define function to calculate sensitivity. (True positive rate.)\ndef TPR(df, true_col, pred_prob_col, threshold):\n true_positive = df[(df[true_col] == 1) & (df[pred_prob_col] >= threshold)].shape[0]\n false_negative = df[(df[true_col] == 1) & (df[pred_prob_col] < threshold)].shape[0]\n return true_positive / (true_positive + false_negative)\n \n# Define function to calculate 1 - specificity. (False positive rate.)\ndef FPR(df, true_col, pred_prob_col, threshold):\n true_negative = df[(df[true_col] == 0) & (df[pred_prob_col] <= threshold)].shape[0]\n false_positive = df[(df[true_col] == 0) & (df[pred_prob_col] > threshold)].shape[0]\n return 1 - (true_negative / (true_negative + false_positive))\n \n# Calculate sensitivity & 1-specificity for each threshold between 0 and 1.\ntpr_values = [TPR(pred_df, 'test_values', 'pred_probs', prob) for prob in thresholds]\nfpr_values = [FPR(pred_df, 'test_values', 'pred_probs', prob) for prob in thresholds]\n\n# Plot ROC curve.\nplt.plot(fpr_values, # False Positive Rate on X-axis\n tpr_values, # True Positive Rate on Y-axis\n label='ROC Curve')\n\n# Plot baseline. (Perfect overlap between the two populations.)\nplt.plot(np.linspace(0, 1, 200),\n np.linspace(0, 1, 200),\n label='baseline',\n linestyle='--')\n\n# Label axes.\nplt.title(f'ROC Curve with AUC = {round(roc_auc_score(pred_df[\"test_values\"], pred_df[\"pred_probs\"]),4)}', fontsize=22)\nplt.ylabel('Sensitivity', fontsize=18)\nplt.xlabel('1 - Specificity', fontsize=18)\n\n# Create legend.\nplt.legend(fontsize=16);",
"_____no_output_____"
]
],
[
[
"### Review Misclassified samples",
"_____no_output_____"
],
[
"Recap on the false classifications:\n\n**False Positive**: Predicted as bomb post but is actually non-bomb.\n**False Negative**: Predicted as non-bomb post but is actually bomb. \nTo gain insights into what are the words that led to misclassification of attack mode, further compare the words with the list of words that are strongly associated with the correct classes. \n\nCompare common words in :\n- false positive with the words in true positive class,\n- false negative with the words in true negative class ",
"_____no_output_____"
]
],
[
[
"df_r1 = pd.DataFrame(lr_prod.coef_[0,:])\ndf_r1.columns = ['coef']\ndf_r2=pd.DataFrame(cv.get_feature_names())\ndf_r2.columns=['wrd_features']\ndf_rc = pd.concat([df_r2,df_r1], axis=1)",
"_____no_output_____"
],
[
"lrsorted_fea = df_rc.sort_values('coef',ascending=False)\nlrsorted_fea.reset_index(drop=True,inplace=True)",
"_____no_output_____"
],
[
"# Top 5 positive word features\nlrsorted_fea.head()",
"_____no_output_____"
],
[
"# Top 5 negative word features\nlrsorted_fea.tail()",
"_____no_output_____"
]
],
[
[
"Temporal influence of the words (if any), is not immediately evident. Perhaps this can be inferred from the emergence and surge in activities for particular terror groups and related conflict context. This would likely require in-depth SME knowledge regarding particular terror groups.\n\nAdditional bomb related words identified for removal could likely be : 'bomber'. ",
"_____no_output_____"
]
],
[
[
"# postive feature words\nposcl_wrdft = lrsorted_fea[lrsorted_fea['coef']>0].copy()\nposcl_wrds = poscl_wrdft['wrd_features'].tolist()",
"_____no_output_____"
],
[
"# negative feature words\nnegcl_wrdft = lrsorted_fea[lrsorted_fea['coef']<0].copy()\nnegcl_wrds = negcl_wrdft['wrd_features'].tolist()",
"_____no_output_____"
],
[
"# Pass y_test (pandas series) into dataframe first\n# in order to use the original selftext indexes for traceability\nactual = pd.Series(y_test)\ndf_rvw = actual.to_frame()\n\n# Create column of predicted classes from production model\ndf_rvw['predicted_bomb'] = pred_lrprod\n\n# Include the selftext data\ndf_rvw['motive'] = X_test_clean\n\n# Review the dataframe\ndf_rvw.head()",
"_____no_output_____"
],
[
"# Index of misclassified classes\nrow_ids = df_rvw[df_rvw['bomb'] != df_rvw['predicted_bomb']].index\nrow_ids",
"_____no_output_____"
],
[
"# Create sub-dataframes of false positive and false negative classifications\ndf_msc = pd.DataFrame(df_rvw[df_rvw['bomb'] != df_rvw['predicted_bomb']])\ndf_fp = pd.DataFrame(df_msc.loc[df_msc['predicted_bomb'] == 0])\ndf_fn = pd.DataFrame(df_msc.loc[df_msc['predicted_bomb'] == 1])",
"_____no_output_____"
],
[
"# Empty list to hold false positive words\nfp_words = []\n# Split string in false positive selftext to individual words\n# Populate the empty list\nfor i in df_fp.index:\n words = df_fp['motive'][i].split()\n fp_words.extend(words)",
"_____no_output_____"
],
[
"# Empty list to hold false negative words\nfn_words = []\n# Split string in false positive selftext to individual words\n# Populate the empty list\nfor i in df_fn.index:\n words = df_fn['motive'][i].split()\n fn_words.extend(words)",
"_____no_output_____"
],
[
"# Print the words that likely contributed to false positive classification\nprint(set(fp_words) & set(poscl_wrds))",
"{'kecil', 'christian', 'paid', 'new', 'central', 'islam', 'hospit', 'halt', 'bridg', 'action', 'decis', 'town', 'string', 'your', 'serv', 'puerto', 'becom', 'educ', 'trial', 'stop', 'phone', 'order', 'coalit', 'auster', 'servic', 'belief', 'complet', 'left', 'oppress', 'jagdish', 'mujahid', 'serbian', 'evid', 'pakistani', 'hezbollah', 'vote', 'unclear', 'camp', 'liber', 'safeti', 'declar', 'north', 'union', 'personnel', 'sentiment', 'discourag', 'us', 'live', 'insurg', 'attempt', 'four', 'job', 'draw', 'convoy', 'andr', 'david', 'unifi', 'retali', 'pipelin', 'legal', 'sayyaf', 'airstrik', 'came', 'area', 'owner', 'ppp', 'faction', 'milit', 'visit', 'intend', 'gnla', 'passag', 'critic', 'bank', 'islami', 'center', 'ongo', 'statement', 'un', 'econom', 'power', 'pamphlet', 'afghanistan', 'occupi', 'some', 'olymp', 'farc', 'religi', 'staff', 'shiit', 'qaumi', 'specif', 'explos', 'aveng', 'parliament', 'anniversari', 'place', 'stand', 'execut', 'may', 'ceas', 'ansar', 'last', 'white', 'howev', 'deploy', 'transport', 'same', 'troop', 'which', 'khorasan', 'motiv', 'nepal', 'believ', 'been', 'martyr', 'militia', 'recent', 'call', 'poor', 'date', 'by', 'use', 'sweep', 'musab', 'prison', 'somalia', 'recruit', 'next', 'sinai', 'huthi', 'strike', 'feder', 'crime', 'ramadan', 'store', 'detail', 'hand', 'albanian', 'caus', 'teacher', 'minor', 'missil', 'associ', 'krishna', 'reason', 'aqim', 'face', 'revolutionari', 'environ', 'isil', 'undermin', 'went', 'haram', 'plan', 'nepali', 'jew', 'extort', 'although', 'committe', 'host', 'right', 'kumpulan', 'parti', 'elder', 'serb', 'corrupt', 'peopl', 'treatment', 'themselv', 'gaza', 'process', 'baloch', 'ucpn', 'abu', 'fail', 'cooper', 'five', 'sever', 'hotel', 'presidenti', 'day', 'search', 'wife', 'control', 'spike', 'an', 'respons', 'aqap', 'facil', 'priest', 'specul', 'myanmar', 'son', 'ltte', 'independ', 'compani', 'enforc', 'respond', 'irish', 'cultur', 'punjab', 'shop', 'peac', 'one', 'public', 'allow', 'armi', 'promin', 'monitor', 'presenc', 'nazi', 'design', 'spread', 'near', 'bla', 'further', 'command', 'in', 'hostil', 'america', 'runda', 'terrorist', 'provid', 'innoc', 'begin', 'pij', 'york', 'there', 'resist', 'site', 'nation', 'mani', 'polic', 'headquart', 'where', 'tactic', 'open', 'payment', 'sell', 'citizen', 'while', 'men', 'perceiv', 'pressur', 'secular', 'night', 'reconcili', 'balochi', 'regim', 'rule', 'properti', 'media', 'abus', 'crackdown', 'preach', 'surround', 'agreement', 'top', 'celebr', 'ttp', 'activist', 'sought', 'polio', 'destabil', 'report', 'bandh', 'heavi', 'egypt', 'would', 'brigad', 'district', 'turkey', 'within', 'syria', 'sanaa', 'bara', 'through', 'event', 'solidar', 'syrian', 'rebel', 'differ', 'tehrik', 'implement', 'defens', 'news', 'achiev', 'could', 'congress', 'sign', 'defeat', 'under', 'asg', 'contain', 'baghdad', 'express', 'arrest', 'our', 'agent', 'now', 'justic', 'bomb', 'measur', 'shah', 'disrupt', 'nc', 'ocalan', 'abdallah', 'conduct', 'intensifi', 'maghreb', 'minist', 'chechen', 'paper', 'opposit', 'meant', 'school', 'american', 'restaur', 'citi', 'up', 'taliban', 'if', 'milf', 'provinci', 'senior', 'muammar', 'way', 'master', 'protect', 'sinc', 'presid', 'khan', 'grenad', 'said', 'led', 'cpi', 'omar', 'instruct', 'comment', 'project', 'bomber', 'ida', 'messag', 'bastar', 'denounc', 'domin', 'suppli', 'multin', 'variou', 'show', 'own', 'establish', 'warn', 'into', 'is', 'ulfa', 'colombia', 'retaliatori', 'war', 'popul', 'part', 'februari', 'agre', 'sympathet', 'algeria', 'allegi', 'supremacist', 'lashkar', 
'communist', 'fellow', 'of', 'strategi', 'march', 'brother', 'mission', 'mass', 'friday', 'fbi', 'how', 'might', 'ga', 'western', 'elect', 'barack', 'lord', 'factori', 'tribe', 'author', 'but', 'morsi', 'major', 'incid', 'depos', 'boko', 'gaya', 'attack', 'infertil', 'against', 'guerrilla', 'govern', 'schedul', 'iraq', 'nationwid', 'promis', 'bu', 'it', 'pledg', 'leagu', 'import', 'lankan', 'effort', 'leader', 'indian', 'contract', 'thi', 'inaugur', 'israel', 'state', 'includ', 'militari', 'parad', 'khel', 'violent', 'link', 'libya', 'confer', 'jihad', 'continu', 'kenya', 'shelter', 'fire', 'sentenc', 'food', 'rico', 'colombian', 'target', 'basqu', 'embassi', 'maulana', 'symbol', 'question', 'accord', 'negoti', 'given', 'out', 'thailand', 'announc', 'devic', 'run', 'thai', 'like', 'separ', 'sunni', 'sheikh', 'blast', 'capit', 'allianc', 'nigerian', 'prevent', 'build', 'tourism', 'secur', 'birthday', 'west', 'ceasefir', 'summari', 'home', 'kashmir', 'kosovo', 'week', 'affect', 'such', 'shabak', 'front', 'relat', 'apost', 'increas', 'obama', 'hama', 'isra', 'threat', 'impos', 'ban', 'local', 'perpetr', 'franc', 'futur', 'meet', 'sri', 'terror', 'larger', 'qa', 'known', 'result', 'down', 'their', 'imprison', 'capitalist', 'produc', 'clash', 'attent', 'somali', 'as', 'agenc', 'speak', 'movement', 'claim', 'iraqi', 'india', 'chechnya', 'practic', 'stay', 'him', 'sharia', 'sabotag', 'littl'}\n"
],
[
"# Print the words that likely contributed to false negative classification\nprint(set(fn_words) & set(negcl_wrds))",
"{'assassin', 'amisom', 'levant', 'zone', 'money', 'fighter', 'muttahida', 'anim', 'garo', 'yala', 'youth', 'illeg', 'focus', 'strip', 'aqi', 'replac', 'unknown', 'black', 'intimid', 'star', 'anp', 'afghan', 'promot', 'so', 'were', 'governor', 'southern', 'ndfb', 'foreign', 'mazioti', 'kidnapp', 'threaten', 'destruct', 'intern', 'pkk', 'organ', 'lead', 'hassan', 'rel', 'assail', 'democrat', 'confisc', 'them', 'parliamentari', 'weaken', 'especi', 'than', 'captur', 'jammu', 'network', 'send', 'have', 'and', 'medic', 'persecut', 'shabaab', 'activ', 'arabia', 'anoth', 'pay', 'act', 'first', 'abduct', 'taken', 'kurdistan', 'nato', 'not', 'worker', 'between', 'countri', 'demonstr', 'colleg', 'refus', 'aid', 'destroy', 'kurd', 'hi', 'amid', 'indic', 'memori', 'rise', 'train', 'leav', 'panchayat', 'allegedli', 'excess', 'qaida', 'frequenc', 'oper', 'block', 'secret', 'offici', 'highway', 'necessari', 'intellig', 'campaign', 'tortur', 'past', 'drive', 'atroc', 'obtain', 'equip', 'two', 'month', 'region', 'shutdown', 'moreov', 'elimin', 'murder', 'assembl', 'ethnic', 'individu', 'gspc', 'rivalri', 'previous', 'intent', 'commun', 'releas', 'propos', 'condemn', 'hizbul', 'connect', 'get', 'wa', 'caucasu', 'tripoli', 'inform', 'assam', 'at', 'ehsan', 'sourc', 'th', 'treat', 'women', 'had', 'bahadur', 'british', 'illegitim', 'territori', 'eelam', 'industri', 'bin', 'retribut', 'poll', 'also', 'hostag', 'morcha', 'violat', 'enter', 'arm', 'marin', 'inabl', 'soldier', 'occur', 'letter', 'press', 'christma', 'develop', 'hate', 'coverag', 'inmat', 'chief', 'join', 'from', 'resort', 'assault', 'took', 'well', 'momentum', 'are', 'kill', 'suicid', 'hawijah', 'road', 'tribal', 'leftist', 'across', 'al', 'convert', 'involv', 'referendum', 'need', 'both', 'element', 'they', 'over', 'gorkha', 'though', 'few', 'gain', 'land', 'hope', 'provinc', 'violenc', 'harvest', 'to', 'deal', 'hour', 'cell', 'yemen', 'mehsud', 'arafat', 'construct', 'possess', 'find', 'dawn', 'when', 'npa', 'advis', 'befor', 'influenc', 'held', 'civilian', 'unit', 'pakistan', 'advanc', 'stolen', 'offic', 'emancip', 'carri', 'about', 'separatist', 'her', 'ehsanullah', 'ireland', 'jeay', 'prior', 'follow', 'incom', 'arson', 'ali', 'name', 'club', 'remov', 'or', 'step', 'peninsula', 'jamaat', 'abubakar', 'wide', 'june', 'consid', 'holiday', 'along', 'did', 'cleric', 'cut', 'epdp', 'burn', 'anarchist', 'anti', 'eln', 'assert', 'support', 'similar', 'appear', 'cite', 'sectarian', 'repres', 'tamil', 'law', 'hazara', 'detent', 'spell', 'neg', 'candid', 'made', 'kidnap', 'earlier', 'year', 'suggest', 'mahaz', 'demand', 'former', 'bodo', 'no', 'lee', 'gorkhaland', 'dure', 'group', 'abdul', 'free', 'victim', 'dismiss', 'outskirt', 'coordin', 'fsa', 'arabian', 'bengal', 'professor', 'end', 'fight', 'escal', 'truck', 'awami', 'strict', 'protest', 'east', 'jharkhand', 'ha', 'loss', 'discrimin', 'chairman', 'muslim', 'african', 'pro', 'forc', 'steven', 'accus', 'just', 'play', 'main', 'mnlf', 'member', 'departur', 'on', 'council', 'for', 'non', 'until', 'shortli', 'respect', 'palestinian', 'radio', 'ahmad', 'ski', 'tri', 'chapter', 'mujahideen', 'posit', 'natur', 'maoist', 'incit', 'extremist', 'agenda', 'pattani', 'outspoken', 'hinder', 'tension', 'yemeni', 'prosecutor', 'jewish', 'august', 'pilgrimag', 'off', 'who', 'ignor', 'oppos', 'rais', 'interpret', 'be', 'commit', 'seiz', 'famili', 'world', 'film', 'sirt', 'work', 'constitu', 'zamboanga', 'shot', 'superintend', 'evacu', 'exact', 'with', 'all', 'ballot', 'resid', 'those', 'kind', 'polit', 'coup', 
'vehicl', 'nimr', 'remain', 'becaus', 'abdullah', 'elector', 'advoc', 'program', 'lack', 'parenthood', 'explain', 'januari', 'maintain', 'note', 'toward', 'wall', 'strife', 'itali', 'addit', 'voter', 'light', 'after', 'rigbi', 'he', 'post', 'embargo', 'take', 'time', 'reveng', 'clinic', 'station', 'phase', 'koran', 'combat', 'drug', 'say', 'complex', 'decemb', 'racial', 'start', 'hm', 'later', 'moder', 'jtmm', 'that', 'unrest', 'despit', 'other', 'narathiwat', 'border', 'trend', 'gener', 'muhammad', 'employe', 'kirkuk', 'repris', 'raid', 'jamiat', 'jsmm', 'exploit', 'eight', 'particip', 'plantat', 'tag', 'settler', 'previou', 'alleg', 'russian', 'algerian', 'second', 'want', 'marxist', 'spray', 'spokesperson', 'councilor', 'journalist', 'three', 'counter', 'suspect', 'repair', 'busi', 'hindu', 'poster', 'behind', 'shepherd', 'polici', 'death', 'villag', 'mqm', 'upcom', 'due', 'april', 'abort', 'around', 'republ', 'keep'}\n"
]
],
[
[
"The removal of such words could help in reduction of false positives and false negatives. However, there is a limit to how much of these words can be removed. Care should be excercised to not overtune the model such that it loses it's generalizability and ends up with increased false positives and false negatives in the predictions. Explore the effects of removing 50 such words (leading to misclassification) for false positive classes.\n",
"_____no_output_____"
],
[
"Examine effect of model performance with removal of the identified words leading to false negative classifications. roc_auc measures the trade-off between sensitivity and specificity. While sensitivity is the metric to optimize for, currently, the sensitivity is quite good (85%). Explore if the specificity could be improved without impacting sensitivity too adversely.",
"_____no_output_____"
]
],
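[
[
"Besides removing vocabulary, the same sensitivity/specificity trade-off can be explored by moving the decision threshold away from the default 0.5 (hedged sketch, not part of the original workflow; it assumes `lr_prod`, `X_test_cleancv` and `y_test` from the cells above):\n\n```python\nfrom sklearn.metrics import confusion_matrix\n\nproba = lr_prod.predict_proba(X_test_cleancv)[:, 1]\nfor t in [0.4, 0.5, 0.6]:\n    tn, fp, fn, tp = confusion_matrix(y_test, (proba >= t).astype(int)).ravel()\n    print(t, 'sensitivity:', round(tp / (tp + fn), 4), 'specificity:', round(tn / (tn + fp), 4))\n```",
"_____no_output_____"
]
],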
[
[
"# Reset stopwords\ns_words = stopwords.words('english')\n# Instantiate the custom list of stopwords for modelling from P5_01\nown_stop = ['motive','specific','unknown','attack','sources','noted', 'claimed','stated','incident','targeted',\\\n 'responsibility','violence','carried','government','suspected','trend','speculated','al','sectarian',\\\n 'retaliation','group','related','security','forces','people','bomb','bombing','bombings']",
"_____no_output_____"
],
[
"# Create 2nd set stopword (50 words)\n\nown_stopfn = ['shutdown', 'journalist', 'candid', 'pattani', 'intern', 'shortli', 'rais', 'select', 'palestinian',\\\n 'assail', 'al', 'member', 'shaykh', 'violenc', 'unrest', 'august', 'those', 'muhammad', 'arafat', 'strip',\\\n 'network', 'natur', 'maoist', 'between', 'concern', 'sourc', 'burn', 'offic', 'frequent', 'organ', 'time',\\\n 'civilian', 'region', 'fighter', 'alleg', 'remain', 'egyptian', 'reform', 'repris', 'muslim', 'tamil',\\\n 'italian', 'main', 'am', 'assault', 'with', 'victim', 'black', 'kirkuk', 'novemb']",
"_____no_output_____"
],
[
"# Extend the Stop words\ns_words.extend(own_stopfn)\n# Check the addition of firstset_words\ns_words[-5:]",
"_____no_output_____"
]
],
[
[
"Define function to clean the motive text, remove stopwords. Note the derived list of misclassification words are stemmed word output. Therefore, rearrange the stemming before stopwords remover in defined function below.",
"_____no_output_____"
]
],
[
[
"def selftext_to_wordsr(motive_text):\n \n # 1. Remove non-letters.\n letters_only = re.sub(\"[^a-zA-Z]\", \" \", motive_text)\n \n # 2. Split into individual words\n words = letters_only.split()\n \n # 5.5 Stemming of words\n meaningful_words = [p_stemmer.stem(w) for w in words] \n # 3. In Python, searching a set is much faster than searching\n \n # a list, so convert the stopwords to a set.\n stops = set(s_words)\n\n # 5. Remove stopwords.\n meaningful_words = [w for w in words if w not in stops]\n \n # 6. Join the words back into one string separated by space, \n # and return the result\n return(\" \".join(meaningful_words))",
"_____no_output_____"
],
[
"#Initialize an empty list to hold the clean test text.\nX_train_cleanr = []\nX_test_cleanr = []\n\n# Using whole of train set (i.e. trainsub and validate set)\nfor text in X_train['motive']:\n \"\"\"Convert text to words, then append to X_trainf_cleanr.\"\"\"\n X_train_cleanr.append(selftext_to_wordsr(text))\n \n# For test set\nfor text in X_test['motive']:\n \"\"\"Convert text to words, then append to X_test_cleanr.\"\"\"\n X_test_cleanr.append(selftext_to_wordsr(text))",
"_____no_output_____"
],
[
"# Instantiate our CountVectorizer\ncv = CountVectorizer(ngram_range=(1,2),max_df=0.9,min_df=3,max_features=10000)\n\n# Fit and transform on whole training data\nX_train_cleancvr = cv.fit_transform(X_train_cleanr)\n\n# Transform test data\nX_test_cleancvr = cv.transform(X_test_cleanr)",
"_____no_output_____"
],
[
"# Instantiate model\nlr_prod = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)\n\n# Fit model on whole training data\nmodel_lrprod = lr_prod.fit(X_train_cleancvr,y_train)\n\n# Generate predictions from test set\npred_lrprod = lr_prod.predict(X_test_cleancvr)\nprint(f\"Accuracy on whole test set: {model_lrprod.score(X_test_cleancvr, y_test)}\")",
"Accuracy on whole test set: 0.6913143735588009\n"
]
],
[
[
"There is an slight decrease in test accuracy with the removal of the stop words contributing to false negatives. ",
"_____no_output_____"
]
],
[
[
"# Confusion matrix for test set using NB model\n# Pass in true values, predicted values to confusion matrix\n# Convert Confusion matrix into dataframe\n# Positive class (class 1) is bomb\ncm = confusion_matrix(y_test, pred_lrprod)\ncm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])\ncm_df",
"_____no_output_____"
]
],
[
[
"Number of false negatives (predict non-bomb but is actual bomb) reduced by 8 from former 439 to 431, with the removal of 50 words contributing to false negatives.",
"_____no_output_____"
]
],
[
[
"# return nparray as a 1-D array.\nconfusion_matrix(y_test, pred_lrprod).ravel()\n\n# Save TN/FP/FN/TP values.\ntn, fp, fn, tp = confusion_matrix(y_test, pred_lrprod).ravel()\n\n# Summary of metrics for LR model\nspec = tn/(tn+fp)\nsens = tp/(tp+fn)\nprint(f\"Specificity: {round(spec,4)}\")\nprint(f\"Sensitivity: {round(sens,4)}\")\n\n# To compute the ROC AUC curve, first\n# Create a dataframe called pred_df that contains:\n# 1. The list of true values of our test set.\n# 2. The list of predicted probabilities based on our model.\n\npred_proba = [i[1] for i in lr_prod.predict_proba(X_test_cleancvr)]\n\npred_df = pd.DataFrame({'test_values': y_test,\n 'pred_probs':pred_proba})\n\n# Calculate ROC AUC.\nprint(f\"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}\")",
"Specificity: 0.5442\nSensitivity: 0.8529\nroc_auc: 0.7619\n"
],
[
"#Create figure\nplt.figure(figsize = (10,7))\n\n# Create threshold values. (Dashed blue line in plot.)\nthresholds = np.linspace(0, 1, 200)\n\n# Define function to calculate sensitivity. (True positive rate.)\ndef TPR(df, true_col, pred_prob_col, threshold):\n true_positive = df[(df[true_col] == 1) & (df[pred_prob_col] >= threshold)].shape[0]\n false_negative = df[(df[true_col] == 1) & (df[pred_prob_col] < threshold)].shape[0]\n return true_positive / (true_positive + false_negative)\n \n# Define function to calculate 1 - specificity. (False positive rate.)\ndef FPR(df, true_col, pred_prob_col, threshold):\n true_negative = df[(df[true_col] == 0) & (df[pred_prob_col] <= threshold)].shape[0]\n false_positive = df[(df[true_col] == 0) & (df[pred_prob_col] > threshold)].shape[0]\n return 1 - (true_negative / (true_negative + false_positive))\n \n# Calculate sensitivity & 1-specificity for each threshold between 0 and 1.\ntpr_values = [TPR(pred_df, 'test_values', 'pred_probs', prob) for prob in thresholds]\nfpr_values = [FPR(pred_df, 'test_values', 'pred_probs', prob) for prob in thresholds]\n\n# Plot ROC curve.\nplt.plot(fpr_values, # False Positive Rate on X-axis\n tpr_values, # True Positive Rate on Y-axis\n label='ROC Curve')\n\n# Plot baseline. (Perfect overlap between the two populations.)\nplt.plot(np.linspace(0, 1, 200),\n np.linspace(0, 1, 200),\n label='baseline',\n linestyle='--')\n\n# Label axes.\nplt.title(f'ROC Curve with AUC = {round(roc_auc_score(pred_df[\"test_values\"], pred_df[\"pred_probs\"]),4)}', fontsize=22)\nplt.ylabel('Sensitivity', fontsize=18)\nplt.xlabel('1 - Specificity', fontsize=18)\n\n# Create legend.\nplt.legend(fontsize=16);",
"_____no_output_____"
],
[
"# Summary of the production Log Reg Model scores in Dataframe\nsummary_df = pd.DataFrame({'accuracy' : [0.6898, 0.6859],\n 'specificity' : [0.5363, 0.5266],\n 'sensitivity' : [0.8584, 0.861],\n 'roc_auc' : [0.7622, 0.7566]})\n# Transpose dataframe\nsummary_dft = summary_df.T\n# Rename columns\nsummary_dft.columns = ['LogReg model','LogReg model (50 fn wrds removed)']\nsummary_dft",
"_____no_output_____"
]
],
[
[
"There is marginal increase in sensitivity but at a cost to the roc_auc. Explore the effects of removing significant overlap of words between classes in this section onwards.",
"_____no_output_____"
]
],
[
[
"# Visualize extent of word feature-class overlap\n# Create figure.\nplt.figure(figsize = (20,7))\n\n# Create two histograms of observations.\nplt.hist(pred_df[pred_df['test_values'] == 0]['pred_probs'],\n bins=25,\n color='b',\n alpha = 0.6,\n label='bomb = 0')\nplt.hist(pred_df[pred_df['test_values'] == 1]['pred_probs'],\n bins=25,\n color='orange',\n alpha = 0.6,\n label='bomb = 1')\n\n# Add vertical line at P(Outcome = 1) = 0.5.\nplt.vlines(x=0.5,\n ymin = 0,\n ymax = 1000,\n color='r',\n linestyle = '--')\n\n# Label axes.\nplt.title('Distribution of P(bomb = 1)', fontsize=22)\nplt.ylabel('Frequency', fontsize=18)\nplt.xlabel('Predicted Probability that bomb = 1', fontsize=18)\nplt.xticks(np.arange(0, 1.01, 0.01),rotation=90, fontsize=10)\n# Commented out the 2 lines below to display overview; \n# pls comment above line out and below 2 lines in to see the emphirical range of overlap in detail\n#plt.xticks(np.arange(0, 1.0005, 0.0005),rotation=90, fontsize=10)\n#plt.xlim(0.518,0.559)\n# Create legend.\nplt.legend(fontsize=20);",
"_____no_output_____"
]
],
[
[
"There is the near ubiquitous presence of word features contributing to misclassifications inherent in the dataset. In particular, there is **significant occurring overlaps for word features in the prediction probability (0.518-0.559) range**. Note the overlap extent is **not a perfect overlap**; there is a small percentage that is purely dominant positive class. One **can adjust the code block above (commented out lines) to review the overlap in greater resolution (if desired)**. This section onwards explore if the removal these significant occurring and overlapped words, leaving the dominant positive class words would help improve model performance.",
"_____no_output_____"
]
],
[
[
"# Utilize the inputs from Log Reg model\ndf_locate = pred_df.copy()\n# Create column of predicted classes from Log Reg model\ndf_locate['predicted_bomb'] = pred_lrprod\n# Include the selftext data\ndf_locate['motive'] = X_test_clean",
"_____no_output_____"
],
[
"# Review the dataframe\ndf_locate.head()",
"_____no_output_____"
],
[
"# Filter the words\ndf_locate_filtr = df_locate.loc[(df_locate['pred_probs']>=0.518) & (df_locate['pred_probs']<=0.559)]",
"_____no_output_____"
],
[
"# Extract the Log Reg model coefficients and the word features\ndf_r1 = pd.DataFrame(lr_prod.coef_[0,:])\ndf_r1.columns = ['coef']\ndf_r2=pd.DataFrame(cv.get_feature_names())\ndf_r2.columns=['wrd_features']\ndf_rc = pd.concat([df_r2,df_r1], axis=1)",
"_____no_output_____"
],
[
"# Sort word features by coef value\nlrsorted_fea = df_rc.sort_values('coef',ascending=False)\nlrsorted_fea.reset_index(drop=True,inplace=True)",
"_____no_output_____"
],
[
"# word features\nwrdft = lrsorted_fea.copy()\nwrds = wrdft['wrd_features'].tolist()",
"_____no_output_____"
],
[
"# Empty list to hold false positive words\nmisclass_words = []\n# Split string in false positive selftext to individual words\n# Populate the empty list\nfor i in df_locate_filtr['motive'].index:\n words = df_locate_filtr['motive'][i].split()\n misclass_words.extend(words)",
"_____no_output_____"
],
[
"# Print the words that likely contributed to false positive classification\n# with pred proba 0.52-0.56 \nprint(set(wrds) & set(misclass_words))",
"{'ritual', 'would', 'christian', 'levant', 'iraq', 'district', 'zone', 'period', 'islam', 'step', 'money', 'humanitarian', 'peninsula', 'commit', 'doctor', 'uganda', 'within', 'isil', 'arab', 'town', 'get', 'string', 'plan', 'nuclear', 'along', 'extort', 'puerto', 'although', 'bangsamoro', 'syrian', 'rebel', 'unknown', 'stop', 'lankan', 'africa', 'order', 'tehrik', 'treatment', 'assam', 'gaza', 'news', 'anarchist', 'anti', 'could', 'eln', 'abu', 'cairo', 'sent', 'support', 'sign', 'leader', 'request', 'th', 'women', 'governor', 'male', 'state', 'sectarian', 'pakistani', 'muharram', 'law', 'day', 'hazara', 'camp', 'businessmen', 'jihad', 'list', 'air', 'eelam', 'union', 'personnel', 'control', 'iran', 'poll', 'bomb', 'kidnap', 'earlier', 'enter', 'gave', 'assad', 'disrupt', 'food', 'rico', 'nusrah', 'demand', 'hebdo', 'note', 'ltte', 'price', 'target', 'paper', 'put', 'occur', 'attempt', 'meant', 'school', 'shop', 'american', 'symbol', 'bashar', 'draw', 'letter', 'accord', 'one', 'taliban', 'karachi', 'group', 'given', 'thailand', 'jammu', 'vietnam', 'run', 'free', 'like', 'sunni', 'progress', 'among', 'make', 'allow', 'station', 'shabaab', 'direct', 'came', 'interview', 'fsa', 'seek', 'phase', 'area', 'chief', 'khan', 'bengal', 'politician', 'owner', 'draft', 'cpi', 'freedom', 'command', 'racial', 'fight', 'west', 'bifm', 'mujao', 'truck', 'project', 'french', 'bank', 'kashmir', 'move', 'took', 'terrorist', 'week', 'protest', 'kill', 'statement', 'reaction', 'act', 'first', 'hawijah', 'un', 'power', 'show', 'tribal', 'establish', 'warn', 'trend', 'front', 'african', 'ulfa', 'resist', 'pro', 'nation', 'war', 'part', 'raid', 'regard', 'final', 'aid', 'farc', 'destroy', 'niger', 'biddam', 'ban', 'local', 'amid', 'head', 'algeria', 'gain', 'payment', 'tactic', 'meet', 'mali', 'council', 'oil', 'sri', 'men', 'communist', 'place', 'larger', 'hour', 'non', 'may', 'mission', 'russian', 'algerian', 'white', 'result', 'want', 'oon', 'panchayat', 'northern', 'aim', 'spokesperson', 'rule', 'qi', 'ukrainian', 'militia', 'recent', 'three', 'media', 'children', 'block', 'secret', 'held', 'posit', 'highway', 'elect', 'suspect', 'musab', 'somalia', 'clash', 'extremist', 'campaign', 'unit', 'nscn', 'pakistan', 'sinai', 'peshmerga', 'strike', 'lebanon', 'hinder', 'zionist', 'ttp', 'crime', 'blood', 'hand', 'separatist', 'major', 'report', 'mua', 'movement', 'met', 'prior', 'death', 'claim', 'iraqi', 'tiger', 'attack', 'india', 'blame', 'april', 'disturb', 'keep'}\n"
],
[
"# Create 2nd set stopword\nown_stop2 = ['make', 'anti', 'bifm', 'food', 'assad', 'station', 'lankan', 'children', 'governor', 'progress',\\\n 'ukrainian', 'payment', 'note', 'nusrah', 'three', 'male', 'jammu', 'highway', 'act', 'list', 'move',\\\n 'arab', 'white', 'elect', 'could', 'tribal', 'day', 'rebel', 'may', 'order', 'musab', 'secret', 'mua',\\\n 'men', 'extort', 'nscn', 'algerian', 'gain', 'tehrik', 'although', 'local', 'disturb', 'war', 'puerto',\\\n 'among', 'oon', 'would', 'prior', 'attack', 'eelam', 'met', 'eln', 'owner', 'taliban', 'took', 'union',\\\n 'zone', 'aid', 'doctor', 'accord', 'commit', 'truck', 'result', 'oil', 'cairo', 'aim', 'show', 'support',\\\n 'disrupt', 'peshmerga', 'group', 'northern', 'iraq', 'rico', 'mujao', 'african', 'target', 'major',\\\n 'project', 'pro', 'bomb', 'biddam', 'farc', 'meant', 'lebanon', 'stop', 'place', 'establish', 'communist',\\\n 'direct', 'poll', 'sunni', 'power', 'within', 'protest', 'report', 'state', 'hinder', 'occur', 'april',\\\n 'shop', 'sectarian', 'hazara', 'amid', 'kidnap', 'step', 'posit', 'fsa', 'raid', 'terrorist', 'bangsamoro',\\\n 'letter', 'shabaab', 'news', 'movement', 'suspect', 'racial', 'warn', 'head', 'enter', 'came', 'algeria',\\\n 'treatment', 'bashar', 'ltte', 'control', 'assam', 'khan', 'destroy', 'resist', 'islam', 'ritual',\\\n 'peninsula', 'crime', 'block', 'west', 'karachi', 'like', 'demand', 'non', 'allow', 'chief', 'sinai',\\\n 'pakistani', 'held', 'price', 'french', 'death', 'want', 'earlier', 'extremist', 'gave', 'along', 'hawijah',\\\n 'niger', 'anarchist', 'media', 'claim', 'levant', 'town', 'bengal', 'phase', 'nation', 'iraqi', 'russian',\\\n 'area', 'draft', 'thailand', 'symbol', 'command', 'pakistan', 'un', 'unit', 'first', 'hebdo', 'front',\\\n 'ban', 'given', 'regard', 'fight', 'panchayat', 'air', 'freedom', 'qi', 'humanitarian', 'district', 'sent',\\\n 'one', 'bank', 'muharram', 'mission', 'sri', 'spokesperson', 'week', 'draw', 'interview', 'militia',\\\n 'leader', 'trend', 'tiger', 'strike', 'recent', 'seek', 'zionist', 'reaction', 'council', 'run', 'nuclear',\\\n 'paper', 'get', 'camp', 'th', 'unknown', 'blame', 'women', 'put', 'iran', 'india', 'christian', 'ulfa',\\\n 'attempt', 'law', 'rule', 'sign', 'meet', 'part', 'personnel', 'money', 'final', 'statement', 'blood',\\\n 'clash', 'kill', 'larger', 'keep', 'africa', 'request', 'period', 'tactic', 'syrian', 'abu', 'campaign',\\\n 'gaza', 'kashmir', 'vietnam', 'politician', 'hand', 'uganda', 'separatist', 'isil', 'school', 'mali',\\\n 'businessmen', 'ttp', 'plan', 'somalia', 'cpi', 'free', 'american', 'hour', 'string', 'jihad']",
"_____no_output_____"
],
[
"# Reset the stopwords\ns_words = stopwords.words('english')\n# Instantiate the custom list of stopwords for modelling from P5_01\nown_stop = ['motive','specific','unknown','attack','sources','noted', 'claimed','stated','incident','targeted',\\\n 'responsibility','violence','carried','government','suspected','trend','speculated','al','sectarian',\\\n 'retaliation','group','related','security','forces','people','bomb','bombing','bombings']",
"_____no_output_____"
],
[
"# Extend the Stop words\ns_words.extend(own_stop)\ns_words.extend(own_stop2)\n#s_words.extend(own_stopfn)\n# Check the addition of firstset_words\ns_words[-5:]",
"_____no_output_____"
]
],
[
[
"Define function to clean the motive text, remove stopwords. Note the derived list of false positive and false negative words are stemmed word output. Therefore, rearrange the stemming before stopwords remover in defined function below.",
"_____no_output_____"
]
],
[
[
"def selftext_to_wordsr(motive_text):\n \n # 1. Remove non-letters.\n letters_only = re.sub(\"[^a-zA-Z]\", \" \", motive_text)\n \n # 2. Split into individual words\n words = letters_only.split()\n \n # 5.5 Stemming of words\n meaningful_words = [p_stemmer.stem(w) for w in words] \n # 3. In Python, searching a set is much faster than searching\n \n # a list, so convert the stopwords to a set.\n stops = set(s_words)\n\n # 5. Remove stopwords.\n meaningful_words = [w for w in words if w not in stops]\n \n # 6. Join the words back into one string separated by space, \n # and return the result\n return(\" \".join(meaningful_words))",
"_____no_output_____"
],
[
"#Initialize an empty list to hold the clean test text.\nX_train_cleanr = []\nX_test_cleanr = []\n\n# Using whole of train set (i.e. trainsub and validate set)\nfor text in X_train['motive']:\n \"\"\"Convert text to words, then append to X_trainf_cleanr.\"\"\"\n X_train_cleanr.append(selftext_to_wordsr(text))\n \n# For test set\nfor text in X_test['motive']:\n \"\"\"Convert text to words, then append to X_test_cleanr.\"\"\"\n X_test_cleanr.append(selftext_to_wordsr(text))",
"_____no_output_____"
],
[
"# Instantiate our CountVectorizer\ncv = CountVectorizer(ngram_range=(1,2),max_df=0.9,min_df=3,max_features=10000)\n\n# Fit and transform on whole training data\nX_train_cleancvr = cv.fit_transform(X_train_cleanr)\n\n# Transform test data\nX_test_cleancvr = cv.transform(X_test_cleanr)",
"_____no_output_____"
],
[
"# Instantiate model\nlr_prod = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)\n\n# Fit model on whole training data\nmodel_lrprod = lr_prod.fit(X_train_cleancvr,y_train)\n\n# Generate predictions from test set\npred_lrprod = lr_prod.predict(X_test_cleancvr)\nprint(f\"Accuracy on whole test set: {model_lrprod.score(X_test_cleancvr, y_test)}\")",
"Accuracy on whole test set: 0.6840891621829363\n"
]
],
[
[
"There is an slight decrease in test accuracy with the removal of the stop words contributing to false positives and false negatives. Review the other metrics before concluding the analysis.",
"_____no_output_____"
]
],
[
[
"# Confusion matrix for test set using NB model\n# Pass in true values, predicted values to confusion matrix\n# Convert Confusion matrix into dataframe\n# Positive class (class 1) is googlehome\ncm = confusion_matrix(y_test, pred_lrprod)\ncm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])\ncm_df",
"_____no_output_____"
],
[
"# return nparray as a 1-D array.\nconfusion_matrix(y_test, pred_lrprod).ravel()\n\n# Save TN/FP/FN/TP values.\ntn, fp, fn, tp = confusion_matrix(y_test, pred_lrprod).ravel()\n\n# Summary of metrics for naive bayes model\nspec = tn/(tn+fp)\nsens = tp/(tp+fn)\nprint(f\"Specificity: {round(spec,4)}\")\nprint(f\"Sensitivity: {round(sens,4)}\")\n\n# To compute the ROC AUC curve, first\n# Create a dataframe called pred_df that contains:\n# 1. The list of true values of our test set.\n# 2. The list of predicted probabilities based on our model.\n\npred_proba = [i[1] for i in lr_prod.predict_proba(X_test_cleancvr)]\n\npred_df = pd.DataFrame({'test_values': y_test,\n 'pred_probs':pred_proba})\n\n# Calculate ROC AUC.\nprint(f\"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}\")",
"Specificity: 0.5242\nSensitivity: 0.8597\nroc_auc: 0.7526\n"
],
[
"#Create figure\nplt.figure(figsize = (10,7))\n\n# Create threshold values. (Dashed blue line in plot.)\nthresholds = np.linspace(0, 1, 200)\n\n# Define function to calculate sensitivity. (True positive rate.)\ndef TPR(df, true_col, pred_prob_col, threshold):\n true_positive = df[(df[true_col] == 1) & (df[pred_prob_col] >= threshold)].shape[0]\n false_negative = df[(df[true_col] == 1) & (df[pred_prob_col] < threshold)].shape[0]\n return true_positive / (true_positive + false_negative)\n \n# Define function to calculate 1 - specificity. (False positive rate.)\ndef FPR(df, true_col, pred_prob_col, threshold):\n true_negative = df[(df[true_col] == 0) & (df[pred_prob_col] <= threshold)].shape[0]\n false_positive = df[(df[true_col] == 0) & (df[pred_prob_col] > threshold)].shape[0]\n return 1 - (true_negative / (true_negative + false_positive))\n \n# Calculate sensitivity & 1-specificity for each threshold between 0 and 1.\ntpr_values = [TPR(pred_df, 'test_values', 'pred_probs', prob) for prob in thresholds]\nfpr_values = [FPR(pred_df, 'test_values', 'pred_probs', prob) for prob in thresholds]\n\n# Plot ROC curve.\nplt.plot(fpr_values, # False Positive Rate on X-axis\n tpr_values, # True Positive Rate on Y-axis\n label='ROC Curve')\n\n# Plot baseline. (Perfect overlap between the two populations.)\nplt.plot(np.linspace(0, 1, 200),\n np.linspace(0, 1, 200),\n label='baseline',\n linestyle='--')\n\n# Label axes.\nplt.title(f'ROC Curve with AUC = {round(roc_auc_score(pred_df[\"test_values\"], pred_df[\"pred_probs\"]),4)}', fontsize=22)\nplt.ylabel('Sensitivity', fontsize=18)\nplt.xlabel('1 - Specificity', fontsize=18)\n\n# Create legend.\nplt.legend(fontsize=16);",
"_____no_output_____"
],
[
"# Summary of the production Log Reg Model scores in Dataframe\nsummary_df = pd.DataFrame({'accuracy' : [0.6898, 0.6859, 0.6841],\n 'specificity' : [0.5363, 0.5266, 0.5242],\n 'sensitivity' : [0.8584, 0.861, 0.8597],\n 'roc_auc' : [0.7622, 0.7566, 0.7526]})\n\n# Transpose dataframe\nsummary_dft = summary_df.T\n# Rename columns\nsummary_dft.columns = ['LR modl','LR mdl (50 fn wrd rmvd)', 'LR mdl (sigfn ovrlap wrd rmvd)']\nsummary_dft",
"_____no_output_____"
]
],
[
[
"The results confirms the hypothesis that removal of these overlap words would not necessarily improve the model performance (sensitivity and roc_auc). Reason being the words removal are also a dominant feature for the class 1 prediction, and there exists a variance between the proportion of overlap (higher proportion on positive bomb class). This means sensitivity could decrease with the removal of the words. If the overlap gap is not as large (e.g. in order of 50-100 instances), it is expected that the sensitivty may not decrease as much.",
"_____no_output_____"
],
[
"### Recommendations (Part1)",
"_____no_output_____"
],
[
"The Logistic Regression (LR) model performs better than the naive bayes model in terms of the sensitivity and roc_auc scores. In general, the LR production model versions has good sensitivity and roc_auc (priority is to minimize false negatives) above 85% and 75% respectively. False positive is not considered a high cost for CT, since they will be expecting an terror incident). Considering the sensitivity and roc_auc score, I propose the second logistic regression model (50 false negative words removed) as the finalized production model (best sensitivity score, with roc_auc above 75%. \n\nThe removal of common occurring words (between classes) that has low frequency is a common technique used for tuning model performance; however, from the distribution of word features between the two classes, this is assessed to be not suitable for this particular dataset.\n\nThe exploration of topic modeling for text classification is explored in the next notebook 5.2.",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecbce1306759814e8cd9eca53cfe4f9efe0e2740 | 36,224 | ipynb | Jupyter Notebook | src/2Unet3D_Overtime.ipynb | JohSchoeneberg/pyLattice_deepLearning | 12da9e970af0d58ec3270de0eccb0493a59d479d | [
"BSD-3-Clause"
] | 4 | 2020-08-19T03:47:09.000Z | 2020-12-30T13:55:01.000Z | src/2Unet3D_Overtime.ipynb | JohSchoeneberg/pyLattice_deepLearning | 12da9e970af0d58ec3270de0eccb0493a59d479d | [
"BSD-3-Clause"
] | 5 | 2020-09-26T01:22:29.000Z | 2022-02-10T02:17:09.000Z | src/2Unet3D_Overtime.ipynb | JohSchoeneberg/pyLattice_deepLearning | 12da9e970af0d58ec3270de0eccb0493a59d479d | [
"BSD-3-Clause"
] | 1 | 2021-09-29T20:00:29.000Z | 2021-09-29T20:00:29.000Z | 99.516484 | 17,024 | 0.798945 | [
[
[
"import os\nimport sys\nimport random\n\n#TODO Make universal imports for all py files?\n\n#os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"-1\"\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau\nsys.modules['keras'] = keras\n\nimport numpy as np\n\nfrom dice import dice_coef, dice_loss\nfrom generator import DataGen\nfrom visualize import display_slice_from_batch\n\nseed = 2019\nrandom.seed = seed\n#TODO make config[seed] and fix below\n#np.random.seed = seed\ntf.seed = seed",
"_____no_output_____"
],
[
"image_size = 96\npatch_size = 48\npercent_covered = 1e-10\ntrain_path = \"dataset/train\"\nmodel_path = \"jul21_48_nonzero_standardized_global.h5\"\nepochs = 500\npatience = 50\nbatch_size = 2\n\ntrain_ids = next(os.walk(train_path))[1] # Returns all directories under train_path\n\nval_data_size = 8 # Needs to be greater than batch_size\n\nvalid_ids = train_ids[:val_data_size]\ntrain_ids = train_ids[val_data_size:]",
"_____no_output_____"
],
[
"gen = DataGen(train_ids, train_path, batch_size=batch_size, image_size=image_size, patch_size=patch_size,\n percent_covered = percent_covered)\nx, y = gen.__getitem__(0)\nprint(x.shape, y.shape)",
"(16, 48, 48, 48, 1) (16, 48, 48, 48, 1)\n"
],
[
"n=2\nz=40\n\ndisplay_slice_from_batch(x, n, z)\nprint(x[n, :, :, z].shape)\nprint(np.amax(x[n, :, :, z]))\n\n\ndisplay_slice_from_batch(y, n, z)\n",
"(48, 48, 1)\n6.287474\n"
],
[
"def down_block(x, filters, kernel_size=(3, 3, 3), padding=\"same\", strides=(1, 1, 1)):\n c = keras.layers.Conv3D(filters, kernel_size, padding=padding, strides=strides, activation=\"relu\")(x)\n c = keras.layers.Conv3D(filters*2, kernel_size, padding=padding, strides=strides, activation=\"relu\")(c)\n p = keras.layers.MaxPool3D((2, 2, 2))(c)\n return c, p\n\ndef up_block(x, skip, filters, kernel_size=(3, 3, 3), padding=\"same\", strides=(1, 1, 1)):\n us = keras.layers.Conv3DTranspose(filters*4, (2, 2, 2), (2, 2, 2))(x)\n concat = keras.layers.Concatenate()([us, skip])\n c = keras.layers.Conv3D(filters*2, kernel_size, padding=padding, strides=strides, activation=\"relu\")(concat)\n c = keras.layers.Conv3D(filters*2, kernel_size, padding=padding, strides=strides, activation=\"relu\")(c)\n return c\n\ndef bottleneck(x, filters, kernel_size=(3, 3, 3), padding=\"same\", strides=(1, 1, 1)):\n c = keras.layers.Conv3D(filters, kernel_size, padding=padding, strides=strides, activation=\"relu\")(x)\n c = keras.layers.Conv3D(filters*2, kernel_size, padding=padding, strides=strides, activation=\"relu\")(c)\n return c",
"_____no_output_____"
],
[
"def UNet():\n #f = [16, 32, 64, 128, 256]\n f = [32, 64, 128, 256]\n inputs = keras.layers.Input((patch_size, patch_size, patch_size, 1))\n \n p0 = inputs\n c1, p1 = down_block(p0, f[0]) #32 -> 16\n c2, p2 = down_block(p1, f[1]) #16 -> 8\n c3, p3 = down_block(p2, f[2]) #8 -> 4\n #c4, p4 = down_block(p3, f[3]) #16->8\n \n bn = bottleneck(p3, f[3])\n \n u1 = up_block(bn, c3, f[2]) #4 -> 8\n u2 = up_block(u1, c2, f[1]) #8 -> 16\n u3 = up_block(u2, c1, f[0]) #16 -> 32\n #u4 = up_block(u3, c1, f[0]) #64 -> 128\n \n outputs = keras.layers.Conv3D(1, (1, 1, 1), padding=\"same\", activation=\"sigmoid\")(u3)\n model = keras.models.Model(inputs, outputs)\n return model",
"_____no_output_____"
],
[
"model = UNet()\nmodel.compile(optimizer=tf.keras.optimizers.Adam(lr=1e-5), loss=dice_loss(smooth=1.), metrics=[dice_coef, 'accuracy'])#, sample_weight_mode=\"temporal\")\nmodel.summary()\n\n#TODO Does valid_gen use percent_covered = 0 or nonzero?\ntrain_gen = DataGen(train_ids, train_path, image_size=image_size, patch_size=patch_size, batch_size=batch_size, percent_covered=percent_covered)\nvalid_gen = DataGen(valid_ids, train_path, image_size=image_size, patch_size=patch_size, batch_size=batch_size, percent_covered=percent_covered)",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 48, 48, 48, 1 0 \n__________________________________________________________________________________________________\nconv3d (Conv3D) (None, 48, 48, 48, 3 896 input_1[0][0] \n__________________________________________________________________________________________________\nconv3d_1 (Conv3D) (None, 48, 48, 48, 6 55360 conv3d[0][0] \n__________________________________________________________________________________________________\nmax_pooling3d (MaxPooling3D) (None, 24, 24, 24, 6 0 conv3d_1[0][0] \n__________________________________________________________________________________________________\nconv3d_2 (Conv3D) (None, 24, 24, 24, 6 110656 max_pooling3d[0][0] \n__________________________________________________________________________________________________\nconv3d_3 (Conv3D) (None, 24, 24, 24, 1 221312 conv3d_2[0][0] \n__________________________________________________________________________________________________\nmax_pooling3d_1 (MaxPooling3D) (None, 12, 12, 12, 1 0 conv3d_3[0][0] \n__________________________________________________________________________________________________\nconv3d_4 (Conv3D) (None, 12, 12, 12, 1 442496 max_pooling3d_1[0][0] \n__________________________________________________________________________________________________\nconv3d_5 (Conv3D) (None, 12, 12, 12, 2 884992 conv3d_4[0][0] \n__________________________________________________________________________________________________\nmax_pooling3d_2 (MaxPooling3D) (None, 6, 6, 6, 256) 0 conv3d_5[0][0] \n__________________________________________________________________________________________________\nconv3d_6 (Conv3D) (None, 6, 6, 6, 256) 1769728 max_pooling3d_2[0][0] \n__________________________________________________________________________________________________\nconv3d_7 (Conv3D) (None, 6, 6, 6, 512) 3539456 conv3d_6[0][0] \n__________________________________________________________________________________________________\nconv3d_transpose (Conv3DTranspo (None, 12, 12, 12, 5 2097664 conv3d_7[0][0] \n__________________________________________________________________________________________________\nconcatenate (Concatenate) (None, 12, 12, 12, 7 0 conv3d_transpose[0][0] \n conv3d_5[0][0] \n__________________________________________________________________________________________________\nconv3d_8 (Conv3D) (None, 12, 12, 12, 2 5308672 concatenate[0][0] \n__________________________________________________________________________________________________\nconv3d_9 (Conv3D) (None, 12, 12, 12, 2 1769728 conv3d_8[0][0] \n__________________________________________________________________________________________________\nconv3d_transpose_1 (Conv3DTrans (None, 24, 24, 24, 2 524544 conv3d_9[0][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 24, 24, 24, 3 0 conv3d_transpose_1[0][0] \n conv3d_3[0][0] \n__________________________________________________________________________________________________\nconv3d_10 (Conv3D) (None, 24, 24, 24, 1 1327232 concatenate_1[0][0] \n__________________________________________________________________________________________________\nconv3d_11 (Conv3D) (None, 24, 24, 24, 1 442496 conv3d_10[0][0] 
\n__________________________________________________________________________________________________\nconv3d_transpose_2 (Conv3DTrans (None, 48, 48, 48, 1 131200 conv3d_11[0][0] \n__________________________________________________________________________________________________\nconcatenate_2 (Concatenate) (None, 48, 48, 48, 1 0 conv3d_transpose_2[0][0] \n conv3d_1[0][0] \n__________________________________________________________________________________________________\nconv3d_12 (Conv3D) (None, 48, 48, 48, 6 331840 concatenate_2[0][0] \n__________________________________________________________________________________________________\nconv3d_13 (Conv3D) (None, 48, 48, 48, 6 110656 conv3d_12[0][0] \n__________________________________________________________________________________________________\nconv3d_14 (Conv3D) (None, 48, 48, 48, 1 65 conv3d_13[0][0] \n==================================================================================================\nTotal params: 19,068,993\nTrainable params: 19,068,993\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"#TODO Account for filtered patchese\ntrain_steps = len(train_ids)*8//batch_size\nvalid_steps = len(valid_ids)*8//batch_size\n\ncallbacks = [EarlyStopping(monitor='val_loss', patience=patience, verbose=1),\n ModelCheckpoint(filepath=model_path, monitor='val_loss', save_best_only=True, verbose=1),\n ReduceLROnPlateau(factor=0.5, patience=10, verbose=1)]\n\nhistory = model.fit_generator(train_gen, validation_data=valid_gen, steps_per_epoch=train_steps, validation_steps=valid_steps, \n epochs=epochs, callbacks=callbacks)",
"Epoch 1/500\n 63/220 [=======>......................] - ETA: 6:52 - loss: -0.0153 - dice_coef: 0.0153 - acc: 0.3046"
],
[
"## Save the Weights\n#model.save_weights(\"UNet3D1.h5\")\n\n## Dataset for prediction\ntest_gen = DataGen(valid_ids, train_path, image_size=image_size, patch_size=patch_size, batch_size=batch_size, percent_covered=0)\nx, y = test_gen.__getitem__(0)\nprint(x.shape)\n#result = model.predict(x)\n#print(np.amax(result))\n#print(np.count_nonzero(result == 1.0))\n#print(result.shape)\n#result = result > 0.5\n\n#print(np.count_nonzero(result == 1.0))\n#print(result.shape)\n#print(np.where(result[0]==1.0)[0])\n#print(result[0])\n\nn=100\nz=8\n\ndisplay_slice_from_batch(x,n,z)\n\ndisplay_slice_from_batch(y,n,z)\n\ndisplay_slice_from_batch(result,n,z)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbce34d7ff7148f5bbef175cf1f512ec42b6933 | 28,084 | ipynb | Jupyter Notebook | Python/GameOfLife/GameOfLife.ipynb | Pietervanhalem/Pieters-Personal-Repository | c31e3c86b1d42f29876455e8553f350d4d527ee5 | [
"MIT"
] | 2 | 2020-02-26T13:02:44.000Z | 2020-03-06T07:09:10.000Z | Python/GameOfLife/GameOfLife.ipynb | Pietervanhalem/Pieters-Personal-Repository | c31e3c86b1d42f29876455e8553f350d4d527ee5 | [
"MIT"
] | 11 | 2020-03-06T07:17:10.000Z | 2022-02-26T22:32:59.000Z | Python/GameOfLife/GameOfLife.ipynb | Pietervanhalem/Personal-Code-Examples | c31e3c86b1d42f29876455e8553f350d4d527ee5 | [
"MIT"
] | null | null | null | 98.540351 | 1,920 | 0.646382 | [
[
[
"import numpy as np\nfrom IPython.display import clear_output\nimport datetime, time\n\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"class GameOfLife:\n def __init__(self, *args, **kwargs):\n \n # If input are interger create random grid\n if isinstance(args[0], int):\n n = m = args[0]\n if len(args) > 1 and isinstance(args[1], int):\n m = args[1]\n \n r = np.random.random((n,m ))\n random = (r > 0.75)\n \n grid = random\n \n # If input is grid then use grid\n elif isinstance(args[0], np.ndarray):\n grid = args[0]\n \n # Otherwise the input is invalid\n else:\n raise AssertionError('not a valid input')\n \n \n self.grid = grid\n self.n, self.m = grid.shape\n\n def step(self):\n n, m = self.n, self.m\n\n large_grid = np.zeros([n + 2, m + 2])\n large_grid[1:-1, 1:-1] = self.grid\n\n count_grid = (\n np.zeros([n, m])\n + large_grid[0:-2, 0:-2]\n + large_grid[2:, 2:]\n + large_grid[0:-2, 2:]\n + large_grid[2:, 0:-2]\n + large_grid[0:-2, 1:-1]\n + large_grid[2:, 1:-1]\n + large_grid[1:-1, 0:-2]\n + large_grid[1:-1, 2:]\n )\n\n new_grid = np.zeros([n, m])\n\n c1 = self.grid == 1\n c2 = count_grid == 2\n new_grid[c1 & c2] = 1\n new_grid[count_grid == 3] = 1\n new_grid[count_grid > 3] = 0\n\n self.grid = new_grid\n\n def plot(self):\n clear_output(wait=True)\n fig = plt.figure(figsize=(10,10))\n plt.imshow(self.grid, interpolation=\"nearest\")\n plt.show()\n\n def run(self, run_time=10, fps=5):\n for i in range(run_time * fps):\n t0 = datetime.datetime.now().timestamp()\n self.step()\n self.plot()\n\n t = datetime.datetime.now().timestamp() - t0\n wait_time = 1 / fps - t\n if wait_time > 0:\n time.sleep(wait_time)\n",
"_____no_output_____"
],
[
"g = GameOfLife(50)\ng.run()",
"_____no_output_____"
],
[
"g = GameOfLife(50, 40)\ng.run()",
"_____no_output_____"
],
[
"Static = np.zeros((6, 21))\n\nStatic[2:4, 1:3] = 1\n\nStatic[1:4, 5:9] = [\n [0, 1, 1, 0],\n [1, 0, 0, 1],\n [0, 1, 1, 0]\n]\nStatic[1:5, 11:15] = [\n [0, 1, 1, 0],\n [1, 0, 0, 1],\n [0, 1, 0, 1],\n [0, 0, 1, 0]\n]\n\nStatic[1:4, 17:20] = [\n [1, 1, 0],\n [1, 0, 1],\n [0, 1, 0]\n]\n\ng = GameOfLife(Static)\ng.run(run_time=1)",
"_____no_output_____"
],
[
"gun = np.zeros([11, 38]) \n\ngun[5][1] = gun[5][2] = 1\ngun[6][1] = gun[6][2] = 1\n\ngun[3][13] = gun[3][14] = 1\ngun[4][12] = gun[4][16] = 1\ngun[5][11] = gun[5][17] = 1\ngun[6][11] = gun[6][15] = gun[6][17] = gun[6][18] = 1\ngun[7][11] = gun[7][17] = 1\ngun[8][12] = gun[8][16] = 1\ngun[9][13] = gun[9][14] = 1\n\ngun[1][25] = 1\ngun[2][23] = gun[2][25] = 1\ngun[3][21] = gun[3][22] = 1\ngun[4][21] = gun[4][22] = 1\ngun[5][21] = gun[5][22] = 1\ngun[6][23] = gun[6][25] = 1\ngun[7][25] = 255\n\ngun[3][35] = gun[3][36] = 1\ngun[4][35] = gun[4][36] = 1\n\ng = GameOfLife(gun)\ng.run()",
"_____no_output_____"
],
[
"Pulsar = np.zeros((17, 17))\nPulsar[2, 4:7] = 1\nPulsar[4:7, 7] = 1\nPulsar += Pulsar.T\nPulsar += Pulsar[:, ::-1]\nPulsar += Pulsar[::-1, :]\n\ng = GameOfLife(Pulsar)\ng.run()",
"_____no_output_____"
],
[
"img_array = plt.imread('Conways_game_of_life_breeder_animation.gif') \ngrid = np.array(img_array[:,:,0])\ngrid[grid==0] = 1\ngrid[grid==255] = 0\n\ng = GameOfLife(grid)\ng.run(fps = 100)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbce6f53fe12b59905af2ef50c5452897e36d51 | 224,748 | ipynb | Jupyter Notebook | Platforms/Kaggle/Courses/Data_Visualization/1.Hello_Seaborn/exercise-hello-seaborn.ipynb | metehkaya/ML-Archive | c656c4502477ca0ae516d125c58f84feb79079fa | [
"MIT"
] | null | null | null | Platforms/Kaggle/Courses/Data_Visualization/1.Hello_Seaborn/exercise-hello-seaborn.ipynb | metehkaya/ML-Archive | c656c4502477ca0ae516d125c58f84feb79079fa | [
"MIT"
] | null | null | null | Platforms/Kaggle/Courses/Data_Visualization/1.Hello_Seaborn/exercise-hello-seaborn.ipynb | metehkaya/ML-Archive | c656c4502477ca0ae516d125c58f84feb79079fa | [
"MIT"
] | null | null | null | 224,748 | 224,748 | 0.9421 | [
[
[
"**This notebook is an exercise in the [Data Visualization](https://www.kaggle.com/learn/data-visualization) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/hello-seaborn).**\n\n---\n",
"_____no_output_____"
],
[
"In this exercise, you will write your first lines of code and learn how to use the coding environment for the micro-course!\n\n## Setup\n\nFirst, you'll learn how to run code, and we'll start with the code cell below. (Remember that a **code cell** in a notebook is just a gray box containing code that we'd like to run.)\n- Begin by clicking inside the code cell. \n- Click on the blue triangle (in the shape of a \"Play button\") that appears to the left of the code cell.\n- If your code was run sucessfully, you will see `Setup Complete` as output below the cell.\n\n",
"_____no_output_____"
],
[
"The code cell below imports and configures the Python libraries that you need to complete the exercise.\n\nClick on the cell and run it.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\npd.plotting.register_matplotlib_converters()\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\n\n# Set up code checking\nimport os\nif not os.path.exists(\"../input/fifa.csv\"):\n os.symlink(\"../input/data-for-datavis/fifa.csv\", \"../input/fifa.csv\") \nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.data_viz_to_coder.ex1 import *\nprint(\"Setup Complete\")",
"Setup Complete\n"
]
],
[
[
"The code you just ran sets up the system to give you feedback on your work. You'll learn more about the feedback system in the next step.\n\n## Step 1: Explore the feedback system\n\nEach exercise lets you test your new skills with a real-world dataset. Along the way, you'll receive feedback on your work. You'll see if your answer is right, get customized hints, and see the official solution (_if you'd like to take a look!_).\n\nTo explore the feedback system, we'll start with a simple example of a coding problem. Follow the following steps in order:\n1. Run the code cell below without making any edits. It will show the following output: \n> <font color='#ccaa33'>Check:</font> When you've updated the starter code, `check()` will tell you whether your code is correct. You need to update the code that creates variable `one`\n\n This means you need to change the code to set the variable `one` to something other than the blank provided below (`____`).\n\n\n2. Replace the underline with a `2`, so that the line of code appears as `one = 2`. Then, run the code cell. This should return the following output:\n> <font color='#cc3333'>Incorrect:</font> Incorrect value for `one`: `2`\n\n This means we still have the wrong answer to the question.\n\n\n3. Now, change the `2` to `1`, so that the line of code appears as `one = 1`. Then, run the code cell. The answer should be marked as <font color='#33cc33'>Correct</font>. You have now completed this problem!",
"_____no_output_____"
]
],
[
[
"# Fill in the line below\none = 1\n\n# Check your answer\nstep_1.check()",
"_____no_output_____"
]
],
[
[
"In this exercise, you were responsible for filling in the line of code that sets the value of variable `one`. **Don't edit the code that checks your answer.** You'll need to run the lines of code like `step_1.check()` and `step_2.check()` just as they are provided.\n\nThis problem was relatively straightforward, but for more difficult problems, you may like to receive a hint or view the official solution. Run the code cell below now to receive both for this problem.",
"_____no_output_____"
]
],
[
[
"step_1.hint()\nstep_1.solution()",
"_____no_output_____"
]
],
[
[
"## Step 2: Load the data\n\nYou are ready to get started with some data visualization! You'll begin by loading the dataset from the previous tutorial. \n\nThe code you need is already provided in the cell below. Just run that cell. If it shows <font color='#33cc33'>Correct</font> result, you're ready to move on!",
"_____no_output_____"
]
],
[
[
"# Path of the file to read\nfifa_filepath = \"../input/fifa.csv\"\n\n# Read the file into a variable fifa_data\nfifa_data = pd.read_csv(fifa_filepath, index_col=\"Date\", parse_dates=True)\n\n# Check your answer\nstep_2.check()",
"_____no_output_____"
]
],
[
[
"Next, recall the difference between comments and executable code:\n- **Comments** are preceded by a pound sign (`#`) and contain text that appear faded and italicized. They are completely ignored by the computer when the code is run.\n- **Executable code** is code that is run by the computer.\n\nIn the code cell below, every line is a comment:\n```python\n# Uncomment the line below to receive a hint\n#step_2.hint()\n#step_2.solution()\n```\n\nIf you run the code cell below without making any changes, it won't return any output. Try this now!",
"_____no_output_____"
]
],
[
[
"# Uncomment the line below to receive a hint\nstep_2.hint()\n# Uncomment the line below to see the solution\nstep_2.solution()",
"_____no_output_____"
]
],
[
[
"Next, remove the pound sign before `step_2.hint()` so that the code cell above appears as follows:\n```python\n# Uncomment the line below to receive a hint\nstep_2.hint()\n#step_2.solution()\n```\nWhen we remove the pound sign before a line of code, we say we **uncomment** the line. This turns the comment into a line of executable code that is run by the computer. Run the code cell now, which should return the <font color='#3366cc'>Hint</font> as output.\n\nFinally, uncomment the line to see the solution, so the code cell appears as follows:\n```python\n# Uncomment the line below to receive a hint\nstep_2.hint()\nstep_2.solution()\n```\nThen, run the code cell. You should receive both a <font color='#3366cc'>Hint</font> and the <font color='#33cc99'>Solution</font>.\n\nIf at any point you're having trouble with coming up with the correct answer to a problem, you are welcome to obtain either a hint or the solution before completing the cell. (So, you don't need to get a <font color='#33cc33'>Correct</font> result before running the code that gives you a <font color='#3366cc'>Hint</font> or the <font color='#33cc99'>Solution</font>.)\n\n## Step 3: Plot the data\n\nNow that the data is loaded into the notebook, you're ready to visualize it! \n\nRun the next code cell without changes to make a line chart. The code may not make sense yet - you'll learn all about it in the next tutorial!",
"_____no_output_____"
]
],
[
[
"# Set the width and height of the figure\nplt.figure(figsize=(16,6))\n\n# Line chart showing how FIFA rankings evolved over time\nsns.lineplot(data=fifa_data)\n\n# Check your answer\nstep_3.a.check()",
"_____no_output_____"
]
],
[
[
"Some questions won't require you to write any code. Instead, you'll interpret visualizations.\n\nAs an example, consider the question: Considering only the years represented in the dataset, which countries spent at least 5 consecutive years in the #1 ranked spot?\n\nTo receive a <font color='#3366cc'>Hint</font>, uncomment the line below, and run the code cell.",
"_____no_output_____"
]
],
[
[
"step_3.b.hint()",
"_____no_output_____"
]
],
[
[
"Once you have an answer, check the <font color='#33cc99'>Solution</font> to get credit for completing the problem and to ensure your interpretation is right.",
"_____no_output_____"
]
],
[
[
"# Check your answer (Run this code cell to receive credit!)\nstep_3.b.solution()",
"_____no_output_____"
]
],
[
[
"Congratulations - you have completed your first coding exercise!\n\n# Keep going\n\nMove on to learn to create your own **[line charts](https://www.kaggle.com/alexisbcook/line-charts)** with a new dataset.",
"_____no_output_____"
],
[
"---\n\n\n\n\n*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161291) to chat with other Learners.*",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecbcee71cb67c078a0bbb73a3a2403546af1c02b | 12,075 | ipynb | Jupyter Notebook | Module 7/Ex 7.2 Baselined REINFORCE.ipynb | Aboubacar2012/DAT257x | c8a24219161bdecb4c210919fd48cbd64d33c029 | [
"Unlicense"
] | null | null | null | Module 7/Ex 7.2 Baselined REINFORCE.ipynb | Aboubacar2012/DAT257x | c8a24219161bdecb4c210919fd48cbd64d33c029 | [
"Unlicense"
] | null | null | null | Module 7/Ex 7.2 Baselined REINFORCE.ipynb | Aboubacar2012/DAT257x | c8a24219161bdecb4c210919fd48cbd64d33c029 | [
"Unlicense"
] | null | null | null | 42.073171 | 305 | 0.606708 | [
[
[
"# DAT257x: Reinforcement Learning Explained\n\n## Lab 7: Policy Gradient\n\n### Exercise 7.2: Baselined REINFORCE",
"_____no_output_____"
],
[
"This assignment features the Cartpole domain which tasks the agent with balancing a pole affixed to a movable cart. The agent employs two discrete actions which apply force to the cart. Episodes provide +1 reward for each step in which the pole has not fallen over, up to a maximum of 200 steps. \n\n## Objectives\n* Implement a baselined version of REINFORCE\n* Define a Value function network $V_\\phi(s)$\n* Build the trainer and associated loss function\n* Train REINFORCE using baselined rewards $\\nabla_\\theta J(\\theta)=\\sum_{t=0}^T \\nabla_\\theta \\log \\pi_\\theta(a_t|s_t) (R - V_\\phi(s_t))$\n\n## Success Criterion\nThe main difference in a correct implementation of baselined REINFORCE will be a reduction in the variance. If correct, the baselined REINFORCE will achieve successful trials (of 200 steps) with a variance of less than 1000 on average, whereas the original REINFORCE was close to 2500.\n\nBaselined REINFORCE should still solve the Cartpole domain about as often as before, but now with a lower variance.",
"_____no_output_____"
]
],
[
[
"import cntk as C\nfrom cntk.layers import Sequential, Dense\nfrom cntk.logging import ProgressPrinter\nimport numpy as np\nimport gym\n\nimport sys\nif \"../\" not in sys.path:\n sys.path.append(\"../\") \n \nfrom lib.running_variance import RunningVariance\nfrom itertools import count\nimport sys\n\nfrom lib import plotting\n\nnp.random.seed(123)\nC.cntk_py.set_fixed_random_seed(123)\nC.cntk_py.force_deterministic_algorithms()\n\nenv = gym.make('CartPole-v0')\n\nstate_dim = env.observation_space.shape[0] # Dimension of state space\naction_count = env.action_space.n # Number of actions\nhidden_size = 128 # Number of hidden units\nupdate_frequency = 20\n\n# The policy network maps an observation to a probability of taking action 0 or 1.\nobservations = C.sequence.input_variable(state_dim, np.float32, name=\"obs\")\nW1 = C.parameter(shape=(state_dim, hidden_size), init=C.glorot_uniform(), name=\"W1\")\nb1 = C.parameter(shape=hidden_size, name=\"b1\")\nlayer1 = C.relu(C.times(observations, W1) + b1)\nW2 = C.parameter(shape=(hidden_size, action_count), init=C.glorot_uniform(), name=\"W2\")\nb2 = C.parameter(shape=action_count, name=\"b2\")\nlayer2 = C.times(layer1, W2) + b2\noutput = C.sigmoid(layer2, name=\"output\")\n\n# Label will tell the network what action it should have taken.\nlabel = C.sequence.input_variable(1, np.float32, name=\"label\")\n# return_weight is a scalar containing the discounted return. It will scale the PG loss.\nreturn_weight = C.sequence.input_variable(1, np.float32, name=\"weight\")\n# PG Loss \nloss = -C.reduce_mean(C.log(C.square(label - output) + 1e-4) * return_weight, axis=0, name='loss')\n\n# Build the optimizer\nlr_schedule = C.learning_rate_schedule(lr=0.1, unit=C.UnitType.sample) \nm_schedule = C.momentum_schedule(0.99)\nvm_schedule = C.momentum_schedule(0.999)\noptimizer = C.adam([W1, W2], lr_schedule, momentum=m_schedule, variance_momentum=vm_schedule)\n\n# Create a buffer to manually accumulate gradients\ngradBuffer = dict((var.name, np.zeros(shape=var.shape)) for var in loss.parameters if var.name in ['W1', 'W2', 'b1', 'b2'])\n\ndef discount_rewards(r, gamma=0.999):\n \"\"\"Take 1D float array of rewards and compute discounted reward \"\"\"\n discounted_r = np.zeros_like(r)\n running_add = 0\n for t in reversed(range(0, r.size)):\n running_add = running_add * gamma + r[t]\n discounted_r[t] = running_add\n return discounted_r",
"_____no_output_____"
]
],
[
[
"Now we need to define a critic network which will estimate the value function $V_\\phi(s_t)$. You can likely reuse code from the policy network or look at other CNTK network examples.\n\nAdditionally, you'll need to define a squared error loss function, optimizer, and trainer for this newtork.",
"_____no_output_____"
]
],
[
[
"# TODO 1: Define the critic network that learns the value function V(s).\n# Hint: you can use a similar architecture as the policy network, except\n# the output should just be a scalar value estimate. Also, since the value\n# function learning is more standard, it's possible to use stanard CNTK\n# wrappers such as Trainer(). \n\ncritic_input = None # Modify this\ncritic_output = None # Modify this\n\ncritic = Sequential([\n Dense(critic_input, activation=C.relu, init=C.glorot_uniform()),\n Dense(critic_output, activation=None, init=C.glorot_uniform(scale=.01))\n])(observations)\n\n# TODO 2: Define target and Squared Error Loss Function, adam optimizier, and trainer for the Critic.\ncritic_target = None # Modify this\ncritic_loss = None # Modify this\ncritic_lr_schedule = C.learning_rate_schedule(lr=0.1, unit=C.UnitType.sample) \ncritic_optimizer = C.adam(critic.parameters, critic_lr_schedule, momentum=m_schedule, variance_momentum=vm_schedule)\ncritic_trainer = C.Trainer(critic, (critic_loss, None), critic_optimizer)",
"_____no_output_____"
]
],
[
[
"The main training loop is nearly identical except you'll need to train the critic to minimize the difference between the predicted and observed return at each step. Additionally, you'll need to update the Reinforce update to subtract the baseline.",
"_____no_output_____"
]
],
[
[
"running_variance = RunningVariance()\nreward_sum = 0\n\nmax_number_of_episodes = 500\n\nstats = plotting.EpisodeStats(\n episode_lengths=np.zeros(max_number_of_episodes),\n episode_rewards=np.zeros(max_number_of_episodes),\n episode_running_variance=np.zeros(max_number_of_episodes))\n\n\nfor episode_number in range(max_number_of_episodes):\n states, rewards, labels = [],[],[]\n done = False\n observation = env.reset()\n t = 1\n while not done:\n state = np.reshape(observation, [1, state_dim]).astype(np.float32)\n states.append(state)\n\n # Run the policy network and get an action to take.\n prob = output.eval(arguments={observations: state})[0][0][0]\n # Sample from the bernoulli output distribution to get a discrete action\n action = 1 if np.random.uniform() < prob else 0\n\n # Pseudo labels to encourage the network to increase\n # the probability of the chosen action. This label will be used\n # in the loss function above.\n y = 1 if action == 0 else 0 # a \"fake label\"\n labels.append(y)\n\n # step the environment and get new measurements\n observation, reward, done, _ = env.step(action)\n reward_sum += float(reward)\n\n # Record reward (has to be done after we call step() to get reward for previous action)\n rewards.append(float(reward))\n \n stats.episode_rewards[episode_number] += reward\n stats.episode_lengths[episode_number] = t\n t += 1\n\n # Stack together all inputs, hidden states, action gradients, and rewards for this episode\n epx = np.vstack(states)\n epl = np.vstack(labels).astype(np.float32)\n epr = np.vstack(rewards).astype(np.float32)\n\n # Compute the discounted reward backwards through time.\n discounted_epr = discount_rewards(epr)\n\n # TODO 3\n # Train the critic to predict the discounted reward from the observation\n # - use train_minibatch() function of the critic_trainer. \n # - observations is epx which are the states, and critic_target is discounted_epr\n # - then predict the discounted reward using the eval() function of the critic network and assign it to baseline\n critic_trainer.train_minibatch() # modify this\n baseline = None # modify this\n \n # Compute the baselined returns: A = R - b(s). Weight the gradients by this value.\n baselined_returns = discounted_epr - baseline\n \n # Keep a running estimate over the variance of the discounted rewards (in this case baselined_returns)\n for r in baselined_returns:\n running_variance.add(r[0])\n\n # Forward pass\n arguments = {observations: epx, label: epl, return_weight: baselined_returns}\n state, outputs_map = loss.forward(arguments, outputs=loss.outputs,\n keep_for_backward=loss.outputs)\n\n # Backward pass\n root_gradients = {v: np.ones_like(o) for v, o in outputs_map.items()}\n vargrads_map = loss.backward(state, root_gradients, variables=set([W1, W2]))\n\n for var, grad in vargrads_map.items():\n gradBuffer[var.name] += grad\n\n # Only update every 20 episodes to reduce noise\n if episode_number % update_frequency == 0:\n grads = {W1: gradBuffer['W1'].astype(np.float32),\n W2: gradBuffer['W2'].astype(np.float32)}\n updated = optimizer.update(grads, update_frequency)\n\n # reset the gradBuffer\n gradBuffer = dict((var.name, np.zeros(shape=var.shape))\n for var in loss.parameters if var.name in ['W1', 'W2', 'b1', 'b2'])\n\n print('Episode: %d/%d. Average reward for episode %f. 
Variance %f' % (episode_number, max_number_of_episodes, reward_sum / update_frequency, running_variance.get_variance()))\n \n sys.stdout.flush()\n \n reward_sum = 0\n \n stats.episode_running_variance[episode_number] = running_variance.get_variance()",
"_____no_output_____"
],
[
"plotting.plot_pgresults(stats)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecbd1e703032e38cece9703cb90bb2082ef9edb4 | 22,654 | ipynb | Jupyter Notebook | tutorials/tutorial05_networks.ipynb | kristery/flow | 2638f8137541424af8de23159260d73c571f2e04 | [
"MIT"
] | 1 | 2021-06-17T03:25:13.000Z | 2021-06-17T03:25:13.000Z | tutorials/tutorial05_networks.ipynb | kristery/flow | 2638f8137541424af8de23159260d73c571f2e04 | [
"MIT"
] | 1 | 2019-12-05T09:04:05.000Z | 2019-12-05T21:23:49.000Z | tutorials/tutorial05_networks.ipynb | kristery/flow | 2638f8137541424af8de23159260d73c571f2e04 | [
"MIT"
] | 3 | 2019-12-07T11:36:21.000Z | 2020-01-04T16:29:57.000Z | 45.765657 | 522 | 0.590227 | [
[
[
"# Tutorial 05: Creating Custom Networks\n\nThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. \n\nIn this exercise, we will recreate the ring road network, seen in the figure below.\n\n<img src=\"img/ring_network.png\">\n\nIn order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.\n\nWe begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.",
"_____no_output_____"
]
],
[
[
"# import Flow's base network class\nfrom flow.networks import Network\n\n# define the network class, and inherit properties from the base network class\nclass myNetwork(Network):\n pass",
"_____no_output_____"
]
],
[
[
"The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes.\n\n## 1. Specifying Traffic Network Features\n\nOne of the core responsibilities of the network class is to to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: \n\n* **specify_nodes**: specifies the attributes of nodes in the network\n* **specify_edges**: specifies the attributes of edges containing pairs on nodes in the network\n* **specify_routes**: specifies the routes vehicles can take starting from any edge\n\nAdditionally, the following optional functions may also be defined:\n\n* **specify_types**: specifies the attributes of various edge types (if any exist)\n* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edges/lane pairs are connected. If no connections are specified, sumo generates default connections.\n\nAll of the functions mentioned above paragraph take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.\n\nThis tutorial will cover the first three methods. For examples of `specify_types` and `specify_routes`, refer to source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively.",
"_____no_output_____"
],
[
"### 1.1 ADDITIONAL_NET_PARAMS\n\nThe features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, for this problem, we define an `ADDITIONAL_NET_PARAMS` variable of the form:",
"_____no_output_____"
]
],
[
[
"ADDITIONAL_NET_PARAMS = {\n \"radius\": 40,\n \"num_lanes\": 1,\n \"speed_limit\": 30,\n}",
"_____no_output_____"
]
],
[
[
"All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters of the network. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the \"radius\" parameter, we simply type:\n\n radius = net_params.additional_params[\"radius\"]\n\n### 1.2 specify_nodes\n\nThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.4). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: \n* **id**: name of the node\n* **x**: x coordinate of the node\n* **y**: y coordinate of the node\n* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptions#Node_Descriptions\n\nRefering to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (0,r) of the ring. This is done as follows:",
"_____no_output_____"
]
],
[
[
"class myNetwork(myNetwork): # update my network class\n\n def specify_nodes(self, net_params):\n # one of the elements net_params will need is a \"radius\" value\n r = net_params.additional_params[\"radius\"]\n\n # specify the name and position (x,y) of each node\n nodes = [{\"id\": \"bottom\", \"x\": 0, \"y\": -r},\n {\"id\": \"right\", \"x\": r, \"y\": 0},\n {\"id\": \"top\", \"x\": 0, \"y\": r},\n {\"id\": \"left\", \"x\": -r, \"y\": 0}]\n\n return nodes",
"_____no_output_____"
]
],
[
[
"### 1.3 specify_edges\n\nOnce the nodes are specified, the nodes are linked together using directed edges. This done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:\n\n* **id**: name of the edge\n* **from**: name of the node the edge starts from\n* **to**: the name of the node the edges ends at\n* **length**: length of the edge\n* **numLanes**: the number of lanes on the edge\n* **speed**: the speed limit for vehicles on the edge\n* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptions#Edge_Descriptions.\n\nOne useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. \n\nWe now create four arcs connected the nodes specified in section 1.2, with the direction of the edges directed counter-clockwise:",
"_____no_output_____"
]
],
[
[
"# some mathematical operations that may be used\nfrom numpy import pi, sin, cos, linspace\n\nclass myNetwork(myNetwork): # update my network class\n\n def specify_edges(self, net_params):\n r = net_params.additional_params[\"radius\"]\n edgelen = r * pi / 2\n # this will let us control the number of lanes in the network\n lanes = net_params.additional_params[\"num_lanes\"]\n # speed limit of vehicles in the network\n speed_limit = net_params.additional_params[\"speed_limit\"]\n\n edges = [\n {\n \"id\": \"edge0\",\n \"numLanes\": lanes,\n \"speed\": speed_limit, \n \"from\": \"bottom\", \n \"to\": \"right\", \n \"length\": edgelen,\n \"shape\": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]\n },\n {\n \"id\": \"edge1\",\n \"numLanes\": lanes, \n \"speed\": speed_limit,\n \"from\": \"right\",\n \"to\": \"top\",\n \"length\": edgelen,\n \"shape\": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]\n },\n {\n \"id\": \"edge2\",\n \"numLanes\": lanes,\n \"speed\": speed_limit,\n \"from\": \"top\",\n \"to\": \"left\", \n \"length\": edgelen,\n \"shape\": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},\n {\n \"id\": \"edge3\", \n \"numLanes\": lanes, \n \"speed\": speed_limit,\n \"from\": \"left\", \n \"to\": \"bottom\", \n \"length\": edgelen,\n \"shape\": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]\n }\n ]\n\n return edges",
"_____no_output_____"
]
],
[
[
"### 1.4 specify_routes\n\nThe routes are the sequence of edges vehicles traverse given their current position. For example, a vehicle beginning in the edge titled \"edge0\" (see section 1.3) must traverse, in sequence, the edges \"edge0\", \"edge1\", \"edge2\", and \"edge3\", before restarting its path.\n\nIn order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:\n\n**1. Single route per edge:**\n\nIn this case of deterministic routes (as is the case in the ring road network), the routes can be specified as dictionary where the key element represents the starting edge and the element is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that the edges must be connected for the route to be valid.\n\nFor this network, the available routes under this setting can be defined as follows:",
"_____no_output_____"
]
],
[
[
"class myNetwork(myNetwork): # update my network class\n\n def specify_routes(self, net_params):\n rts = {\"edge0\": [\"edge0\", \"edge1\", \"edge2\", \"edge3\"],\n \"edge1\": [\"edge1\", \"edge2\", \"edge3\", \"edge0\"],\n \"edge2\": [\"edge2\", \"edge3\", \"edge0\", \"edge1\"],\n \"edge3\": [\"edge3\", \"edge0\", \"edge1\", \"edge2\"]}\n\n return rts",
"_____no_output_____"
]
],
[
[
"**2. Multiple routes per edge:**\n\nAlternatively, if the routes are meant to be stochastic, each element can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the sum of probability values for each dictionary key must sum up to one.\n\nFor example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:",
"_____no_output_____"
]
],
[
[
"class myNetwork(myNetwork): # update my network class\n\n def specify_routes(self, net_params):\n rts = {\"edge0\": [([\"edge0\", \"edge1\", \"edge2\", \"edge3\"], 1)],\n \"edge1\": [([\"edge1\", \"edge2\", \"edge3\", \"edge0\"], 1)],\n \"edge2\": [([\"edge2\", \"edge3\", \"edge0\", \"edge1\"], 1)],\n \"edge3\": [([\"edge3\", \"edge0\", \"edge1\", \"edge2\"], 1)]}\n\n return rts",
"_____no_output_____"
]
],
[
[
"**3. Per-vehicle routes:**\n\nFinally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding a element into the dictionary whose key is the name of the vehicle and whose content is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.\n\nAs an example, assume we have a vehicle named \"human_0\" in the network (as we will in the later sections), and it is initialized in the edge names \"edge_0\". Then, the route for this edge specifically can be added through the `specify_routes` method as follows:",
"_____no_output_____"
]
],
[
[
"class myNetwork(myNetwork): # update my network class\n\n def specify_routes(self, net_params):\n rts = {\"edge0\": [\"edge0\", \"edge1\", \"edge2\", \"edge3\"],\n \"edge1\": [\"edge1\", \"edge2\", \"edge3\", \"edge0\"],\n \"edge2\": [\"edge2\", \"edge3\", \"edge0\", \"edge1\"],\n \"edge3\": [\"edge3\", \"edge0\", \"edge1\", \"edge2\"],\n \"human_0\": [\"edge0\", \"edge1\", \"edge2\", \"edge3\"]}\n\n return rts",
"_____no_output_____"
]
],
[
[
"In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e.\n\n >>> print(network.rts)\n\n {\n \"edge0\": [\n ([\"edge0\", \"edge1\", \"edge2\", \"edge3\"], 1)\n ],\n \"edge1\": [\n ([\"edge1\", \"edge2\", \"edge3\", \"edge0\"], 1)\n ],\n \"edge2\": [\n ([\"edge2\", \"edge3\", \"edge0\", \"edge1\"], 1)\n ],\n \"edge3\": [\n ([\"edge3\", \"edge0\", \"edge1\", \"edge2\"], 1)\n ],\n \"human_0\": [\n ([\"edge0\", \"edge1\", \"edge2\", \"edge3\"], 1)\n ]\n }\n\nwhere the vehicle-specific route is only included in the third case.",
"_____no_output_____"
],
[
"## 2. Specifying Auxiliary Network Features\n\nOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:\n\n* **specify_edge_starts**: defines edge starts for road sections with respect to some global reference\n\nOther optional abstract methods within the base network class include:\n\n* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite length connections between road section\n* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.\n* **gen_custom_start_pos**: used to generate a user defined set of starting positions for vehicles in the network\n\n### 2.2 Specifying the Starting Position of Edges\n\nAll of the above functions starting with \"specify\" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].\n\nThe data specified in `specify_edge_starts` is used to provide a \"global\" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` allows us to do the same to junctions/internal links when they are also located within the network (this is not the case for the ring road).\n\nIn section 1, we created a network with 4 edges named: \"edge0\", \"edge1\", \"edge2\", and \"edge3\". We assume that the edge titled \"edge0\" is the origin, and accordingly the position of the edge start of \"edge0\" is 0. The next edge, \"edge1\", begins a quarter of the length of the network from the starting point of edge \"edge0\", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting position of the edges as follows:",
"_____no_output_____"
]
],
[
[
"# import some math functions we may use\nfrom numpy import pi\n\nclass myNetwork(myNetwork): # update my network class\n\n def specify_edge_starts(self):\n r = self.net_params.additional_params[\"radius\"]\n\n edgestarts = [(\"edge0\", 0),\n (\"edge1\", r * 1/2 * pi),\n (\"edge2\", r * pi),\n (\"edge3\", r * 3/2 * pi)]\n\n return edgestarts",
"_____no_output_____"
]
],
[
[
"## 3. Testing the New Network\nIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `exercise01_sumo.ipynb`.\n\nWe begin by defining some of the components needed to run a sumo experiment.",
"_____no_output_____"
]
],
[
[
"from flow.core.params import VehicleParams\nfrom flow.controllers import IDMController, ContinuousRouter\nfrom flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams\n\nvehicles = VehicleParams()\nvehicles.add(veh_id=\"human\",\n acceleration_controller=(IDMController, {}),\n routing_controller=(ContinuousRouter, {}),\n num_vehicles=22)\n\nsumo_params = SumoParams(sim_step=0.1, render=True)\n\ninitial_config = InitialConfig(bunching=40)",
"_____no_output_____"
]
],
[
[
"For visualizing purposes, we use the environment `AccelEnv`, as it works on any given network.",
"_____no_output_____"
]
],
[
[
"from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS\n\nenv_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)",
"_____no_output_____"
]
],
[
[
"Next, using the `ADDITIONAL_NET_PARAMS` component see created in section 1.1, we prepare the `NetParams` component.",
"_____no_output_____"
]
],
[
[
"additional_net_params = ADDITIONAL_NET_PARAMS.copy()\nnet_params = NetParams(additional_params=additional_net_params)",
"_____no_output_____"
]
],
[
[
"We are ready now to create and run our network. Using the newly defined network classes, we create a network object and feed it into a `Experiment` simulation. Finally, we are able to visually confirm that are network has been properly generated.",
"_____no_output_____"
]
],
[
[
"from flow.core.experiment import Experiment\n\nnetwork = myNetwork( # we use the newly defined network class\n name=\"test_network\",\n vehicles=vehicles,\n net_params=net_params,\n initial_config=initial_config\n)\n\n# AccelEnv allows us to test any newly generated network quickly\nenv = AccelEnv(env_params, sumo_params, network)\n\nexp = Experiment(env)\n\n# run the sumo simulation for a set number of time steps\n_ = exp.run(1, 1500)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbd233b4e0d4d6aa92f888d50ffda008f295d1f | 17,041 | ipynb | Jupyter Notebook | proto/SQL/playing_with_psycopg.ipynb | pjkundert/wikienergy | ac3a13780bccb001c81d6f8ee27d3f5706cfa77e | [
"MIT"
] | 29 | 2015-01-08T19:20:37.000Z | 2021-04-20T08:25:56.000Z | proto/SQL/playing_with_psycopg.ipynb | pjkundert/wikienergy | ac3a13780bccb001c81d6f8ee27d3f5706cfa77e | [
"MIT"
] | null | null | null | proto/SQL/playing_with_psycopg.ipynb | pjkundert/wikienergy | ac3a13780bccb001c81d6f8ee27d3f5706cfa77e | [
"MIT"
] | 17 | 2015-02-01T18:12:04.000Z | 2020-06-15T14:13:04.000Z | 30.214539 | 463 | 0.390235 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecbd236b13b6b84cac138bd445fbc69c4010a0e2 | 7,602 | ipynb | Jupyter Notebook | DVM Notebook.ipynb | warmachine0609/Decentralized-Voting-Machine | 537df6f40a39014f3192183f24345bf1776b4361 | [
"MIT"
] | 1 | 2018-02-15T18:19:00.000Z | 2018-02-15T18:19:00.000Z | DVM Notebook.ipynb | warmachine0609/Decentralized-Voting-Machine | 537df6f40a39014f3192183f24345bf1776b4361 | [
"MIT"
] | null | null | null | DVM Notebook.ipynb | warmachine0609/Decentralized-Voting-Machine | 537df6f40a39014f3192183f24345bf1776b4361 | [
"MIT"
] | null | null | null | 32.211864 | 374 | 0.597343 | [
[
[
"# Decentralized Voting Machine\n\nFor years, the powerful and the rich have influenced (or) sometimes rigged the election. <br> Using a decentrialzed application instead a generic client-server architecture, our project aims at a genuine choice of leadership. <br> This type of arrangement makes it almost impossible for anyone to tamper or hack. It is with a great leader that a nation can prosper.\n\n## Problem Statement:\n\nElections conducted using conventional methods have a threefold problem\n\n\n## 1. Prone to tampering/hacking \n\n<img src=\"https://cdn.cnn.com/cnnnext/dam/assets/161217110439-inside-the-russian-hack-on-us-election-00034024.jpg\">\n\nEven India has become victim to such attacks\n\n<img src=\"files/images/hack2.PNG\">\n\n## 2. Wastage of resources & money \n\n\n<img src=\"files/images/hack3.PNG\">\n\n## 3. Cost of Public Holiday & Convenience\n\n\n<img src=\"http://nation.lk/online/wp-content/uploads/2015/07/Public-Holidays-Around-the-World_IPS.jpg\">\n\n\n## So how can we solve it?\n\n## We are going to combine two powerful technologies - Blockchain + ML\n\n## Module 1 - UIDAI Validation\n\nEvery user has to Login using their unique Aadhar ID and OTP. From these details such as \"Age\", \"Gender\", \"State\", \"City\" can be parsed.\n\n## Module 2 - Voting Smart Contract\n\n\nIt further has four steps - \n\nStep 1 - Setting up Environment <br>\nStep 2 - Creating Voting Smart Contract <br>\nStep 3 - Interacting with the Contract via the Nodejs Console <br>\nStep 4 - Creating GUI interface <br>\n\n\n",
"_____no_output_____"
],
[
"## Step 1 - Setting up Environment\n\n\n``npm install ethereumjs-testrpc web3``\n\ntestrpc creates 10 test accounts to play with automatically. These accounts come preloaded with 100 (fake) ethers.",
"_____no_output_____"
],
[
"## Step 2 - Creating Voting Smart Contract\n\n- Ethereum's solidity programming language is used to write our contract \n- Our contract (think of contract as a class) is called Voting with a constructor which initializes an array of candidates\n- 2 functions, one to return the total votes a candidate has received & another to increment vote count for a candidate.\n- Deployed contracts are immutable. If any changes, we just make a new one. \n\nInstall the dependencies on node console\n\n``npm install solc``\n\nAfter writing our smart contract, we'll use Web3js to deploy our app and interact with it\n\n```\nnaman:~/DVM$ node\n> Web3 = require('web3')\n> web3 = new Web3(new Web3.providers.HttpProvider(\"http://localhost:8545\"));\n```\n\nThen ensure Web3js is initalized and can query all accounts on the blockchain\n\n```\n> web3.eth.accounts\n```\n\nLastly, compile the contract by loading the code from Voting.sol in to a string variable and compiling it\n\n```\n> code = fs.readFileSync('Voting.sol').toString()\n> solc = require('solc')\n> compiledCode = solc.compile(code)\n```\n\nDeploy the contract!\n\n- dCode.contracts[‘:Voting’].bytecode: bytecode which will be deployed to the blockchain.\n- compiledCode.contracts[‘:Voting’].interface: interface of the contract (called abi) which tells the contract user what methods are available in the contract. \n\n```\n> abiDefinition = JSON.parse(compiledCode.contracts[':Voting'].interface)\n> VotingContract = web3.eth.contract(abiDefinition)\n> byteCode = compiledCode.contracts[':Voting'].bytecode\n> deployedContract = VotingContract.new(['Rahul Gandhi','Narendra Modi','Nitish Kumar'],{data: byteCode, from: web3.eth.accounts[0], gas: 4700000})\n> deployedContract.address\n> contractInstance = VotingContract.at(deployedContract.address)\n```\n\n- deployedContract.address. When you have to interact with your contract, you need this deployed address and abi definition we talked about earlier.",
"_____no_output_____"
],
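[
"To make the contract's interface concrete before reading any Solidity, here is its behaviour mimicked as a plain Python class. This sketch is purely illustrative - it is not how the contract is actually written and it has none of the blockchain properties discussed above; the candidate names are just the ones used in the deployment example.\n\n```python\n# Illustrative only: the Voting contract's interface as an ordinary Python class.\nclass Voting:\n    def __init__(self, candidate_names):\n        # constructor initializes the candidates, each starting with zero votes\n        self.votes_received = {name: 0 for name in candidate_names}\n\n    def total_votes_for(self, candidate):\n        # read-only call: return the total votes a candidate has received\n        return self.votes_received[candidate]\n\n    def vote_for_candidate(self, candidate):\n        # state-changing call: increment the vote count for a candidate\n        self.votes_received[candidate] += 1\n\nballot = Voting(['Rahul Gandhi', 'Narendra Modi', 'Nitish Kumar'])\nballot.vote_for_candidate('Rahul Gandhi')\nprint(ballot.total_votes_for('Rahul Gandhi'))   # 1\n```",
"_____no_output_____"
],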
[
"## Step 3 - Interacting with the Contract via the Nodejs Console\n```\n> contractInstance.totalVotesFor.call('Rahul Gandhi')\n{ [String: '0'] s: 1, e: 0, c: [ 0 ] }\n> contractInstance.voteForCandidate('Rahul Gandhi', {from: web3.eth.accounts[0]})\n'0xdedc7ae544c3dde74ab5a0b07422c5a51b5240603d31074f5b75c0ebc786bf53'\n> contractInstance.voteForCandidate('Rahul Gandhi', {from: web3.eth.accounts[0]})\n'0x02c054d238038d68b65d55770fabfca592a5cf6590229ab91bbe7cd72da46de9'\n> contractInstance.voteForCandidate('Rahul Gandhi', {from: web3.eth.accounts[0]})\n'0x3da069a09577514f2baaa11bc3015a16edf26aad28dffbcd126bde2e71f2b76f'\n> contractInstance.totalVotesFor.call('Rahul Gandhi').toLocaleString()\n'3'\n```",
"_____no_output_____"
],
[
"## Step 4 - Creating GUI interface\n\nHTML + JS client is used for this purpose \n",
"_____no_output_____"
],
[
"## Module 3 - Dashboard\n\n\n\n\n## Module 4 - Chatbot for Candidate enquiry\n\n\n\n\n\n## Module 5 - Using ML to extract people's opinion\n\n\n\n\n",
"_____no_output_____"
],
[
"## Generic Decentralized app looks like this - \n\n",
"_____no_output_____"
],
[
"## So what's next?\n\n1. Adding OCR to automate parsing\n2. Scaling \n3. Facilitation of Personal AI for specially abled people\n4. Increasing Robustness of Dashboard",
"_____no_output_____"
],
[
"## Conclusion \n\nWe mave not be a billion dollar product but our product will impact billion lives",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecbd27e9a32052ea746755b32af7e74d396c2362 | 14,053 | ipynb | Jupyter Notebook | notebooks/nlp/raw/ex1.ipynb | aurnik/learntools | 4d7ab1d2e2e40c60b9e277bafeaa041eacbae90b | [
"Apache-2.0"
] | 1 | 2020-09-02T22:34:54.000Z | 2020-09-02T22:34:54.000Z | notebooks/nlp/raw/ex1.ipynb | aurnik/learntools | 4d7ab1d2e2e40c60b9e277bafeaa041eacbae90b | [
"Apache-2.0"
] | null | null | null | notebooks/nlp/raw/ex1.ipynb | aurnik/learntools | 4d7ab1d2e2e40c60b9e277bafeaa041eacbae90b | [
"Apache-2.0"
] | null | null | null | 30.483731 | 311 | 0.574753 | [
[
[
"# Basic Text Processing with Spacy\n \nYou're a consultant for [DelFalco's Italian Restaurant](https://defalcosdeli.com/index.html).\nThe owner asked you to identify whether there are any foods on their menu that diners find disappointing. \n\n<img src=\"https://i.imgur.com/8DZunAQ.jpg\" alt=\"Meatball Sub\" width=\"250\"/>\n\nBefore getting started, run the following cell to set up code checking.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\n# Set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.nlp.ex1 import *\nprint('Setup Complete')",
"_____no_output_____"
]
],
[
[
"The business owner suggested you use diner reviews from the Yelp website to determine which dishes people liked and disliked. You pulled the data from Yelp. Before you get to analysis, run the code cell below for a quick look at the data you have to work with.",
"_____no_output_____"
]
],
[
[
"# Load in the data from JSON file\ndata = pd.read_json('../input/nlp-course/restaurant.json')\ndata.head()",
"_____no_output_____"
]
],
[
[
"The owner also gave you this list of menu items and common alternate spellings.",
"_____no_output_____"
]
],
[
[
"menu = [\"Cheese Steak\", \"Cheesesteak\", \"Steak and Cheese\", \"Italian Combo\", \"Tiramisu\", \"Cannoli\",\n \"Chicken Salad\", \"Chicken Spinach Salad\", \"Meatball\", \"Pizza\", \"Pizzas\", \"Spaghetti\",\n \"Bruchetta\", \"Eggplant\", \"Italian Beef\", \"Purista\", \"Pasta\", \"Calzones\", \"Calzone\",\n \"Italian Sausage\", \"Chicken Cutlet\", \"Chicken Parm\", \"Chicken Parmesan\", \"Gnocchi\",\n \"Chicken Pesto\", \"Turkey Sandwich\", \"Turkey Breast\", \"Ziti\", \"Portobello\", \"Reuben\",\n \"Mozzarella Caprese\", \"Corned Beef\", \"Garlic Bread\", \"Pastrami\", \"Roast Beef\",\n \"Tuna Salad\", \"Lasagna\", \"Artichoke Salad\", \"Fettuccini Alfredo\", \"Chicken Parmigiana\",\n \"Grilled Veggie\", \"Grilled Veggies\", \"Grilled Vegetable\", \"Mac and Cheese\", \"Macaroni\", \n \"Prosciutto\", \"Salami\"]",
"_____no_output_____"
]
],
[
[
"# Step 1: Plan Your Analysis",
"_____no_output_____"
],
[
"Given the data from Yelp and the list of menu items, do you have any ideas for how you could find which menu items have disappointed diners?\n\nThink about your answer. Then run the cell below to see one approach.",
"_____no_output_____"
]
],
[
[
"# Check your answer (Run this code cell to receive credit!)\nq_1.solution()",
"_____no_output_____"
]
],
[
[
"# Step 2: Find items in one review\n\nYou'll pursue this plan of calculating average scores of the reviews mentioning each menu item.\n\nAs a first step, you'll write code to extract the foods mentioned in a single review.\n\nSince menu items are multiple tokens long, you'll use `PhraseMatcher` which can match series of tokens.\n\nFill in the `____` values below to get a list of items matching a single menu item.",
"_____no_output_____"
]
],
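[
[
"As a standalone illustration (separate from the exercise below), this is how `PhraseMatcher` behaves on a toy sentence. It uses the same spaCy 2.x style `matcher.add(name, None, *patterns)` call that the exercise expects; newer spaCy versions take a list of patterns instead.\n\n```python\nimport spacy\nfrom spacy.matcher import PhraseMatcher\n\nnlp = spacy.blank('en')                            # blank English tokenizer, as in the exercise\nmatcher = PhraseMatcher(nlp.vocab, attr='LOWER')   # attr='LOWER' makes matching case-insensitive\n\n# Patterns are pre-tokenized docs, one per phrase to look for\npatterns = [nlp(text) for text in ['cheese steak', 'tiramisu']]\nmatcher.add('MENU', None, *patterns)\n\ndoc = nlp('The Cheese Steak was great, but the tiramisu was dry.')\nfor match_id, start, end in matcher(doc):\n    print(doc[start:end])                          # Cheese Steak, then tiramisu\n```",
"_____no_output_____"
]
],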
[
[
"import spacy\nfrom spacy.matcher import PhraseMatcher\n\nindex_of_review_to_test_on = 14\ntext_to_test_on = data.text.iloc[index_of_review_to_test_on]\n\n# Load the SpaCy model\nnlp = spacy.blank('en')\n\n# Create the tokenized version of text_to_test_on\nreview_doc = ____\n\n# Create the PhraseMatcher object. The tokenizer is the first argument. Use attr = 'LOWER' to make consistent capitalization\nmatcher = PhraseMatcher(nlp.vocab, attr='LOWER')\n\n# Create a list of tokens for each item in the menu\nmenu_tokens_list = [____ for item in menu]\n\n# Add the item patterns to the matcher. \n# Look at https://spacy.io/api/phrasematcher#add in the docs for help with this step\n# Then uncomment the lines below \n\n# \n#matcher.add(\"MENU\", # Just a name for the set of rules we're matching to\n# None, # Special actions to take on matched words\n# ____ \n# )\n\n# Find matches in the review_doc\n# matches = ____\n\n# Uncomment to check your work\n#q_2.check()",
"_____no_output_____"
],
[
"# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_2.hint()\n#_COMMENT_IF(PROD)_\nq_2.solution()",
"_____no_output_____"
]
],
[
[
"After implementing the above cell, uncomment the following cell to print the matches.",
"_____no_output_____"
]
],
[
[
"# for match in matches:\n# print(f\"Token number {match[1]}: {review_doc[match[1]:match[2]]}\")",
"_____no_output_____"
],
[
"#%%RM_IF(PROD)%%\nimport spacy\nfrom spacy.matcher import PhraseMatcher\n\n# Load the SpaCy model\ntokenizer = spacy.blank('en')\n\nindex_of_review_to_test_on = 14\ntext_to_test_on = data.text.iloc[index_of_review_to_test_on]\n\n# Create the tokenized review_doc\nreview_doc = tokenizer(text_to_test_on)\n\n# Create the PhraseMatcher object. The tokenizer is the first argument.\n# Reviews don't have consistent capitalization. Perform case-insensitive matching by adding argument attr='LOWER'\nmatcher = PhraseMatcher(tokenizer.vocab, attr='LOWER')\n\n# Create a list of docs for each item in the menu\nmenu_tokens_list = [tokenizer(item) for item in menu]\n\n# Add the item patterns to the matcher\nmatcher.add(\"MENU\", # Just a name for the set of rules we're matching to\n None,\n *menu_tokens_list # Add the patterns to match to. In this case the menu_tokens_list\n )\n\n# Find matches in the review_doc\nmatches = matcher(review_doc)\n\n# Uncomment when checking code is complete\nq_2.assert_check_passed()",
"_____no_output_____"
]
],
[
[
"# Step 3: Matching on the whole dataset\n\nNow run this matcher over the whole dataset and collect ratings for each menu item. Each review has a rating, `review.stars`. For each item that appears in the review text (`review.text`), append the review's rating to a list of ratings for that item. The lists are kept in a dictionary `item_ratings`.\n\nTo get the matched phrases, you can reference the `PhraseMatcher` documentation for the structure of each match object:\n\n>A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end]`. The `match_id` is the ID of the added match pattern.",
"_____no_output_____"
]
],
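[
[
"A quick aside on `defaultdict(list)`, since the loop below leans on it: a missing key is created on first access with an empty list, so ratings can be appended without first checking whether the key exists. A toy example (not part of the exercise):\n\n```python\nfrom collections import defaultdict\n\nratings = defaultdict(list)\nratings['pizza'].append(5)     # no KeyError: 'pizza' starts out as an empty list\nratings['pizza'].append(3)\nratings['cannoli'].append(4)\nprint(dict(ratings))           # {'pizza': [5, 3], 'cannoli': [4]}\n```",
"_____no_output_____"
]
],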
[
[
"from collections import defaultdict\n\n# item_ratings is a dictionary of lists. If a key doesn't exist in item_ratings,\n# the key is added with an empty list as the value.\nitem_ratings = defaultdict(list)\n\nfor idx, review in data.iterrows():\n doc = ____\n # Using the matcher from the previous exercise\n matches = ____\n \n # Create a set of the items found in the review text\n found_items = ____\n \n # Update item_ratings with rating for each item in found_items\n # Transform the item strings to lowercase to make it case insensitive\n ____\n\nq_3.check()",
"_____no_output_____"
],
[
"# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_3.hint()\n#_COMMENT_IF(PROD)_\nq_3.solution()",
"_____no_output_____"
],
[
"#%%RM_IF(PROD)%%\n\nfrom collections import defaultdict\n\nitem_ratings = defaultdict(list)\n\nfor idx, review in data.iterrows():\n doc = tokenizer(review.text)\n matches = matcher(doc)\n\n found_items = set([doc[match[1]:match[2]] for match in matches])\n \n for item in found_items:\n item_ratings[str(item).lower()].append(review.stars)\n \nq_3.assert_check_passed()",
"_____no_output_____"
]
],
[
[
"# Step 4: What's the worst reviewed item?\n\nUsing these item ratings, find the menu item with the worst average rating.",
"_____no_output_____"
]
],
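[
[
"One idiom that helps here (shown with made-up numbers, not the real data): `min` with a `key` function returns the dictionary key whose value is smallest, which is exactly the item with the worst average rating.\n\n```python\n# Made-up averages, purely to illustrate the idiom\ntoy_means = {'pizza': 4.5, 'eggplant': 2.1, 'cannoli': 4.9}\nworst = min(toy_means, key=toy_means.get)\nprint(worst, toy_means[worst])   # eggplant 2.1\n```",
"_____no_output_____"
]
],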
[
[
"# Calculate the mean ratings for each menu item as a dictionary\nmean_ratings = ____\n\n# Find the worst item, and write it as a string in worst_text. This can be multiple lines of code if you want.\nworst_item = ____\n\nq_4.check()",
"_____no_output_____"
],
[
"# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_4.hint()\n#_COMMENT_IF(PROD)_\nq_4.solution()",
"_____no_output_____"
],
[
"# After implementing the above cell, uncomment and run this to print \n# out the worst item, along with its average rating. \n\n#print(worst_item)\n#print(mean_ratings[worst_item])",
"_____no_output_____"
],
[
"#%%RM_IF(PROD)%%\n\nmean_ratings = {item: sum(ratings)/len(ratings) for item, ratings in item_ratings.items()}\nworst_item = sorted(mean_ratings, key=mean_ratings.get)[0]\n \nq_4.assert_check_passed()",
"_____no_output_____"
]
],
[
[
"# Step 5: Are counts important here?\n\nSimilar to the mean ratings, you can calculate the number of reviews for each item.",
"_____no_output_____"
]
],
[
[
"counts = {item: len(ratings) for item, ratings in item_ratings.items()}\n\nitem_counts = sorted(counts, key=counts.get, reverse=True)\nfor item in item_counts:\n print(f\"{item:>25}{counts[item]:>5}\")",
"_____no_output_____"
]
],
[
[
"Here is code to print the 10 best and 10 worst rated items. Look at the results, and decide whether you think it's important to consider the number of reviews when interpreting scores of which items are best and worst.",
"_____no_output_____"
]
],
[
[
"sorted_ratings = sorted(mean_ratings, key=mean_ratings.get)\n\nprint(\"Worst rated menu items:\")\nfor item in sorted_ratings[:10]:\n print(f\"{item:20} Ave rating: {mean_ratings[item]:.2f} \\tcount: {counts[item]}\")\n \nprint(\"\\n\\nBest rated menu items:\")\nfor item in sorted_ratings[-10:]:\n print(f\"{item:20} Ave rating: {mean_ratings[item]:.2f} \\tcount: {counts[item]}\")",
"_____no_output_____"
]
],
[
[
"Run the following line after you've decided your answer.",
"_____no_output_____"
]
],
[
[
"# Check your answer (Run this code cell to receive credit!)\nq_5.solution()",
"_____no_output_____"
]
],
[
[
"# Keep Going\n\nNow that you are ready to combine your NLP skills with your ML skills, **[see how it's done](#$NEXT_NOTEBOOK_URL$)**.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecbd34c6b0ae3a83fe3227fd634c772aaec5fca9 | 12,545 | ipynb | Jupyter Notebook | lessons/ObjectOrientedProgramming/JupyterNotebooks/4.OOP_code_magic_methods/.ipynb_checkpoints/magic_methods-checkpoint.ipynb | dgander000/DSND_Term2 | 15bb5c1bf036ddbd4c39aa45cbee192d9c28c8f2 | [
"MIT"
] | null | null | null | lessons/ObjectOrientedProgramming/JupyterNotebooks/4.OOP_code_magic_methods/.ipynb_checkpoints/magic_methods-checkpoint.ipynb | dgander000/DSND_Term2 | 15bb5c1bf036ddbd4c39aa45cbee192d9c28c8f2 | [
"MIT"
] | null | null | null | lessons/ObjectOrientedProgramming/JupyterNotebooks/4.OOP_code_magic_methods/.ipynb_checkpoints/magic_methods-checkpoint.ipynb | dgander000/DSND_Term2 | 15bb5c1bf036ddbd4c39aa45cbee192d9c28c8f2 | [
"MIT"
] | null | null | null | 36.362319 | 308 | 0.472937 | [
[
[
"# Magic Methods\n\nBelow you'll find the same code from the previous exercise except two more methods have been added: an __add__ method and a __repr__ method. Your task is to fill out the code and get all of the unit tests to pass. You'll find the code cell with the unit tests at the bottom of this Jupyter notebook.\n\nAs in previous exercises, there is an answer key that you can look at if you get stuck. Click on the \"Jupyter\" icon at the top of this notebook, and open the folder 4.OOP_code_magic_methods. You'll find the answer.py file inside the folder.",
"_____no_output_____"
]
],
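[
[
"As a tiny warm-up unrelated to the Gaussian class below: Python calls `__add__` when the `+` operator is used on two objects, and `__repr__` when it needs a readable string for an object, for example in `print`. A minimal illustration:\n\n```python\nclass Point:\n    def __init__(self, x, y):\n        self.x, self.y = x, y\n\n    def __add__(self, other):\n        # called for point_a + point_b; returns a new Point\n        return Point(self.x + other.x, self.y + other.y)\n\n    def __repr__(self):\n        # called by print() and the interactive shell\n        return 'Point({}, {})'.format(self.x, self.y)\n\nprint(Point(1, 2) + Point(3, 4))   # Point(4, 6)\n```",
"_____no_output_____"
]
],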
[
[
"import math\nimport matplotlib.pyplot as plt\n\nclass Gaussian():\n \"\"\" Gaussian distribution class for calculating and \n visualizing a Gaussian distribution.\n \n Attributes:\n mean (float) representing the mean value of the distribution\n stdev (float) representing the standard deviation of the distribution\n data_list (list of floats) a list of floats extracted from the data file\n \n \"\"\"\n def __init__(self, mu = 0, sigma = 1):\n \n self.mean = mu\n self.stdev = sigma\n self.data = []\n \n \n def calculate_mean(self):\n \n \"\"\"Method to calculate the mean of the data set.\n \n Args: \n None\n \n Returns: \n float: mean of the data set\n \n \"\"\" \n if len(self.data) == 0:\n self.mean = 0\n else:\n self.mean = sum(self.data) / len(self.data)\n return self.mean \n\n\n def calculate_stdev(self, sample=True):\n\n \"\"\"Method to calculate the standard deviation of the data set.\n \n Args: \n sample (bool): whether the data represents a sample or population\n \n Returns: \n float: standard deviation of the data set\n \n \"\"\"\n if sample:\n n = len(self.data) - 1\n else:\n n = len(self.data)\n \n mean = self.mean\n sigma = 0\n\n for data_point in self.data:\n sigma += (data_point - mean)**2\n \n self.stdev = math.sqrt(sigma / n)\n \n return self.stdev\n\n\n def read_data_file(self, file_name, sample=True):\n \n \"\"\"Method to read in data from a txt file. The txt file should have\n one number (float) per line. The numbers are stored in the data attribute. \n After reading in the file, the mean and standard deviation are calculated\n \n Args:\n file_name (string): name of a file to read from\n \n Returns:\n None\n \n \"\"\"\n \n # This code opens a data file and appends the data to a list called data_list\n with open(file_name) as file:\n data_list = []\n line = file.readline()\n while line:\n data_list.append(int(line))\n line = file.readline()\n file.close()\n \n self.data = data_list\n self.mean = self.calculate_mean()\n self.stdev = self.calculate_stdev(sample)\n \n \n def plot_histogram(self):\n \"\"\"Method to output a histogram of the instance variable data using \n matplotlib pyplot library.\n \n Args:\n None\n \n Returns:\n None\n \"\"\"\n plt.hist(self.data)\n plt.title('Histogram of data')\n plt.xlabel('data')\n plt.ylabel('count')\n \n \n def pdf(self, x):\n \"\"\"Probability density function calculator for the gaussian distribution.\n \n Args:\n x (float): point for calculating the probability density function\n \n \n Returns:\n float: probability density function output\n \"\"\"\n return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2) \n\n \n def plot_histogram_pdf(self, n_spaces = 50):\n\n \"\"\"Method to plot the normalized histogram of the data and a plot of the \n probability density function along the same range\n \n Args:\n n_spaces (int): number of data points \n \n Returns:\n list: x values for the pdf plot\n list: y values for the pdf plot\n \n \"\"\"\n \n #TODO: Nothing to do for this method. 
Try it out and see how it works.\n \n mu = self.mean\n sigma = self.stdev\n\n min_range = min(self.data)\n max_range = max(self.data)\n \n # calculates the interval between x values\n interval = 1.0 * (max_range - min_range) / n_spaces\n\n x = []\n y = []\n \n # calculate the x values to visualize\n for i in range(n_spaces):\n tmp = min_range + interval*i\n x.append(tmp)\n y.append(self.pdf(tmp))\n\n # make the plots\n fig, axes = plt.subplots(2,sharex=True)\n fig.subplots_adjust(hspace=.5)\n axes[0].hist(self.data, density=True)\n axes[0].set_title('Normed Histogram of Data')\n axes[0].set_ylabel('Density')\n\n axes[1].plot(x, y)\n axes[1].set_title('Normal Distribution for \\n Sample Mean and Sample Standard Deviation')\n axes[0].set_ylabel('Density')\n plt.show()\n\n return x, y\n\n def __add__(self, other):\n \n \"\"\"Magic method to add together two Gaussian distributions\n \n Args:\n other (Gaussian): Gaussian instance\n \n Returns:\n Gaussian: Gaussian distribution\n \n \"\"\"\n \n # TODO: Calculate the results of summing two Gaussian distributions\n # When summing two Gaussian distributions, the mean value is the sum\n # of the means of each Gaussian.\n #\n # When summing two Gaussian distributions, the standard deviation is the\n # square root of the sum of square ie sqrt(stdev_one ^ 2 + stdev_two ^ 2)\n \n # create a new Gaussian object\n result = Gaussian()\n\n result.mean = self.mean + other.mean\n result.stdev = math.sqrt(self.stdev**2 + other.stdev**2)\n \n return result\n\n \n def __repr__(self):\n \n \"\"\"Magic method to output the characteristics of the Gaussian instance\n \n Args:\n None\n \n Returns:\n string: characteristics of the Gaussian\n \n \"\"\" \n return 'mean {}, standard deviation {}'.format(self.mean, self.stdev)",
"_____no_output_____"
],
[
"# Unit tests to check your solution\n\nimport unittest\n\nclass TestGaussianClass(unittest.TestCase):\n def setUp(self):\n self.gaussian = Gaussian(25, 2)\n\n def test_initialization(self): \n self.assertEqual(self.gaussian.mean, 25, 'incorrect mean')\n self.assertEqual(self.gaussian.stdev, 2, 'incorrect standard deviation')\n\n def test_pdf(self):\n self.assertEqual(round(self.gaussian.pdf(25), 5), 0.19947,\\\n 'pdf function does not give expected result') \n\n def test_meancalculation(self):\n self.gaussian.read_data_file('numbers.txt', True)\n self.assertEqual(self.gaussian.calculate_mean(),\\\n sum(self.gaussian.data) / float(len(self.gaussian.data)), 'calculated mean not as expected')\n\n def test_stdevcalculation(self):\n self.gaussian.read_data_file('numbers.txt', True)\n self.assertEqual(round(self.gaussian.stdev, 2), 92.87, 'sample standard deviation incorrect')\n self.gaussian.read_data_file('numbers.txt', False)\n self.assertEqual(round(self.gaussian.stdev, 2), 88.55, 'population standard deviation incorrect')\n\n def test_add(self):\n gaussian_one = Gaussian(25, 3)\n gaussian_two = Gaussian(30, 4)\n gaussian_sum = gaussian_one + gaussian_two\n \n self.assertEqual(gaussian_sum.mean, 55)\n self.assertEqual(gaussian_sum.stdev, 5)\n\n def test_repr(self):\n gaussian_one = Gaussian(25, 3)\n \n self.assertEqual(str(gaussian_one), \"mean 25, standard deviation 3\")\n \ntests = TestGaussianClass()\n\ntests_loaded = unittest.TestLoader().loadTestsFromModule(tests)\n\nunittest.TextTestRunner().run(tests_loaded)",
".....F\n======================================================================\nFAIL: test_stdevcalculation (__main__.TestGaussianClass)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"<ipython-input-12-26fbb96f9061>\", line 26, in test_stdevcalculation\n self.assertEqual(round(self.gaussian.stdev, 2), 88.55, 'population standard deviation incorrect')\nAssertionError: 92.87 != 88.55 : population standard deviation incorrect\n\n----------------------------------------------------------------------\nRan 6 tests in 0.007s\n\nFAILED (failures=1)\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
ecbd35474e7c6e128a64ddab101c54695f0a7a00 | 176,019 | ipynb | Jupyter Notebook | notebooks/testbed.ipynb | udion/mscopy_AE | 28644994d675cd5b0f82a0ace06e377f44bea662 | [
"MIT"
] | null | null | null | notebooks/testbed.ipynb | udion/mscopy_AE | 28644994d675cd5b0f82a0ace06e377f44bea662 | [
"MIT"
] | null | null | null | notebooks/testbed.ipynb | udion/mscopy_AE | 28644994d675cd5b0f82a0ace06e377f44bea662 | [
"MIT"
] | null | null | null | 99.165634 | 97,388 | 0.804731 | [
[
[
"import random\nimport os\nimport shutil\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nimport torch.optim as optim\nimport torchvision.transforms as transforms\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision import transforms, utils\nimport torchvision.datasets as dsets\nimport torchvision\n\nfrom scipy.ndimage.filters import gaussian_filter\nimport PIL\nfrom PIL import Image\n\nrandom.seed(42)",
"_____no_output_____"
],
[
"class resBlock(nn.Module):\n def __init__(self, in_channels=64, out_channels=64, k=3, s=1, p=1):\n super(resBlock, self).__init__()\n\n self.conv1 = nn.Conv2d(in_channels, out_channels, k, stride=s, padding=p)\n self.bn1 = nn.BatchNorm2d(out_channels)\n self.conv2 = nn.Conv2d(out_channels, out_channels, k, stride=s, padding=p)\n self.bn2 = nn.BatchNorm2d(out_channels)\n\n def forward(self, x):\n y = F.relu(self.bn1(self.conv1(x)))\n return self.bn2(self.conv2(y)) + x\n \nclass resTransposeBlock(nn.Module):\n def __init__(self, in_channels=64, out_channels=64, k=3, s=1, p=1):\n super(resTransposeBlock, self).__init__()\n\n self.conv1 = nn.ConvTranspose2d(in_channels, out_channels, k, stride=s, padding=p)\n self.bn1 = nn.BatchNorm2d(out_channels)\n self.conv2 = nn.ConvTranspose2d(out_channels, out_channels, k, stride=s, padding=p)\n self.bn2 = nn.BatchNorm2d(out_channels)\n\n def forward(self, x):\n y = F.relu(self.bn1(self.conv1(x)))\n return self.bn2(self.conv2(y)) + x\n \nclass VGG19_extractor(nn.Module):\n def __init__(self, cnn):\n super(VGG19_extractor, self).__init__()\n self.features1 = nn.Sequential(*list(cnn.features.children())[:3])\n self.features2 = nn.Sequential(*list(cnn.features.children())[:5])\n self.features3 = nn.Sequential(*list(cnn.features.children())[:12])\n def forward(self, x):\n return self.features1(x), self.features2(x), self.features3(x)",
"_____no_output_____"
],
[
"vgg19_exc = VGG19_extractor(torchvision.models.vgg19(pretrained=True))\nvgg19_exc = vgg19_exc.cuda()",
"_____no_output_____"
]
],
[
[
"### Designing Encoder (E)",
"_____no_output_____"
]
],
[
[
"class Encoder(nn.Module):\n def __init__(self, n_res_blocks=5):\n super(Encoder, self).__init__()\n self.n_res_blocks = n_res_blocks\n self.conv1 = nn.Conv2d(3, 64, 3, stride=2, padding=1)\n for i in range(n_res_blocks):\n self.add_module('residual_block_1' + str(i+1), resBlock(in_channels=64, out_channels=64, k=3, s=1, p=1))\n self.conv2 = nn.Conv2d(64, 32, 3, stride=2, padding=1)\n for i in range(n_res_blocks):\n self.add_module('residual_block_2' + str(i+1), resBlock(in_channels=32, out_channels=32, k=3, s=1, p=1))\n self.conv3 = nn.Conv2d(32, 8, 3, stride=1, padding=1)\n for i in range(n_res_blocks):\n self.add_module('residual_block_3' + str(i+1), resBlock(in_channels=8, out_channels=8, k=3, s=1, p=1))\n self.conv4 = nn.Conv2d(8, 1, 3, stride=1, padding=1)\n \n def forward(self, x):\n y = F.relu(self.conv1(x))\n for i in range(self.n_res_blocks):\n y = F.relu(self.__getattr__('residual_block_1'+str(i+1))(y))\n y = F.relu(self.conv2(y))\n for i in range(self.n_res_blocks):\n y = F.relu(self.__getattr__('residual_block_2'+str(i+1))(y))\n y = F.relu(self.conv3(y))\n for i in range(self.n_res_blocks):\n y = F.relu(self.__getattr__('residual_block_3'+str(i+1))(y))\n y = self.conv4(y)\n return y\n\nE1 = Encoder(n_res_blocks=10)",
"_____no_output_____"
]
],
[
[
"### Designing Decoder (D)",
"_____no_output_____"
]
],
[
[
"class Decoder(nn.Module):\n def __init__(self, n_res_blocks=5):\n super(Decoder, self).__init__()\n self.n_res_blocks = n_res_blocks\n self.conv1 = nn.ConvTranspose2d(1, 8, 3, stride=1, padding=1)\n for i in range(n_res_blocks):\n self.add_module('residual_block_1' + str(i+1), resTransposeBlock(in_channels=8, out_channels=8, k=3, s=1, p=1))\n self.conv2 = nn.ConvTranspose2d(8, 32, 3, stride=1, padding=1)\n for i in range(n_res_blocks):\n self.add_module('residual_block_2' + str(i+1), resTransposeBlock(in_channels=32, out_channels=32, k=3, s=1, p=1))\n self.conv3 = nn.ConvTranspose2d(32, 64, 3, stride=2, padding=1)\n for i in range(n_res_blocks):\n self.add_module('residual_block_3' + str(i+1), resTransposeBlock(in_channels=64, out_channels=64, k=3, s=1, p=1))\n self.conv4 = nn.ConvTranspose2d(64, 3, 3, stride=2, padding=1)\n \n def forward(self, x):\n y = F.relu(self.conv1(x))\n for i in range(self.n_res_blocks):\n y = F.relu(self.__getattr__('residual_block_1'+str(i+1))(y))\n y = F.relu(self.conv2(y))\n for i in range(self.n_res_blocks):\n y = F.relu(self.__getattr__('residual_block_2'+str(i+1))(y))\n y = F.relu(self.conv3(y))\n for i in range(self.n_res_blocks):\n y = F.relu(self.__getattr__('residual_block_3'+str(i+1))(y))\n y = self.conv4(y)\n return y\n\nD1 = Decoder(n_res_blocks=10)",
"_____no_output_____"
]
],
[
[
"### Putting it in box, VAE",
"_____no_output_____"
]
],
[
[
"class VAE(nn.Module):\n def __init__(self, encoder, decoder):\n super(VAE, self).__init__()\n self.E = encoder\n self.D = decoder\n self._enc_mu = nn.Linear(11*11, 128)\n self._enc_log_sigma = nn.Linear(11*11, 128)\n self._din_layer = nn.Linear(128, 11*11)\n \n def _sample_latent(self, h_enc):\n '''\n Return the latent normal sample z ~ N(mu, sigma^2)\n '''\n mu = self._enc_mu(h_enc)\n# print('mu size : ', mu.size())\n log_sigma = self._enc_log_sigma(h_enc)\n# print('log_sigma size : ', log_sigma.size())\n sigma = torch.exp(log_sigma)\n# print('sigma size : ', sigma.size())\n std_z = torch.from_numpy(np.random.normal(0, 1, size=sigma.size())).float()\n\n self.z_mean = mu\n self.z_sigma = sigma\n \n return mu + sigma * Variable(std_z, requires_grad=False).cuda() # Reparameterization trick\n\n def forward(self, x):\n n_bacth = x.size()[0]\n h_enc = self.E(x)\n indim1_D, indim2_D = h_enc.size()[2], h_enc.size()[3] \n# print('h_enc size : ', h_enc.size())\n h_enc = h_enc.view(n_bacth, 1, -1)\n# print('h_enc size : ', h_enc.size())\n z = self._sample_latent(h_enc)\n# print('z_size : ', z.size())\n z = self._din_layer(z)\n# print('z_size : ', z.size())\n z = z.view(n_bacth, 1, indim1_D, indim2_D)\n# print('z_size : ', z.size())\n# print('Dz_size : ', self.D(z).size())\n return self.D(z)",
"_____no_output_____"
],
[
"V = VAE(E1, D1)\nV = V.cuda()",
"_____no_output_____"
],
[
"class AE(nn.Module):\n def __init__(self, encoder, decoder):\n super(AE, self).__init__()\n self.E = encoder\n self.D = decoder\n def forward(self, x):\n h_enc = self.E(x)\n# print('encoder out checking for nan ', np.isnan(h_enc.data.cpu()).any())\n y = self.D(h_enc)\n# print('decoder out checking for nan ', np.isnan(y.data.cpu()).any())\n return y",
"_____no_output_____"
],
[
"A = AE(E1, D1)\nA = A.cuda()",
"_____no_output_____"
]
],
[
[
"### Dataloading and stuff",
"_____no_output_____"
]
],
[
[
"mytransform1 = transforms.Compose(\n [transforms.RandomCrop((41,41)),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\ndef mynorm2(x):\n m1 = torch.min(x)\n m2 = torch.max(x)\n return (x-m1)/(m2-m1)\n\nmytransform2 = transforms.Compose(\n [transforms.RandomCrop((246,246)),\n transforms.Lambda( lambda x : Image.fromarray(gaussian_filter(x, sigma=(10,10,0)) )),\n transforms.Resize((41,41)),\n transforms.ToTensor(),\n transforms.Lambda( lambda x : x)])\n\ntrainset = dsets.ImageFolder(root='../sample_dataset/train/',transform=mytransform2)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True, num_workers=2)\n\ntestset = dsets.ImageFolder(root='../sample_dataset/test/',transform=mytransform2)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True, num_workers=2)\n# functions to show an image\ndef imshow(img):\n #img = img / 2 + 0.5 \n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n \ndef imshow2(img):\n m1 = torch.min(img)\n m2 = torch.max(img)\n img = (img-m1)/(m2-m1)\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n\n# get some random training images\ndataiter = iter(trainloader)\nimages, labels = next(dataiter) #all the images under the same 'unlabeled' folder\n# print(labels)\n# show images\nimshow(torchvision.utils.make_grid(images))",
"_____no_output_____"
]
],
[
[
"### training thingy",
"_____no_output_____"
]
],
[
[
"def latent_loss(z_mean, z_stddev):\n mean_sq = z_mean * z_mean\n stddev_sq = z_stddev * z_stddev\n return 0.5 * torch.mean(mean_sq + stddev_sq - torch.log(stddev_sq) - 1)",
"_____no_output_____"
],
[
"def save_model(model, model_name):\n try:\n os.makedirs('../saved_models')\n except OSError:\n pass\n torch.save(model.state_dict(), '../saved_models/'+model_name)\n print('model saved at '+'../saved_models/'+model_name)",
"_____no_output_____"
],
[
"# dataloader = iter(trainloader)\ntestiter = iter(testloader)\ntestX, _ = next(testiter)\ndef eval_model(model):\n X = testX\n print('input looks like ...')\n plt.figure()\n imshow(torchvision.utils.make_grid(X))\n \n X = Variable(X).cuda()\n Y = model(X)\n print('output looks like ...')\n plt.figure()\n imshow2(torchvision.utils.make_grid(Y.data.cpu()))",
"Process Process-3:\nProcess Process-4:\nTraceback (most recent call last):\nTraceback (most recent call last):\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/process.py\", line 252, in _bootstrap\n self.run()\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/process.py\", line 252, in _bootstrap\n self.run()\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/process.py\", line 93, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/process.py\", line 93, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/site-packages/torch/utils/data/dataloader.py\", line 36, in _worker_loop\n r = index_queue.get()\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/site-packages/torch/utils/data/dataloader.py\", line 36, in _worker_loop\n r = index_queue.get()\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/queues.py\", line 335, in get\n res = self._reader.recv_bytes()\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/queues.py\", line 334, in get\n with self._rlock:\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/connection.py\", line 216, in recv_bytes\n buf = self._recv_bytes(maxlength)\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n return self._semlock.__enter__()\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/connection.py\", line 407, in _recv_bytes\n buf = self._recv(4)\n File \"/home/udion/anaconda3/envs/DeepCV3.5/lib/python3.5/multiprocessing/connection.py\", line 379, in _recv\n chunk = read(handle, remaining)\nKeyboardInterrupt\nKeyboardInterrupt\n"
],
[
"def train(model, rec_interval=2, disp_interval=20, eval_interval=1):\n nepoch = 500\n Criterion1 = nn.MSELoss()\n Criterion2 = nn.L1Loss()\n optimizer = optim.Adam(model.parameters(), lr=1e-5)\n loss_track = []\n for eph in range(nepoch):\n dataloader = iter(trainloader)\n print('starting epoch {} ...'.format(eph))\n for i, (X, _) in enumerate(dataloader):\n X = Variable(X).cuda()\n optimizer.zero_grad()\n reconX = model(X)\n# KLTerm = latent_loss(model.z_mean, model.z_sigma)\n reconTerm = Criterion1(reconX, X) + Criterion2(reconX, X)\n# loss = reconTerm + 100*KLTerm\n loss =reconTerm\n loss.backward()\n optimizer.step()\n \n if i%rec_interval == 0:\n loss_track.append(loss.data[0])\n if i%disp_interval == 0:\n# print('epoch : {}, iter : {}, KLterm : {}, reconTerm : {}, totalLoss : {}'.format(eph, i, KLTerm.data[0], reconTerm.data[0], loss.data[0]))\n print('epoch : {}, iter : {}, reconTerm : {}, totalLoss : {}'.format(eph, i, reconTerm.data[0], loss.data[0]))\n# if eph%eval_interval == 0:\n# print('after epoch {} ...'.format(eph))\n# eval_model(model)\n \n return loss_track",
"_____no_output_____"
],
[
"def train_ae(model, rec_interval=2, disp_interval=20, eval_interval=1):\n nepoch = 3\n Criterion2 = nn.MSELoss()\n Criterion1 = nn.L1Loss()\n optimizer = optim.Adam(model.parameters(), lr=1e-5)\n loss_track = []\n for eph in range(nepoch):\n dataloader = iter(trainloader)\n print('starting epoch {} ...'.format(eph))\n for i, (X, _) in enumerate(dataloader):\n X = Variable(X).cuda()\n optimizer.zero_grad()\n reconX = model(X)\n l2 = Criterion2(reconX, X)\n# l1 = Criterion1(reconX, X)\n \n t1, t2, t3 = vgg19_exc(X)\n rt1, rt2, rt3 = vgg19_exc(reconX)\n \n# t1 = Variable(t1.data)\n# rt1 = Variable(rt1.data)\n# t2 = Variable(t2.data)\n# rt2 = Variable(rt2.data)\n t3 = Variable(t3.data)\n rt3 = Variable(rt3.data)\n \n# vl1 = Criterion2(rt1, t1)\n# vl2 = Criterion2(rt2, t2)\n vl3 = Criterion2(rt3, t3)\n \n reconTerm = 10*l2 + vl3\n loss = reconTerm\n loss.backward()\n optimizer.step()\n \n if i%rec_interval == 0:\n loss_track.append(loss.data[0])\n if i%disp_interval == 0:\n print('epoch: {}, iter: {}, L2term: {}, vl3: {}, totalLoss: {}'.format(\n eph, i, l2.data[0], vl3.data[0], loss.data[0]))\n return loss_track",
"_____no_output_____"
]
],
[
[
"#### Notes on training\nIt seems like the combination of L1 and L2 loss is not helping and also the features from deeper layers from VGG19 are more effective than the features on the shallow leve",
"_____no_output_____"
]
],
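[
[
"A single-batch sketch of what that observation means for the loss actually used above: keep the pixel MSE plus only the deepest of the three VGG19 feature maps, with a hand-picked weight. Here `A`, `vgg19_exc` and `trainloader` are the objects defined earlier in this notebook, and the factor of 10 is simply the weight used in `train_ae`, not a tuned constant.\n\n```python\n# One batch through the AE with the reconstruction + deep-feature (perceptual) loss.\nmse = nn.MSELoss()\n\nX, _ = next(iter(trainloader))\nX = Variable(X).cuda()\nreconX = A(X)\n\n_, _, feat = vgg19_exc(X)            # keep only the deepest feature map\n_, _, recon_feat = vgg19_exc(reconX)\nfeat = Variable(feat.data)           # treat the target features as constants\n\nloss = 10 * mse(reconX, X) + mse(recon_feat, feat)\nprint(loss.data[0])\n```",
"_____no_output_____"
]
],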
[
[
"loss_track = train_ae(A, disp_interval=10)",
"starting epoch 0 ...\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 0, L2term: 0.8983635306358337, vl3: 7.830014228820801, totalLoss: 16.813648223876953\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 10, L2term: 0.9259738326072693, vl3: 7.311758041381836, totalLoss: 16.571495056152344\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 20, L2term: 0.7158463001251221, vl3: 7.242307662963867, totalLoss: 14.40077018737793\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 30, L2term: 0.7521705031394958, vl3: 6.877910137176514, totalLoss: 14.399615287780762\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 40, L2term: 0.6198620200157166, vl3: 6.230955600738525, totalLoss: 12.42957592010498\nencoder out checking for nan False\ndecoder out checking for nan 
False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 50, L2term: 0.5936570763587952, vl3: 5.838574409484863, totalLoss: 11.775144577026367\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 60, L2term: 0.5245713591575623, vl3: 5.154712677001953, totalLoss: 10.400426864624023\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 70, L2term: 0.4463861286640167, vl3: 4.8986310958862305, totalLoss: 9.362492561340332\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 80, L2term: 0.4188934564590454, vl3: 4.7084641456604, totalLoss: 8.897397994995117\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan 
False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 90, L2term: 0.36077985167503357, vl3: 4.6891350746154785, totalLoss: 8.296934127807617\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 100, L2term: 0.3563389480113983, vl3: 4.880843639373779, totalLoss: 8.444232940673828\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 110, L2term: 0.3144689202308655, vl3: 5.016197681427002, totalLoss: 8.160886764526367\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 120, L2term: 0.2541641294956207, vl3: 5.132085800170898, totalLoss: 7.673727035522461\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan 
False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 130, L2term: 0.2907642722129822, vl3: 5.459050178527832, totalLoss: 8.366693496704102\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 140, L2term: 0.24108204245567322, vl3: 5.7163987159729, totalLoss: 8.127219200134277\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 150, L2term: 0.21142292022705078, vl3: 5.944607734680176, totalLoss: 8.058836936950684\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 0, iter: 160, L2term: 0.2169700264930725, vl3: 6.195516586303711, totalLoss: 8.365217208862305\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nstarting epoch 1 ...\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 1, iter: 0, L2term: 0.18315042555332184, vl3: 6.136467456817627, totalLoss: 7.9679718017578125\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking 
for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 1, iter: 10, L2term: 0.14397552609443665, vl3: 6.515658378601074, totalLoss: 7.955413818359375\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 1, iter: 20, L2term: 0.1662718504667282, vl3: 6.306187152862549, totalLoss: 7.968905448913574\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 1, iter: 30, L2term: 0.1409144401550293, vl3: 6.448930263519287, totalLoss: 7.85807466506958\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nepoch: 1, iter: 40, L2term: 0.11718666553497314, vl3: 6.398552894592285, totalLoss: 7.5704193115234375\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan False\ndecoder out checking for nan False\nencoder out checking for nan 
False\ndecoder out checking for nan False\nepoch: 1, iter: 50, L2term: 0.12458859384059906, vl3: 6.559013843536377, totalLoss: 7.8048996925354\nepoch: 1, iter: 60, L2term: 0.11741235852241516, vl3: 6.344666957855225, totalLoss: 7.518790245056152\nepoch: 1, iter: 70, L2term: 0.10559198260307312, vl3: 6.312812328338623, totalLoss: 7.368732452392578\nepoch: 1, iter: 80, L2term: 0.09481391310691833, vl3: 6.268637657165527, totalLoss: 7.2167768478393555\nepoch: 1, iter: 90, L2term: 0.10133373737335205, vl3: 6.227005481719971, totalLoss: 7.24034309387207\nepoch: 1, iter: 100, L2term: 0.09742426127195358, vl3: 6.161762237548828, totalLoss: 7.13600492477417\nepoch: 1, iter: 110, L2term: 0.09649568796157837, vl3: 5.973419189453125, totalLoss: 6.938375949859619\nepoch: 1, iter: 120, L2term: 0.09087282419204712, vl3: 5.819881439208984, totalLoss: 6.728609561920166\nepoch: 1, iter: 130, L2term: 0.08494996279478073, vl3: 5.688658237457275, totalLoss: 6.538157939910889\nepoch: 1, iter: 140, L2term: 0.06919872760772705, vl3: 5.663378715515137, totalLoss: 6.355365753173828\nepoch: 1, iter: 150, L2term: 0.06761890649795532, vl3: 5.355268955230713, totalLoss: 6.031457901000977\nepoch: 1, iter: 160, L2term: 0.05640771612524986, vl3: 5.065552234649658, totalLoss: 5.629629135131836\nstarting epoch 2 ...\nepoch: 2, iter: 0, L2term: 0.06574390083551407, vl3: 5.063083648681641, totalLoss: 5.720522880554199\nepoch: 2, iter: 10, L2term: 0.05296388268470764, vl3: 4.96236515045166, totalLoss: 5.492003917694092\nepoch: 2, iter: 20, L2term: 0.05274214595556259, vl3: 4.836935997009277, totalLoss: 5.3643574714660645\nepoch: 2, iter: 30, L2term: 0.05519568547606468, vl3: 4.40798807144165, totalLoss: 4.959944725036621\nepoch: 2, iter: 40, L2term: 0.05883631110191345, vl3: 4.296993255615234, totalLoss: 4.885356426239014\nepoch: 2, iter: 50, L2term: 0.045519109815359116, vl3: 4.143337726593018, totalLoss: 4.598528861999512\nepoch: 2, iter: 60, L2term: 0.051161568611860275, vl3: 4.003594875335693, totalLoss: 4.5152106285095215\nepoch: 2, iter: 70, L2term: 0.0403841957449913, vl3: 4.016132831573486, totalLoss: 4.4199748039245605\nepoch: 2, iter: 80, L2term: 0.042727742344141006, vl3: 3.7730588912963867, totalLoss: 4.200336456298828\nepoch: 2, iter: 90, L2term: 0.03737306967377663, vl3: 3.6081671714782715, totalLoss: 3.9818978309631348\nepoch: 2, iter: 100, L2term: 0.037855032831430435, vl3: 3.4353952407836914, totalLoss: 3.8139455318450928\nepoch: 2, iter: 110, L2term: 0.0321800522506237, vl3: 3.326998233795166, totalLoss: 3.648798704147339\nepoch: 2, iter: 120, L2term: 0.030673762783408165, vl3: 3.1562187671661377, totalLoss: 3.462956428527832\nepoch: 2, iter: 130, L2term: 0.030933598056435585, vl3: 3.078704833984375, totalLoss: 3.388040781021118\nepoch: 2, iter: 140, L2term: 0.031067436560988426, vl3: 2.889622211456299, totalLoss: 3.200296640396118\nepoch: 2, iter: 150, L2term: 0.027850667014718056, vl3: 2.7288408279418945, totalLoss: 3.007347583770752\nepoch: 2, iter: 160, L2term: 0.02619912661612034, vl3: 2.590054512023926, totalLoss: 2.852045774459839\n"
],
[
"plt.plot(loss_track)",
"_____no_output_____"
],
[
"eval_model(A)",
"_____no_output_____"
],
[
"save_model(A, 'AE_VGGFeatX1.pth')",
"_____no_output_____"
]
],
[
[
"#### experiments with the sigmoid and BCE",
"_____no_output_____"
]
],
[
[
"class AE1(nn.Module):\n def __init__(self, encoder, decoder):\n super(AE1, self).__init__()\n self.E = encoder\n self.D = decoder\n def forward(self, x):\n h_enc = self.E(x)\n return F.relu(self.D(h_enc))",
"_____no_output_____"
],
[
"A1 = AE1(E1, D1)\nA1 = A1.cuda()",
"_____no_output_____"
],
[
"def train_ae_logsmax(model, rec_interval=2, disp_interval=20, eval_interval=1):\n nepoch = 250\n Criterion2 = nn.MSELoss()\n Criterion1 = nn.L1Loss()\n optimizer = optim.Adam(model.parameters(), lr=2*1e-5)\n loss_track = []\n for eph in range(nepoch):\n dataloader = iter(trainloader)\n print('starting epoch {} ...'.format(eph))\n for i, (X, _) in enumerate(dataloader):\n X = Variable(X).cuda()\n optimizer.zero_grad()\n reconX = model(X)\n l2 = Criterion2(reconX, X)\n# l1 = Criterion1(reconX, X)\n \n t1, t2, t3 = vgg19_exc(X)\n rt1, rt2, rt3 = vgg19_exc(reconX)\n \n# t1 = Variable(t1.data)\n# rt1 = Variable(rt1.data)\n# t2 = Variable(t2.data)\n# rt2 = Variable(rt2.data)\n t3 = Variable(t3.data)\n rt3 = Variable(rt3.data)\n \n# vl1 = Criterion2(rt1, t1)\n# vl2 = Criterion2(rt2, t2)\n vl3 = Criterion2(rt3, t3)\n \n reconTerm = 10*l2 + vl3\n loss = reconTerm\n loss.backward()\n optimizer.step()\n \n if i%rec_interval == 0:\n loss_track.append(loss.data[0])\n if i%disp_interval == 0:\n print('epoch: {}, iter: {}, L2term: {}, vl3: {}, totalLoss: {}'.format(\n eph, i, l2.data[0], vl3.data[0], loss.data[0]))\n return loss_track",
"_____no_output_____"
],
[
"loss_track1 = train_ae_logsmax(A1, disp_interval=100)",
"_____no_output_____"
],
[
"plt.plot(loss_track1)",
"_____no_output_____"
],
[
"eval_model(A1)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbd453fd23a04ffe3699564e2c238637858d0dc | 7,267 | ipynb | Jupyter Notebook | notebooks/01-Audio_download_Rename.ipynb | Kabongosalomon/LiSTra | 6e552cce32317ce3415df5b15fb2839805fe0cee | [
"MIT"
] | 5 | 2021-04-15T03:35:56.000Z | 2022-01-24T16:28:19.000Z | notebooks/01-Audio_download_Rename.ipynb | Kabongosalomon/LiSTra | 6e552cce32317ce3415df5b15fb2839805fe0cee | [
"MIT"
] | null | null | null | notebooks/01-Audio_download_Rename.ipynb | Kabongosalomon/LiSTra | 6e552cce32317ce3415df5b15fb2839805fe0cee | [
"MIT"
] | null | null | null | 42.497076 | 1,777 | 0.60589 | [
[
[
"## Download and Unzip Speech",
"_____no_output_____"
],
[
"This data is more specific to the data used in the paper : ",
"_____no_output_____"
]
],
[
[
"!wget https://fcbhabdm.s3.amazonaws.com/mp3audiobibles2/ENGESVN1DA/ENGESVN1DA.zip\n!wget https://fcbhabdm.s3.amazonaws.com/mp3audiobibles2/ENGESVO1DA/ENGESVO1DA.zip\n \n!unzip ENGESVN1DA.zip\n!rm ENGESVN1DA.zip\n\n!unzip ENGESVO1DA.zip\n!rm ENGESVO1DA.zip\n\n!mkdir ../dataset/english/wav\n!mv English_English_Standard_Version____OT_Non-drama/* ../dataset/english/wav/\n!mv English_English_Standard_Version____NT_Non-drama/* ../dataset/english/wav/\n\n!rm -r English_English_Standard_Version____OT_Non-drama/\n!rm -r English_English_Standard_Version____NT_Non-drama/",
"_____no_output_____"
]
],
[
[
"# Rename name on wav_speech",
"_____no_output_____"
]
],
[
[
"# Rename the downloaded Audio to a standard B<number>-<verse>\n# There is an issue with the formating of the book of the audio of the book of psalms in the OT and this can be fixed manually",
"_____no_output_____"
],
[
"# Pythono3 code to rename multiple \n# files in a directory or folder \n\n# importing os module \nimport os \nimport ipdb\n\n# Function to rename multiple files \ndef folder(repository, splitter=\"___\"):\n\n for count, filename in enumerate(os.listdir(repository)): \n \n # Zfill to write al number as 001, 002 for standardization reason\n book, verse = filename.split(splitter)[0], filename.split(splitter)[1].split(\"_\")[0].zfill(3)\n\n ext = '.'+filename.split('.')[-1]\n\n \n src = repository+\"/\"+filename \n dst = repository+\"/\"+book+\"_\"+verse+ext\n\n\n ## rename() function will \n ## rename all the files \n os.rename(src, dst) \n \n \n\n# Driver Code \n# Run this bloc once \nif __name__ == '__main__': \n # Calling main() function \n # main() \n folder(repository=\"../dataset/english/wav/\", splitter=\"___\")\n folder(repository=\"../dataset/english/wav/\", splitter=\"___\")",
"_____no_output_____"
],
[
"!mv ../dataset/english/wav/*_000.mp3 ../dataset/english/rename/ ",
"_____no_output_____"
],
[
"# This must be run after insecting the renamed file and copy all the missed renamed file into the temp repository (A19)\nfolder(repository=\"../dataset/english/rename/\", splitter=\"__\")",
"_____no_output_____"
],
[
"!mv ../dataset/english/rename/* ../dataset/english/wav/ ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecbd5c80aea00960e7c7a118cb56a91e0924ad33 | 18,070 | ipynb | Jupyter Notebook | 04a_tfrecord.ipynb | leofire19/scl-2020-product-detection | 67f79a5f0e14e79b75ad1782a61dd8bba7a170fd | [
"Unlicense"
] | 9 | 2020-07-05T16:46:59.000Z | 2021-04-06T21:22:12.000Z | 04a_tfrecord.ipynb | liuhh02/scl-2020-product-detection | 8f65e408241e24949f9d20402bf9018aac615f5c | [
"Unlicense"
] | null | null | null | 04a_tfrecord.ipynb | liuhh02/scl-2020-product-detection | 8f65e408241e24949f9d20402bf9018aac615f5c | [
"Unlicense"
] | 8 | 2020-07-05T05:00:52.000Z | 2021-12-19T14:54:46.000Z | 36.14 | 162 | 0.497012 | [
[
[
"# Library",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf \nimport numpy as np\nimport os\nimport glob\nimport pandas as pd\nimport PIL\nimport gc\nfrom PIL import Image",
"_____no_output_____"
],
[
"print(f'Numpy version : {np.__version__}')\nprint(f'Pandas version : {pd.__version__}')\nprint(f'Tensorflow version : {tf.__version__}')\nprint(f'Pillow version : {PIL.__version__}')",
"Numpy version : 1.18.1\nPandas version : 1.0.3\nTensorflow version : 2.2.0\nPillow version : 5.4.1\n"
]
],
[
[
"# Dataset",
"_____no_output_____"
]
],
[
[
"!ls /kaggle/input",
"csv-with-cleaned-ocr-text shopee-product-detection-student\r\n"
],
[
"df_train = pd.read_parquet('/kaggle/input/csv-with-cleaned-ocr-text/train.parquet', engine='pyarrow').sort_values(\"filename\").reset_index(drop=True)",
"_____no_output_____"
],
[
"df_test = pd.read_parquet('/kaggle/input/csv-with-cleaned-ocr-text/test.parquet', engine='pyarrow')\ndf_test",
"_____no_output_____"
]
],
[
[
"# Create TFRecord",
"_____no_output_____"
]
],
[
[
"def _bytes_feature(value):\n \"\"\"Returns a bytes_list from a string / byte.\"\"\"\n if isinstance(value, type(tf.constant(0))):\n value = value.numpy() # BytesList won't unpack a string from an EagerTensor.\n return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))\n\ndef _float_feature(value):\n \"\"\"Returns a float_list from a float / double.\"\"\"\n return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))\n\ndef _list_float_feature(value):\n \"\"\"Returns a float_list from a float / double.\"\"\"\n return tf.train.Feature(float_list=tf.train.FloatList(value=value))\n\ndef _int64_feature(value):\n \"\"\"Returns an int64_list from a bool / enum / int / uint.\"\"\"\n return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))\n\ndef _list_int64_feature(value):\n \"\"\"Returns an int64_list from a bool / enum / int / uint.\"\"\"\n return tf.train.Feature(int64_list=tf.train.Int64List(value=value))",
"_____no_output_____"
],
[
"RESIZE_WIDTH = 512\nRESIZE_HEIGHT = 512\n\nTFRECORD_MAX_SIZE = 80 * 1024 * 1024 # 80 MB\n\nTOTAL_IMAGES = len(df_train.index)\n# TOTAL_IMAGES = len(df_test.index)\n\n# part 1 : 0:TOTAL_IMAGES // 2 (train) [CURRENT]\n# part 2 : TOTAL_IMAGES // 2:TOTAL_IMAGES (train)\n# part 3 : 0:TOTAL_IMAGES (test)\nSTART_INDEX = 0\nEND_INDEX = TOTAL_IMAGES // 2\n\nBATCH_IMAGE = 1024",
"_____no_output_____"
],
[
"def create_tfrecord(index, df):\n index = str(index).zfill(3)\n curr_file = f\"train-{index}.tfrecords\"\n writer = tf.io.TFRecordWriter(curr_file)\n for index, row in df.iterrows():\n category_str = str(row['category']).zfill(2)\n\n image = f'/kaggle/input/shopee-product-detection-student/train/train/train/{category_str}/{row[\"filename\"]}'\n img = open(image, 'rb')\n img_read = img.read()\n image_decoded = tf.image.decode_jpeg(img_read, channels=3)\n resized_img = tf.image.resize_with_pad(image_decoded,target_width=RESIZE_WIDTH,target_height=RESIZE_HEIGHT,method=tf.image.ResizeMethod.BILINEAR)\n resized_img = tf.cast(resized_img,tf.uint8)\n resized_img = tf.io.encode_jpeg(resized_img)\n\n feature = {\n 'filename': _bytes_feature(tf.compat.as_bytes(row['filename'])),\n 'label': _int64_feature(row['category']),\n 'words': _list_float_feature(row['words']),\n 'image': _bytes_feature(resized_img),\n 'height' : _int64_feature(RESIZE_HEIGHT),\n 'width' : _int64_feature(RESIZE_WIDTH)\n }\n example = tf.train.Example(features=tf.train.Features(feature=feature))\n writer.write(example.SerializeToString())\n writer.close()",
"_____no_output_____"
],
[
"for i in range(START_INDEX, END_INDEX, BATCH_IMAGE):\n print(f'Create TFRecords #{i // BATCH_IMAGE}')\n if i + BATCH_IMAGE < END_INDEX:\n create_tfrecord(i // BATCH_IMAGE, df_train.loc[i:i+BATCH_IMAGE])\n else:\n create_tfrecord(i // BATCH_IMAGE, df_train.loc[i:END_INDEX])\n gc.collect()",
"Create TFRecords #0\nCreate TFRecords #1\nCreate TFRecords #2\nCreate TFRecords #3\nCreate TFRecords #4\nCreate TFRecords #5\nCreate TFRecords #6\nCreate TFRecords #7\nCreate TFRecords #8\nCreate TFRecords #9\nCreate TFRecords #10\nCreate TFRecords #11\nCreate TFRecords #12\nCreate TFRecords #13\nCreate TFRecords #14\nCreate TFRecords #15\nCreate TFRecords #16\nCreate TFRecords #17\nCreate TFRecords #18\nCreate TFRecords #19\nCreate TFRecords #20\nCreate TFRecords #21\nCreate TFRecords #22\nCreate TFRecords #23\nCreate TFRecords #24\nCreate TFRecords #25\nCreate TFRecords #26\nCreate TFRecords #27\nCreate TFRecords #28\nCreate TFRecords #29\nCreate TFRecords #30\nCreate TFRecords #31\nCreate TFRecords #32\nCreate TFRecords #33\nCreate TFRecords #34\nCreate TFRecords #35\nCreate TFRecords #36\nCreate TFRecords #37\nCreate TFRecords #38\nCreate TFRecords #39\nCreate TFRecords #40\nCreate TFRecords #41\nCreate TFRecords #42\nCreate TFRecords #43\nCreate TFRecords #44\nCreate TFRecords #45\nCreate TFRecords #46\nCreate TFRecords #47\nCreate TFRecords #48\nCreate TFRecords #49\nCreate TFRecords #50\nCreate TFRecords #51\n"
],
[
"!ls -lah",
"total 4.8G\r\ndrwxr-xr-x 2 root root 4.0K Jul 1 09:25 .\r\ndrwxr-xr-x 6 root root 4.0K Jul 1 09:10 ..\r\n---------- 1 root root 5.2K Jul 1 09:10 __notebook__.ipynb\r\n-rw-r--r-- 1 root root 95M Jul 1 09:11 train-000.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:11 train-001.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:11 train-002.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:12 train-003.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:12 train-004.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:12 train-005.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:12 train-006.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:13 train-007.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:13 train-008.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:13 train-009.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:14 train-010.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:14 train-011.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:14 train-012.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:14 train-013.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:15 train-014.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:15 train-015.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:15 train-016.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:16 train-017.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:16 train-018.tfrecords\r\n-rw-r--r-- 1 root root 96M Jul 1 09:16 train-019.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:16 train-020.tfrecords\r\n-rw-r--r-- 1 root root 96M Jul 1 09:17 train-021.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:17 train-022.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:17 train-023.tfrecords\r\n-rw-r--r-- 1 root root 93M Jul 1 09:18 train-024.tfrecords\r\n-rw-r--r-- 1 root root 93M Jul 1 09:18 train-025.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:18 train-026.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:19 train-027.tfrecords\r\n-rw-r--r-- 1 root root 93M Jul 1 09:19 train-028.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:19 train-029.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:19 train-030.tfrecords\r\n-rw-r--r-- 1 root root 93M Jul 1 09:20 train-031.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:20 train-032.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:20 train-033.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:20 train-034.tfrecords\r\n-rw-r--r-- 1 root root 96M Jul 1 09:21 train-035.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:21 train-036.tfrecords\r\n-rw-r--r-- 1 root root 93M Jul 1 09:21 train-037.tfrecords\r\n-rw-r--r-- 1 root root 91M Jul 1 09:21 train-038.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:22 train-039.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:22 train-040.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:22 train-041.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:23 train-042.tfrecords\r\n-rw-r--r-- 1 root root 96M Jul 1 09:23 train-043.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:23 train-044.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:23 train-045.tfrecords\r\n-rw-r--r-- 1 root root 94M Jul 1 09:24 train-046.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:24 train-047.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:24 train-048.tfrecords\r\n-rw-r--r-- 1 root root 93M Jul 1 09:24 train-049.tfrecords\r\n-rw-r--r-- 1 root root 95M Jul 1 09:25 train-050.tfrecords\r\n-rw-r--r-- 1 root root 44M Jul 1 09:25 train-051.tfrecords\r\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecbd66a6d14a285226f8003466306ffb30e4a41b | 10,984 | ipynb | Jupyter Notebook | JWH_assignment_DS_223.ipynb | johnwesleyharding/DS-Unit-2-Kaggle-Challenge | 652b231afb4829bafdefc86d402bae717b35943e | [
"MIT"
] | 1 | 2019-11-05T22:23:36.000Z | 2019-11-05T22:23:36.000Z | JWH_assignment_DS_223.ipynb | johnwesleyharding/DS-Unit-2-Kaggle-Challenge | 652b231afb4829bafdefc86d402bae717b35943e | [
"MIT"
] | null | null | null | JWH_assignment_DS_223.ipynb | johnwesleyharding/DS-Unit-2-Kaggle-Challenge | 652b231afb4829bafdefc86d402bae717b35943e | [
"MIT"
] | null | null | null | 41.764259 | 448 | 0.642662 | [
[
[
"Lambda School Data Science\n\n*Unit 2, Sprint 2, Module 3*\n\n---",
"_____no_output_____"
],
[
"# Cross-Validation\n\n\n## Assignment\n- [ ] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.\n- [ ] Continue to participate in our Kaggle challenge. \n- [ ] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.\n- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)\n- [ ] Commit your notebook to your fork of the GitHub repo.\n\n\nYou won't be able to just copy from the lesson notebook to this assignment.\n\n- Because the lesson was ***regression***, but the assignment is ***classification.***\n- Because the lesson used [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html), which doesn't work as-is for _multi-class_ classification.\n\nSo you will have to adapt the example, which is good real-world practice.\n\n1. Use a model for classification, such as [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)\n2. Use hyperparameters that match the classifier, such as `randomforestclassifier__ ...`\n3. Use a metric for classification, such as [`scoring='accuracy'`](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values)\n4. If you’re doing a multi-class classification problem — such as whether a waterpump is functional, functional needs repair, or nonfunctional — then use a categorical encoding that works for multi-class classification, such as [OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html) (not [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html))\n\n\n\n## Stretch Goals\n\n### Reading\n- Jake VanderPlas, [Python Data Science Handbook, Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html), Hyperparameters and Model Validation\n- Jake VanderPlas, [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers?slide=107)\n- Ron Zacharski, [A Programmer's Guide to Data Mining, Chapter 5](http://guidetodatamining.com/chapter5/), 10-fold cross validation\n- Sebastian Raschka, [A Basic Pipeline and Grid Search Setup](https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb)\n- Peter Worcester, [A Comparison of Grid Search and Randomized Search Using Scikit Learn](https://blog.usejournal.com/a-comparison-of-grid-search-and-randomized-search-using-scikit-learn-29823179bc85)\n\n### Doing\n- Add your own stretch goals!\n- Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/). See the previous assignment notebook for details.\n- In additon to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). 
Experiment with these alternatives.\n- _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for \"Grid-Searching Which Model To Use\" in Chapter 6:\n\n> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...\n\nThe example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?\n",
"_____no_output_____"
],
[
"### BONUS: Stacking!\n\nHere's some code you can use to \"stack\" multiple submissions, which is another form of ensembling:\n\n```python\nimport pandas as pd\n\n# Filenames of your submissions you want to ensemble\nfiles = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']\n\ntarget = 'status_group'\nsubmissions = (pd.read_csv(file)[[target]] for file in files)\nensemble = pd.concat(submissions, axis='columns')\nmajority_vote = ensemble.mode(axis='columns')[0]\n\nsample_submission = pd.read_csv('sample_submission.csv')\nsubmission = sample_submission.copy()\nsubmission[target] = majority_vote\nsubmission.to_csv('my-ultimate-ensemble-submission.csv', index=False)\n```",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport category_encoders as ce\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.feature_selection import f_regression, SelectKBest\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.pipeline import make_pipeline\n\nfrom scipy.stats import uniform\nfrom sklearn.model_selection import RandomizedSearchCV\nfrom sklearn.pipeline import make_pipeline",
"_____no_output_____"
],
[
"%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'",
"_____no_output_____"
],
[
"#import pandas as pd\n\n# Merge train_features.csv & train_labels.csv\ntrain = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), \n pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))\n\n# Read test_features.csv & sample_submission.csv\ntest = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')\nsample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')",
"_____no_output_____"
]
],
[
[
"## Assignment\n- [ ] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.\n- [ ] Continue to participate in our Kaggle challenge. \n- [ ] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.\n- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)\n- [ ] Commit your notebook to your fork of the GitHub repo.",
"_____no_output_____"
]
],
[
[
"features = ['basin', 'public_meeting', 'scheme_management', 'permit', 'extraction_type', \n 'management', 'payment', 'water_quality', 'quantity', 'source', 'waterpoint_type']\ntarget = 'status_group'",
"_____no_output_____"
],
[
"# false_zeros = ['longitude', 'latitude', 'construction_year', \n# 'gps_height', 'population']\n# for feature in false_zeros:\n# X[feature] = X[feature].replace(0, np.nan)\n# X[col+'_MISSING'] = X[col].isnull()",
"_____no_output_____"
],
[
"def artpipe(train, test):\n \n X_train, X_valid, y_train, y_valid = train_test_split(train[features], train[target])\n X_test = test[features]\n \n encoder = ce.OrdinalEncoder()\n X_train = encoder.fit_transform(X_train)\n X_valid = encoder.transform(X_valid)\n X_test = encoder.transform(X_test)\n \n model = RandomForestClassifier(n_estimators = 256, n_jobs = -1) #max_depth = 64, \n \n k = 4\n scores = cross_val_score(model, X_train, y_train, cv=k, scoring = 'accuracy')\n print(f'Accuracy for {k} folds:', scores)\n \n model.fit(X_train, y_train)\n \n yv_pred = model.predict(X_valid)\n yt_pred = model.predict(X_test)\n print(accuracy_score(yv_pred, y_valid))\n \n return yt_pred",
"_____no_output_____"
],
[
"submission = sample_submission.copy()\nsubmission['status_group'] = artpipe(train, test)\nsubmission.to_csv('submission.csv', index=False)",
"Accuracy for 4 folds: [0.76703474 0.76324295 0.769956 0.7658944 ]\n0.7685521885521885\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecbd66f0327f00105f560918e83e8dac423b87b8 | 60,844 | ipynb | Jupyter Notebook | labs/2-20/2-20_math_in_scipy_solutions.ipynb | jdmarshl/LS190 | 2b79b02c31006bd60b75b82e0bf8fbc2be3130d6 | [
"MIT"
] | null | null | null | labs/2-20/2-20_math_in_scipy_solutions.ipynb | jdmarshl/LS190 | 2b79b02c31006bd60b75b82e0bf8fbc2be3130d6 | [
"MIT"
] | null | null | null | labs/2-20/2-20_math_in_scipy_solutions.ipynb | jdmarshl/LS190 | 2b79b02c31006bd60b75b82e0bf8fbc2be3130d6 | [
"MIT"
] | null | null | null | 44.639765 | 14,954 | 0.710506 | [
[
[
"# [LEGALST-190] Lab 2-20\n\n\nThis lab will provide an introduction to numpy and scipy library of Python, preparing you for optimization and machine learning.\n\n\n*Estimated Time: 30-40 minutes*\n\n---\n\n### Topics Covered\n- Numpy Array\n- Numpy matrix\n- Local minima/maxima\n- Scipy optimize\n- Scipy integrate\n\n### Table of Contents\n\n1 - [Intro to Numpy](#section 1)<br>\n\n3 - [Maxima and Minima](#section 2)<br>\n\n2 - [Intro to Scipy](#section 3)<br>\n",
"_____no_output_____"
],
[
"## Intro to Numpy <a id='section 1'></a>",
"_____no_output_____"
],
[
"Numpy uses its own data structure, an array, to do numerical computations. The Numpy library is often used in scientific and engineering contexts for doing data manipulation.\n\nFor reference, here's a link to the official [Numpy documentation](https://docs.scipy.org/doc/numpy/reference/routines.html).",
"_____no_output_____"
]
],
[
[
"## An import statement for getting the Numpy library:\nimport numpy as np\n## Also import csv to process the data file (black magic for now):\nimport csv",
"_____no_output_____"
]
],
[
[
"### Numpy Arrays\n\nArrays can hold many different data types, which makes them useful for many different purposes. Here's a few examples.",
"_____no_output_____"
]
],
[
[
"# create an array from a list of integers\nlst = [1, 2, 3]\nvalues = np.array(lst)\nprint(values)\nprint(lst)",
"[1 2 3]\n[1, 2, 3]\n"
],
[
"# nested array\nlst = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\nvalues = np.array(lst)\nprint(values)",
"[[1 2 3]\n [4 5 6]\n [7 8 9]]\n"
]
],
[
[
"What does the below operation do?",
"_____no_output_____"
]
],
[
[
"values > 3",
"_____no_output_____"
]
],
[
[
"**Your answer:** changes all matrix values that are greater than three to 'True', and all other values to 'False'",
"_____no_output_____"
]
],
[
[
"\"\"\"\nHere, we will generate a multidimensional array of zeros. This might be\nuseful as a starting value that could be filled in.\n\"\"\"\nz = np.zeros((10, 2))\nprint(z)",
"[[ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]]\n"
]
],
[
[
"### Matrix\n\nA **matrix** is a rectangular array- in Python, it looks like an array of arrays. We say that a matrix $M$ has shape ** $m$x$n$ **; that is, it has $m$ rows (different smaller arrays inside of it) and $n$ columns (elements in each smaller matrix. \n\nMatrices are used a lot in machine learning to represent sets of features and train models. Here, we'll give you some practice with manipulating them.\n\nThe **identity matrix** is a square matrix (i.e. size $n$x$n$) with all elements on the main diagonal equal to 1 and all other elements equal to zero. Make one below using `np.eye(n)`.",
"_____no_output_____"
]
],
[
[
"# identity matrix I of dimension 4x4\nnp.eye(4)",
"_____no_output_____"
]
],
[
[
"Let's do some matrix manipulation. Here are two sample matrices to use for practice.",
"_____no_output_____"
]
],
[
[
"m1 = np.array([[1, 3, 1], [1, 0, 0]])\nm2 = np.array([[0, 0, 5], [7, 5, 0]])\nprint(\"matrix 1 is:\\n\", m1)\n\nprint(\"matrix 2 is:\\n\", m2)",
"matrix 1 is:\n [[1 3 1]\n [1 0 0]]\nmatrix 2 is:\n [[0 0 5]\n [7 5 0]]\n"
]
],
[
[
"You can add two matrices together if they have the same shape. Add our two sample matrices using the `+` operator.",
"_____no_output_____"
]
],
[
[
"# matrix sum\nm1 + m2",
"_____no_output_____"
]
],
[
[
"A matrix can also be multiplied by a number, also called a **scalar**. Multiply one of the example matrices by a number using the `*` operator and see what it outputs.",
"_____no_output_____"
]
],
[
[
"# scale a matrix\nm1 * 3",
"_____no_output_____"
]
],
[
[
"You can sum all the elements of a matrix using `.sum()`.",
"_____no_output_____"
]
],
[
[
"# sum of all elements in m1\nm1.sum()",
"_____no_output_____"
]
],
[
[
"And you can get the average of the elements with `.mean()`",
"_____no_output_____"
]
],
[
[
"# mean of all elements in m2\nm2.mean()",
"_____no_output_____"
]
],
[
[
"Sometimes it is necessary to **transpose** a matrix to perform operations on it. When a matrix is transposed, its rows become its columns and its columns become its rows. Get the transpose by calling `.T` on a matrix (note: no parentheses)",
"_____no_output_____"
]
],
[
[
"# transpose of m1\nm1.T",
"_____no_output_____"
]
],
[
[
"Other times, you may need to rearrange an array of data into a particular shape of matrix. Below, we've created an array of 16 numbers:",
"_____no_output_____"
]
],
[
[
"H = np.arange(1, 17)\nH",
"_____no_output_____"
]
],
[
[
"Use `.reshape(...)` on H to change its shape. `.reshape(...)` takes two arguments: the first is the desired number of rows, and the second is the desired number of columns. Try changing H to be a 4x4 matrix.\n\nNote: if you try to make H be a 4x3 matrix, Python will error. Why?",
"_____no_output_____"
]
],
[
[
"# make H a 4x4 matrix\nH = H.reshape(4, 4)\nH",
"_____no_output_____"
]
],
[
[
"Next, we'll talk about **matrix multiplication**. First, assign H_t below to be the transpose of H.",
"_____no_output_____"
]
],
[
[
"# assign H_t to the transpose of H\nH_t = H.T\nH_t",
"_____no_output_____"
]
],
[
[
"The [matrix product](https://en.wikipedia.org/wiki/Matrix_multiplication#Matrix_product_.28two_matrices.29) get used a lot in optimization problems, among other things. It takes two matrices (one $m$x$n$, one $n$x$p$) and returns a matrix of size $m$x$p$. For example, the product of a 2x3 matrix and a 3x4 matrix is a 2x4 matrix (click the link for a visualization of what goes on with each individual element).\n\nYou can use the matrix product in numpy with `matrix1.dot(matrix2)` or `matrix1 @ matrix2`.\n\nNote: to use the matrix product, the two matrices must have the same number of elements and the number of *rows* in the first matrix must equal the number of *columns* in the second. This is why it's important to know how to reshape and transpose matrices!\n\nA property of the matrix product is that the product of a matrix and the identity matrix is just the first matrix. Check that that is the case below for the matrix `H`.",
"_____no_output_____"
]
],
[
[
"# matrix product\nI = np.eye(4)\n# a matrix m's matrix product with the identity matrix is matrix m\nH.dot(I)",
"_____no_output_____"
]
],
[
[
"Note that we keep using the term 'product', but we don't use the `*` operator. Try using `*` to multiply `H` and `I` together.",
"_____no_output_____"
]
],
[
[
"# matrix multiplication\nH * I",
"_____no_output_____"
]
],
[
[
"How is the matrix product different from simply multiplying two matrices together?\n",
"_____no_output_____"
],
[
"**YOUR ANSWER:** The matrix product does row-by-column products and summation (i.e. the dot product). Using `*` in numpy does element-wise multiplication (e.g. element i, j in the first matrix is multiplied by element i, j of the second).",
"_____no_output_____"
],
[
"#### Matrix inverse\n#### Theorem: the product of a matrix m and its inverse is an identity matrix\n\nUsing the above theorem, to solve for x in Ax=B where A and B are matrices, what do we want to multiply both sides by?",
"_____no_output_____"
],
[
"Your answer here: $A^{-1}$",
"_____no_output_____"
],
[
"You can get the inverse of a matrix with `np.linalg.inv(my_matrix)`. Try it in the cell below.\n\nNote: not all matrices are invertible.",
"_____no_output_____"
]
],
[
[
"\nm3 = np.array([[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]])\n\n# calculate the inverse of m3\nm3_inverse = np.linalg.inv(m3)\n\nprint(\"matrix m3:\\n\", m3)\nprint(\"\\ninverse matrix m3:\\n\", m3_inverse)",
"matrix m3:\n [[1 0 0 0]\n [0 2 0 0]\n [0 0 3 0]\n [0 0 0 4]]\n\ninverse matrix m3:\n [[ 1. 0. 0. 0. ]\n [ 0. 0.5 0. 0. ]\n [ 0. 0. 0.33333333 0. ]\n [ 0. 0. 0. 0.25 ]]\n"
],
[
"# do we get the identity matrix?\nm3_inverse.dot(m3)",
"_____no_output_____"
]
],
[
[
"#### exercise\nIn machine learning, we often try to predict a value or category given a bunch of data. The essential model looks like this:\n$$ \\large\nY = X^T \\theta\n$$\nWhere $Y $ is the predicted values (a vector with one value for every row of X)), $X$ is a $m$x$n$ matrix of data, and $\\theta$ (the Greek letter 'theta') is a **parameter** (an $n$-length vector). For example, X could be a matrix where each row represents a person, and it has two columns: height and age. To use height and age to predict a person's weight (our $y$), we could multiply the height and the age by different numbers ($\\theta$) then add them together to make a prediction($y$).\n\nThe fundamental problem in machine learning is often how to choose the best $\\theta$. Using linear algebra, we can show that the optimal theta is:\n$$\\large\n \\hat{\\theta{}} = \\left(X^T X\\right)^{-1} X^T Y\n$$\n\nYou now know all the functions needed to find theta. Use transpose, inverse, and matrix product operations to calculate theta using the equation above and the X and y data given below.",
"_____no_output_____"
]
],
[
[
"# example real values (the numbers 0 through 50 with random noise added)\ny = np.arange(50)+ np.random.normal(scale = 10,size=50)\n\n# example data\nx = np.array([np.arange(50)]).T\n\n# add a column of ones to represent an intercept term\nX = np.hstack([x, np.ones(x.shape)])\n\n# find the best theta\ntheta = np.linalg.inv(X.T @ X) @ X.T @ y\ntheta",
"_____no_output_____"
]
],
[
[
"In this case, our X is a matrix where the first column has values representing a feature, and the second column is entirely ones to represent an intercept term. This means our theta is a vector [m, b] for the equation y=mx[0]+b, which you might recognize from algebra as the equation for a line. Let's see how well our predictor line fits the data.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline\n\n#plot the data\nplt.scatter(x.T,y)\n\n#plot the fit line\nplt.plot(x.T[0], X @ theta);",
"_____no_output_____"
]
],
[
[
"Not bad!\n\nWhile it's good to know what computation goes into getting optimal parameters, it's also good that scipy has a function that will take in an X and a y and return the best theta. Run the cell below to use scikit-learn to estimate the parameters. It should output values very near to the ones you found. We'll learn how to use scikit-learn in the next lab!",
"_____no_output_____"
]
],
[
[
"# find optimal parameters for linear regression\nfrom sklearn import linear_model\n\nlin_reg = linear_model.LinearRegression(fit_intercept=True)\nlin_reg.fit(x, y)\nprint(lin_reg.coef_[0], lin_reg.intercept_)",
"1.12378720083 -3.93661918814\n"
]
],
[
[
"## Maxima and Minima <a id='section 2'></a>",
"_____no_output_____"
],
[
"The extrema of a function are the largest value (maxima) and smallest value (minima) of the function.\n\nWe say that f(a) is a **local maxima** if $f(a)\\geq f(x)$ when x is near a.\n\nWe say that f(a) is a **local minima** if $f(a)\\leq f(x)$ when x is near a.",
"_____no_output_____"
],
[
"Global vs local extrema (credit: Wikipedia)",
"_____no_output_____"
],
[
"<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Extrema_example_original.svg/440px-Extrema_example_original.svg.png\" style=\"width: 500px; height: 275px;\" />",
"_____no_output_____"
],
[
"By looking at the diagram , how are local maxima and minima of a function related to its derivative?",
"_____no_output_____"
],
[
"**YOUR ANSWER**: Local minima and maxima occur when the derivative is zero- i.e. when the slope is zero, or when the tangent line is horizontal.",
"_____no_output_____"
],
[
"Are global maxima also local maixma? Are local maxima global maxima?",
"_____no_output_____"
],
[
"**YOUR ANSWER**: Yes, global maxima are also local maxima. \n\nNo, a local maxima may not be a global maxima.",
"_____no_output_____"
],
[
"## Intro to Scipy <a id='section 3'></a>",
"_____no_output_____"
],
[
"### Optimize",
"_____no_output_____"
],
[
"Scipy.optimize is a package that provides several commonly used optimization algorithms. Today we'll learn minimize.",
"_____no_output_____"
],
[
"insert concepts about local minima",
"_____no_output_____"
]
],
[
[
"# importing minimize function\nfrom scipy.optimize import minimize",
"_____no_output_____"
]
],
[
[
"Let's define a minimization problem:\n\nminimize $x_1x_4(x_1+x_2+x_3)+x_3$ under the conditions:\n1. $x_1x_2x_3x_4\\geq 25$\n2. $x_1+x_2+x_3+2x_4 = 14$\n3. $1\\leq x_1,x_2,x_3,x_4\\leq 5$",
"_____no_output_____"
],
[
"Hmmm, looks fairly complicated, but don't worry, scipy's got it",
"_____no_output_____"
]
],
[
[
"# let's define our function\ndef objective(x):\n x1 = x[0]\n x2 = x[1]\n x3 = x[2]\n x4 = x[3]\n return x1*x4*(x1+x2+x3)+x3",
"_____no_output_____"
],
[
"# define constraints\ndef con1(x):\n return x[0]*x[1]*x[2]*x[3] - 25\ndef con2(x):\n return 14 - x[0] - x[1] - x[2] - 2*x[3]\n\nconstraint1 = {'type': 'ineq', 'fun': con1} # constraint 1 is an inequality constraint\nconstraint2 = {'type': 'eq', 'fun': con2} # constraint 2 is an equality constraint\n\ncons = [constraint1, constraint2]",
"_____no_output_____"
],
[
"# define bounds\nbound = (1, 5)\nbnds = (bound, bound, bound, bound) #the same bound applies to all four variables",
"_____no_output_____"
],
[
"# We need to supply initial values as a starting point for minimize function\nx0 = [3, 4, 2, 3]\nprint(objective(x0))",
"83\n"
]
],
[
[
"Overall, we defined objective function, constraints, bounds, and initial values. Let's get to work.\n\nWe'll use Sequential Least Squares Programming optimization algorithm (SLSQP)",
"_____no_output_____"
]
],
[
[
"solution = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)",
"_____no_output_____"
],
[
"print(solution)",
" fun: 21.49999999999912\n jac: array([ 18.00000024, 1.5 , 2.5 , 11. ])\n message: 'Optimization terminated successfully.'\n nfev: 18\n nit: 3\n njev: 3\n status: 0\n success: True\n x: array([ 1. , 5. , 5. , 1.5])\n"
],
[
"# Display optimal values of each variable\nsolution.x",
"_____no_output_____"
]
],
[
[
"#### exercise\nFind the optimal solution to the following problem:\n\nminimize $x_1^2+x_2^2+x_3^2$, under conditions:\n1. $x_1 + x_2\\geq 6$\n2. $x_3 + 2x_2\\geq 4$\n3. $1.5\\leq x_1, x_2, x_3\\leq 8$\n\nTip: 3**2 gives square of 3",
"_____no_output_____"
]
],
[
[
"def func(x):\n x1 = x[0]\n x2 = x[1]\n x3 = x[2]\n return x1**2 + x2**2 + x3**2\ndef newcon1(x):\n return x[0] + x[1] - 6\ndef newcon2(x):\n return x[2] + 2*x[1] - 4",
"_____no_output_____"
]
],
[
[
"Take note of scipy's documentation on constraints:\n\n> \"Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative.\"",
"_____no_output_____"
]
],
[
[
"newcons1 = {'type': 'ineq', 'fun': newcon1}\nnewcons2 = {'type': 'ineq', 'fun': newcon2}\nnewcons = [newcons1, newcons2]\nbd = (1.5, 8)\nbds = (bd, bd, bd)\nnewx0 = [1, 4, 3]\n\n\nsum_square_solution = minimize(func, newx0, method='SLSQP', bounds=bds, constraints=newcons)\nsum_square_solution",
"_____no_output_____"
]
],
[
[
"### Integrate",
"_____no_output_____"
],
[
"scipy.integrate.quad is a function that tntegrates a function from a to b using a technique from QUADPACK library.",
"_____no_output_____"
]
],
[
[
"# importing integrate package\nfrom scipy import integrate",
"_____no_output_____"
],
[
"# define a simple function\ndef f(x):\n return np.sin(x)",
"_____no_output_____"
],
[
"# integrate sin from 0 to pi\nintegrate.quad(f, 0, np.pi)",
"_____no_output_____"
]
],
[
[
"Our quad function returned two results, first one is the result, second one is an estimate of the absolute error",
"_____no_output_____"
],
[
"#### exercise\nFind the integral of $x^2 + x$ from 3 to 10",
"_____no_output_____"
]
],
[
[
"#define the function\ndef f1(x):\n return x ** 2 + x\n\n\n#find the integral\nintegrate.quad(f1, 3, 10)",
"_____no_output_____"
]
],
[
[
"#### Integrate a normal distribution",
"_____no_output_____"
]
],
[
[
"# let's create a normal distribution with mean 0 and standard deviation 1 by simpy running the cell\nmu, sigma = 0, 1\ns = np.random.normal(mu, sigma, 100000)\n\nimport matplotlib.pyplot as plt\ncount, bins, ignored = plt.hist(s, 30, normed=True)\nplt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')\nplt.show()",
"_____no_output_____"
],
[
"# importing normal d\nfrom scipy.stats import norm",
"_____no_output_____"
]
],
[
[
"CDF is cumulative distribution function. CDF(x) is the probability that a normal distribution takes on value less than or equal to x.\n\nFor a standard normal distribution, what would CDF(0) be? (Hint: how is CDF related to p-values or confidence intervals?)",
"_____no_output_____"
],
[
"0.5",
"_____no_output_____"
],
[
"Run the cell below to confirm your answer",
"_____no_output_____"
]
],
[
[
"norm.cdf(0)",
"_____no_output_____"
]
],
[
[
"Using the cdf, integrate the normal distribution from -0.5 to 0.5",
"_____no_output_____"
]
],
[
[
"norm.cdf(0.5)",
"_____no_output_____"
]
],
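[
[
"# Optional cross-check (added for illustration): the CDF difference above should match\n# integrating the normal PDF from -0.5 to 0.5 with quad from earlier in the notebook.\nfrom scipy import integrate\nfrom scipy.stats import norm\nprint(integrate.quad(norm.pdf, -0.5, 0.5)[0])\nprint(norm.cdf(0.5) - norm.cdf(-0.5))",
"_____no_output_____"
]
],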
[
[
"---\nNotebook developed by: Tian Qin\n\nData Science Modules: http://data.berkeley.edu/education/modules\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecbd6d0851af7bca7e5783b49a2f3aaf1f76bdc8 | 642,500 | ipynb | Jupyter Notebook | ch_01/introduction_to_data_analysis.ipynb | MykolaKlishch/Hands-On-Data-Analysis-with-Pandas-2nd-edition | 1dffee1dfb9867887f51e6dacbe04089e7edb6ac | [
"MIT"
] | null | null | null | ch_01/introduction_to_data_analysis.ipynb | MykolaKlishch/Hands-On-Data-Analysis-with-Pandas-2nd-edition | 1dffee1dfb9867887f51e6dacbe04089e7edb6ac | [
"MIT"
] | null | null | null | ch_01/introduction_to_data_analysis.ipynb | MykolaKlishch/Hands-On-Data-Analysis-with-Pandas-2nd-edition | 1dffee1dfb9867887f51e6dacbe04089e7edb6ac | [
"MIT"
] | null | null | null | 954.680535 | 126,424 | 0.95294 | [
[
[
"# Introduction to Data Analysis\nThis notebook serves as a summary of the fundamentals covered in chapter 1. For a Python crash-course/refresher, work through the [`python_101.ipynb`](./python_101.ipynb) notebook.\n\n## Setup",
"_____no_output_____"
]
],
[
[
"from visual_aids import stats_viz",
"_____no_output_____"
]
],
[
[
"## Fundamentals of data analysis\nWhen conducting a data analysis, we will move back and forth between four main processes:\n\n- **Data Collection**: Every analysis starts with collecting data. We can collect data from a variety of sources, including databases, APIs, flat files, and the Internet.\n- **Data Wrangling**: After we have our data, we need to prepare it for our analysis. This may involve reshaping it, changing data types, handling missing values, and/or aggregating it. \n- **Exploratory Data Analysis (EDA)**: We can use visualizations to explore our data and summarize it. During this time, we will also begin exploring the data by looking at its structure, format, and summary statistics.\n- **Drawing Conclusions**: After we have thoroughly explored our data, we can try to draw conclusions or model it.\n\n## Statistical Foundations\nAs this is not a statistics book, we will discuss the concepts we will need to work through the book, in addition to some avenues for further exploration. By no means is this exhaustive.\n\n### Sampling\nSome resampling (sampling from the sample) techniques we will see throughout the book, especially for the chapters on machine learning (9-11):\n- **simple random sampling**: pick with a random number generator\n- **stratified random sampling**: randomly pick preserving the proportion of groups in the data\n- **bootstrapping**: sampling with replacement (more info: [YouTube video](https://www.youtube.com/watch?v=gcPIyeqymOU) and [Wikipedia article](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)))\n\n### Descriptive Statistics\nWe use descriptive statistics to describe the data. The data we work with is usually a **sample** taken from the **population**. The statistics we will discuss here are referred to as **sample statistics** because they are calculated on the sample and can be used as estimators for the population parameters.\n\n#### Measures of Center\nThree common ways to describe the central tendency of a distribution are mean, median, and mode.\n##### Mean\nThe sample mean is an estimator for the population mean ($\\mu$) and is defined as: \n\n$$\\bar{x} = \\frac{\\sum_{1}^{n} x_i}{n}$$\n##### Median\nThe median represents the 50<sup>th</sup> percentile of our data; this means that 50% of the values are greater than the median and 50% are less than the median. It is calculated by taking the middle value from an ordered list of values.\n\n##### Mode\nThe mode is the most common value in the data. We can use it to describe categorical data or, for continuous data, the shape of the distribution:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.different_modal_plots()",
"_____no_output_____"
]
],
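[
[
"# Added illustration (not from the book's visual aids): bootstrapping, mentioned in the\n# sampling section above, is just resampling with replacement, which numpy does directly.\nimport numpy as np\nrng = np.random.default_rng(0)\nsample = np.array([2, 4, 4, 5, 7, 9])\nbootstrap_resample = rng.choice(sample, size=sample.size, replace=True)\nbootstrap_resample",
"_____no_output_____"
]
],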
[
[
"#### Measures of Spread\nMeasures of spread tell us how the data is dispersed; this will indicate how thin (low dispersion) or wide (very spread out) our distribution is.\n\n##### Range\nThe range is the distance between the smallest value (minimum) and the largest value (maximum):\n\n$$range = max(X) - min(X)$$\n\n##### Variance\nThe variance describes how far apart observations are spread out from their average value (the mean). When calculating the sample variance, we divide by *n - 1* instead of *n* to account for using the sample mean ($\\bar{x}$):\n\n$$s^2 = \\frac{\\sum_{1}^{n} (x_i - \\bar{x})^2}{n - 1}$$\n\nThis is referred to as Bessel's correction and is applied to get an unbiased estimator of the population variance. \n\n*Note that this will be in units-squared of whatever was being measured.*\n\n##### Standard Deviation\nThe standard deviation is the square root of the variance, giving us a measure in the same units as our data. The sample standard deviation is calculated as follows:\n\n$$s = \\sqrt{\\frac{\\sum_{1}^{n} (x_i - \\bar{x})^2}{n - 1}} = \\sqrt{s^2}$$",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.effect_of_std_dev()",
"_____no_output_____"
]
],
[
[
"*Note that $\\sigma^2$ is the population variance and $\\sigma$ is the population standard deviation.*\n\n##### Coefficient of Variation\nThe coefficient of variation (CV) gives us a unitless ratio of the standard deviation to the mean. Since, it has no units we can compare dispersion across datasets:\n\n$$CV = \\frac{s}{\\bar{x}}$$\n\n##### Interquartile Range\nThe interquartile range (IQR) gives us the spread of data around the median and quantifies how much dispersion we have in the middle 50% of our distribution:\n\n$$IQR = Q_3 - Q_1$$\n\n##### Quartile Coefficient of Dispersion\nThe quartile coefficient of dispersion also is a unitless statistic for comparing datasets. However, it uses the median as the measure of center. It is calculated by dividing the semi-quartile range (half the IQR) by the midhinge (midpoint between the first and third quartiles):\n\n$$QCD = \\frac{\\frac{Q_3 - Q_1}{2}}{\\frac{Q_1 + Q_3}{2}} = \\frac{Q_3 - Q_1}{Q_3 + Q_1}$$\n\n#### Summarizing data\nThe **5-number summary** provides 5 descriptive statistics that summarize our data:\n\n| | Quartile | Statistic | Percentile |\n| --- | --- | --- | --- |\n|1.|$Q_0$|minimum|$0^{th}$|\n|2.|$Q_1$|N/A|$25^{th}$|\n|3.|$Q_2$|median|$50^{th}$|\n|4.|$Q_3$|N/A|$75^{th}$|\n|5.|$Q_4$|maximum|$100^{th}$|\n\nThis summary can be visualized using a **box plot** (also called box-and-whisker plot). The box has an upper bound of $Q_3$ and a lower bound of $Q_1$. The median will be a line somewhere in this box. The whiskers extend from the box towards the minimum/maximum. For our purposes, they will extend to $Q_3 + 1.5 \\times IQR$ and $Q_1 - 1.5 \\times IQR$ and anything beyond will be represented as individual points for outliers:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.example_boxplot()",
"_____no_output_____"
]
],
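[
[
"# Added illustration with made-up numbers: computing the measures of spread and the\n# 5-number summary defined above directly with numpy.\nimport numpy as np\nx = np.array([1.0, 3.0, 5.0, 5.0, 9.0])\nvalue_range = x.max() - x.min()\nsample_var = x.var(ddof=1)  # ddof=1 applies Bessel's correction\nsample_std = x.std(ddof=1)\ncv = sample_std / x.mean()\nq0, q1, q2, q3, q4 = np.percentile(x, [0, 25, 50, 75, 100])\niqr = q3 - q1\nqcd = (q3 - q1) / (q3 + q1)\nvalue_range, sample_var, sample_std, cv, (q0, q1, q2, q3, q4), iqr, qcd",
"_____no_output_____"
]
],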
[
[
"The box plot doesn't show us how the data is distributed within the quartiles. To get a better sense of the distribution, we can use a **histogram**, which will show us the amount of observations that fall into equal-width bins. We can vary the number of bins to use, but be aware that this can change our impression of what the distribution appears to be:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.example_histogram()",
"_____no_output_____"
]
],
[
[
"We can also visualize the distribution using a **kernel density estimate (KDE)**. This will estimate the **probability density function (PDF)**. This function shows how probability is distributed over the values. Higher values of the PDF mean higher likelihoods:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.example_kde()",
"_____no_output_____"
]
],
[
[
"Note that both the KDE and histogram estimate the distribution:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.hist_and_kde()",
"_____no_output_____"
]
],
[
[
"**Skewed distributions** have more observations on one side. The mean will be less than the median with negative skew, while the opposite is true of positive skew:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.skew_examples()",
"_____no_output_____"
]
],
[
[
"We can use the **cumulative distribution function (CDF)** to find probabilities of getting values within a certain range. The CDF is the integral of the PDF:\n\n$$CDF = F(x) = \\int_{-\\infty}^{x} f(t) dt$$\n \n*Note that $f(t)$ is the PDF and $\\int_{-\\infty}^{\\infty} f(t) dt = 1$.*\n\nThe probability of the random variable $X$ being less than or equal to the specific value of $x$ is denoted as $P(X ≤ x)$. Note that for a continuous random variable the probability of it being exactly $x$ is zero.\n\nLet's look at the estimate of the CDF from the sample data we used for the box plot, called the **empirical cumulative distribution function (ECDF)**:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.cdf_example()",
"_____no_output_____"
]
],
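[
[
"# Added sketch: an ECDF can be computed by sorting the sample and assigning each ordered\n# value the fraction of observations at or below it.\nimport numpy as np\nsample = np.array([2.0, 4.0, 4.0, 5.0, 7.0, 9.0])\nxs = np.sort(sample)\necdf = np.arange(1, xs.size + 1) / xs.size\nlist(zip(xs, ecdf))",
"_____no_output_____"
]
],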
[
[
"*We can find any range we want if we use some algebra as in the rightmost subplot above.*\n\n#### Common Distributions\n- **Gaussian (normal) distribution**: looks like a bell curve and is parameterized by its mean (μ) and standard deviation (σ). Many things in nature happen to follow the normal distribution, like heights. Note that testing if a distribution is normal is not trivial. Written as $N(\\mu, \\sigma)$.\n- **Poisson distribution**: discrete distribution that is often used to model arrivals. Parameterized by its mean, lambda (λ). Written as $Pois(\\lambda)$.\n- **Exponential distribution**: can be used to model the time between arrivals. Parameterized by its mean, lambda (λ). Written as $Exp(\\lambda)$.\n- **Uniform distribution**: places equal likelihood on each value within its bounds (*a* and *b*). We often use this for random number generation. Written as $U(a, b)$.\n- **Bernoulli distribution**: When we pick a random number to simulate a single success/failure outcome, it is called a Bernoulli trial. This is parameterized by the probability of success (*p*). Written as $Bernoulli(p)$.\n- **Binomial distribution**: When we run the same experiment *n* times, the total number of successes is then a binomial random variable. Written as $B(n, p)$.\n\nWe can visualize both discrete and continuous distributions; however, discrete distributions give us a **probability mass function** (**PMF**) instead of a PDF:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.common_dists()",
"_____no_output_____"
]
],
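[
[
"# Added illustration: drawing samples from the distributions listed above with numpy's\n# Generator (parameter names follow numpy, e.g. scale = 1/lambda for the exponential).\nimport numpy as np\nrng = np.random.default_rng(0)\n(rng.normal(loc=0, scale=1, size=3),\n rng.poisson(lam=3, size=3),\n rng.exponential(scale=1/3, size=3),\n rng.uniform(low=0, high=10, size=3),\n rng.binomial(n=10, p=0.5, size=3))  # n=1 would give Bernoulli draws",
"_____no_output_____"
]
],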
[
[
"#### Scaling data\nIn order to compare variables from different distributions, we would have to scale the data, which we could do with the range by using **min-max scaling**:\n\n$$x_{scaled}=\\frac{x - min(X)}{range(X)}$$\n\nAnother way is to use a **Z-score** to standardize the data:\n\n$$z_i = \\frac{x_i - \\bar{x}}{s}$$\n\n#### Quantifying relationships between variables\nThe **covariance** is a statistic for quantifying the relationship between variables by showing how one variable changes with respect to another (also referred to as their joint variance):\n\n$$cov(X, Y) = E[(X-E[X])(Y-E[Y])]$$\n\n*E[X] is the expectation of the random variable X (its long-run average).*\n\nThe sign of the covariance gives us the direction of the relationship, but we need the magnitude as well. For that, we calculate the **Pearson correlation coefficient** ($\\rho$):\n\n$$\\rho_{X, Y} = \\frac{cov(X, Y)}{s_X s_Y}$$\n\nExamples:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.correlation_coefficient_examples()",
"_____no_output_____"
]
],
[
[
"*From left to right: no correlation, weak negative correlation, strong positive correlation, and nearly perfect negative correlation.*\n\nOften, it is more informative to use scatter plots to check for relationships between variables. This is because the correlation may be strong, but the relationship may not be linear:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.non_linear_relationships()",
"_____no_output_____"
]
],
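[
[
"# Added illustration of the scaling formulas and the Pearson correlation coefficient\n# from the section above, using small made-up arrays.\nimport numpy as np\nx = np.array([1.0, 3.0, 5.0, 5.0, 9.0])\ny = np.array([2.0, 5.0, 9.0, 11.0, 18.0])\nmin_max_scaled = (x - x.min()) / (x.max() - x.min())\nz_scores = (x - x.mean()) / x.std(ddof=1)\npearson_r = np.corrcoef(x, y)[0, 1]\nmin_max_scaled, z_scores, pearson_r",
"_____no_output_____"
]
],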
[
[
"Remember, **correlation does not imply causation**. While we may find a correlation between X and Y, it does not mean that X causes Y or Y causes X. It is possible there is some Z that causes both or that X causes some intermediary event that causes Y — it could even be a coincidence. Be sure to check out Tyler Vigen's [Spurious Correlations blog](https://www.tylervigen.com/spurious-correlations) for some interesting correlations.\n\n#### Pitfalls of summary statistics\nNot only can our correlation coefficients be misleading, but so can summary statistics. Anscombe's quartet is a collection of four different datasets that have identical summary statistics and correlation coefficients, however, when plotted, it is obvious they are not similar:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.anscombes_quartet()",
"_____no_output_____"
]
],
[
[
"Another example of this is the [Datasaurus Dozen](https://www.autodeskresearch.com/publications/samestats):",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.datasaurus_dozen()",
"_____no_output_____"
]
],
[
[
"### Prediction and forecasting\nSay our favorite ice cream shop has asked us to help predict how many ice creams they can expect to sell on a given day. They are convinced that the temperature outside has strong influence on their sales, so they collected data on the number of ice creams sold at a given temperature. We agree to help them, and the first thing we do is make a scatter plot of the data they gave us:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.example_scatter_plot()",
"_____no_output_____"
]
],
[
[
"We can observe an upward trend in the scatter plot: more ice creams are sold at higher temperatures. In order to help out the ice cream shop, though, we need to find a way to make predictions from this data. We can use a technique called **regression** to model the relationship between temperature and ice cream sales with an equation:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.example_regression()",
"_____no_output_____"
]
],
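[
[
"# Added sketch with made-up numbers (not the ice cream data): a least-squares line fit\n# with numpy gives the slope and intercept behind this kind of prediction.\nimport numpy as np\ntemperature = np.array([15.0, 20.0, 25.0, 30.0, 35.0])\nice_creams_sold = np.array([4.0, 12.0, 18.0, 26.0, 31.0])\nslope, intercept = np.polyfit(temperature, ice_creams_sold, deg=1)\npredicted_at_28 = slope * 28 + intercept  # interpolating inside the observed range\nslope, intercept, predicted_at_28",
"_____no_output_____"
]
],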
[
[
"We can use the resulting equation to make predictions for the number of ice creams sold at various temperatures. However, we must keep in mind if we are interpolating or extrapolating. If the temperature value we are using for prediction is within the range of the original data we used to build our regression model, then we are **interpolating** (solid portion of the red line). On the other hand, if the temperature is beyond the values in the original data, we are **extrapolating**, which is very dangerous, since we can't assume the pattern continues indefinitely in each direction (dotted portion of the line). Extremely hot temperatures may cause people to stay inside, meaning no ice creams will be sold, while the equation indicates record-high sales. \n\nForecasting is a type of prediction for time series. In a process called **time series decomposition**, time series is decomposed into a trend component, a seasonality component, and a cyclical component. These components can be combined in an additive or multiplicative fashion:",
"_____no_output_____"
]
],
[
[
"ax = stats_viz.time_series_decomposition_example()",
"_____no_output_____"
]
],
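[
[
"# Added sketch: the moving-average and exponential-smoothing ideas discussed next can be\n# computed with pandas on a toy series (the window and alpha values are arbitrary here).\nimport pandas as pd\nseries = pd.Series([3, 4, 6, 5, 8, 9, 12, 11])\nmoving_avg = series.rolling(window=3).mean()\nexp_smoothed = series.ewm(alpha=0.5).mean()\npd.DataFrame({'original': series, 'moving_avg': moving_avg, 'exp_smoothed': exp_smoothed})",
"_____no_output_____"
]
],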
[
[
"The **trend** component describes the behavior of the time series in the long-term without accounting for the seasonal or cyclical effects. Using the trend, we can make broad statements about the time series in the long-run, such as: *the population of Earth is increasing* or *the value of a stock is stagnating*. **Seasonality** of a time series explains the systematic and calendar-related movements of a time series. For example, the number of ice cream trucks on the streets of New York City is high in the summer and drops to nothing in the winter; this pattern repeats every year regardless of whether the actual amount each summer is the same. Lastly, the **cyclical** component accounts for anything else unexplained or irregular with the time series; this could be something like a hurricane driving the number of ice cream trucks down in the short-term because it isn't safe to be outside. This component is difficult to anticipate with a forecast due to its unexpected nature.\n\nWhen making models to forecast time series, some common methods include ARIMA-family methods and exponential smoothing. **ARIMA** stands for autoregressive (AR), integrated (I), moving average (MA). Autoregressive models take advantage of the fact that an observation at time $t$ is correlated to a previous observation, for example at time $t - 1$. Note that not all time series are autoregressive. The integrated component concerns the differenced data, or the change in the data from one time to another. Lastly, the moving average component uses a sliding window to average the last $x$ observations where $x$ is the length of the sliding window. We will build an ARIMA model in chapter 7.\n\nThe moving average puts equal weight on each time period in the past involved in the calculation. In practice, this isn't always a realistic expectation of our data. Sometimes all past values are important, but they vary in their influence on future data points. For these cases, we can use exponential smoothing, which allows us to put more weight on more recent values and less weight on values further away from what we are predicting.\n\n### Inferential Statistics\nInferential statistics deals with inferring or deducing things from the sample data we have in order to make statements about the population as a whole. Before doing so, we need to know whether we conducted an observational study or an experiment. An observational study can't be used to determine causation because we can't control for everything. An experiment on the other hand is controlled.\n\nRemember that the sample statistics we discussed earlier are estimators for the population parameters. Our estimators need **confidence intervals**, which provide a point estimate and a margin of error around it. This is the range that the true population parameter will be in at a certain **confidence level**. At the 95% confidence level, 95% of the confidence intervals calculated from random samples of the population contain the true population parameter.\n\nWe also have the option of using **hypothesis testing**. First, we define a null hypothesis (say the true population mean is 0), then we determine a **significance level** (1 - confidence level), which is the probability of rejecting the null hypothesis when it is true. Our result is statistically significant if the value for the null hypothesis is outside the confidence interval. 
[More info](https://statisticsbyjim.com/hypothesis-testing/hypothesis-tests-confidence-intervals-levels/).\n\n<hr>\n\n<div style=\"overflow: hidden; margin-bottom: 10px;\">\n <div style=\"float: left;\">\n <a href=\"./checking_your_setup.ipynb\">\n <button>Check your setup</button>\n </a>\n <a href=\"./python_101.ipynb\">\n <button>Python 101</button>\n </a>\n </div>\n <div style=\"float: right;\">\n <a href=\"./exercises.ipynb\">\n <button>Exercises</button>\n </a>\n <a href=\"../ch_02/1-pandas_data_structures.ipynb\">\n <button>Chapter 2 →</button>\n </a>\n </div>\n</div>\n<hr>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecbd728022137016d91a9598293645b499ef15fb | 44,101 | ipynb | Jupyter Notebook | .ipynb_checkpoints/climate_starter-checkpoint.ipynb | merrb/sqlalchemy-challenge | 46fa6dce0e0081051334d0045a387b3b98df3407 | [
"ADSL"
] | null | null | null | .ipynb_checkpoints/climate_starter-checkpoint.ipynb | merrb/sqlalchemy-challenge | 46fa6dce0e0081051334d0045a387b3b98df3407 | [
"ADSL"
] | null | null | null | .ipynb_checkpoints/climate_starter-checkpoint.ipynb | merrb/sqlalchemy-challenge | 46fa6dce0e0081051334d0045a387b3b98df3407 | [
"ADSL"
] | null | null | null | 112.502551 | 21,980 | 0.878325 | [
[
[
"%matplotlib inline\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport datetime as dt",
"_____no_output_____"
]
],
[
[
"# Reflect Tables into SQLAlchemy ORM",
"_____no_output_____"
]
],
[
[
"# Python SQL toolkit and Object Relational Mapper\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, func",
"_____no_output_____"
],
[
"# create engine to hawaii.sqlite\nengine = create_engine(\"sqlite:///resources/hawaii.sqlite\")\n#engine = create_engine(\"sqlite:///resources/hawaii_measurement\")\n#engine = create_engine(\"sqlite:///resources/hawaii_station\")",
"_____no_output_____"
],
[
"# reflect an existing database into a new model\nBase = automap_base()\n# reflect the tables\nBase.prepare(engine, reflect=True)",
"_____no_output_____"
],
[
"# View all of the classes that automap found\n",
"_____no_output_____"
],
[
"# Save references to each table\nMeasurement = Base.classes.measurement\nStation = Base.classes.station\nsession = Session(engine)\n",
"_____no_output_____"
],
[
"# Create our session (link) from Python to the DB\nsession = Session(engine)",
"_____no_output_____"
]
],
[
[
"# Exploratory Precipitation Analysis",
"_____no_output_____"
]
],
[
[
"# Find the most recent date in the data set.\n# Calculate the date one year from the last date in data set\n#station,name,latitude,longitude,elevation\n#station,date,prcp,tobs\ncurrent_date=session.query(Measurement.date).\\\norder_by(Measurement.date.desc()).first()\n\nfor date in current_date:\n split_current_date=date.split('-')\n \nsplit_previous_date\ncurrent_year=int(split_current_date[0]); last_month=int(split_current_date[1]); last_day=int(split_current_date[2])",
"_____no_output_____"
],
[
"# Design a query to retrieve the last 12 months of precipitation data and plot the results\nquery_date = dt.date(last_year, last_month, last_day) - dt.timedelta(days=365)\nprint(query_date)\n\n# Starting from the most recent data point in the database. \n \n\n\n# Perform a query to retrieve the data and precipitation scores\nlast_12months_prcp=session.query(Measurement.date, Measurement.prcp).\\\nfilter(Measurement.date>=query_date).\\\norder_by(Measurement.date).all() \n\n\n# Save the query results as a Pandas DataFrame and set the index to the date column\nlast_12months_prcp=pd.DataFrame(last_12months_prcp,columns=['date', 'prcp'])\nlast_12months_prcp.set_index('date', inplace=True)\n\n# Sort the dataframe by date\nlast_12months_prcp.head()\n\n# Use Pandas Plotting with Matplotlib to plot the data\n\nprint(last_12months_prcp)\n",
"2016-08-23\n prcp\ndate \n2016-08-23 0.00\n2016-08-23 0.15\n2016-08-23 0.05\n2016-08-23 NaN\n2016-08-23 0.02\n... ...\n2017-08-22 0.50\n2017-08-23 0.00\n2017-08-23 0.00\n2017-08-23 0.08\n2017-08-23 0.45\n\n[2230 rows x 1 columns]\n"
],
[
"# Use Pandas to calcualte the summary statistics for the precipitation data\nlast_12months_prcp.plot()\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"# Exploratory Station Analysis",
"_____no_output_____"
]
],
[
[
"# Design a query to calculate the total number stations in the dataset\ntotal_stations=session.query(Measurement.station).group_by(Measurement.station).count()\nprint(total_stations)",
"9\n"
],
[
"# Design a query to find the most active stations (i.e. what stations have the most rows?)\n# List the stations and the counts in descending order.\nact_station=session.query(Measurement.station, func.count(Measurement.station)).group_by(Measurement.station). \\\norder_by(func.count(Measurement.station).desc()).all()\nprint(act_station)",
"[('USC00519281', 2772), ('USC00519397', 2724), ('USC00513117', 2709), ('USC00519523', 2669), ('USC00516128', 2612), ('USC00514830', 2202), ('USC00511918', 1979), ('USC00517948', 1372), ('USC00518838', 511)]\n"
],
[
"# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.\nsel = [Measurement.station, \n func.min(Measurement.tobs),\n func.max(Measurement.tobs),\n func.avg(Measurement.tobs)]\n\nlow_high_avg_tmp=session.query(*sel).group_by(Measurement.station). \\\norder_by(func.count(Measurement.station).desc()).first()\nprint(low_high_avg_tmp)",
"('USC00519281', 54.0, 85.0, 71.66378066378067)\n"
],
[
"# Using the most active station id\n# Query the last 12 months of temperature observation data for this station and plot the results as a histogram\nactive_station=low_high_avg_tmp[0]\n\nlast_12months_tobs_station=session.query(Measurement.date, Measurement.tobs).\\\nfilter(Measurement.station==top_station).\\\nfilter(Measurement.date>=query_date).\\\norder_by(Measurement.date).all()\n\ntop_station_tobs_last_12months=pd.DataFrame(last_12months_tobs_station,columns=['date','tobs'])\n\ntop_station_tobs_last_12months.plot.hist(bins=12)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Close session",
"_____no_output_____"
]
],
[
[
"# Close Session\nsession.close()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbd79d3df9d808fee65445b7fe449d82294ae03 | 367,302 | ipynb | Jupyter Notebook | product-wheel/Data_Manipulations.ipynb | eng-rolebot/product-wheel | cf02423e81e9ff395e93aeb7035061c7783f4ff3 | [
"MIT"
] | null | null | null | product-wheel/Data_Manipulations.ipynb | eng-rolebot/product-wheel | cf02423e81e9ff395e93aeb7035061c7783f4ff3 | [
"MIT"
] | 3 | 2020-06-17T18:33:48.000Z | 2020-06-17T18:34:19.000Z | product-wheel/Data_Manipulations.ipynb | eng-rolebot/product-wheel | cf02423e81e9ff395e93aeb7035061c7783f4ff3 | [
"MIT"
] | 1 | 2020-04-11T21:22:13.000Z | 2020-04-11T21:22:13.000Z | 47.31444 | 80,252 | 0.482116 | [
[
[
"This is Lena's jupyter for data manipulation",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\npd.set_option('display.max_rows', 300)",
"_____no_output_____"
],
[
"%load_ext pycodestyle_magic\n%pycodestyle_on",
"_____no_output_____"
]
],
[
[
"### Step 1:\nFirst lets load the main data table and fix formating issues in it",
"_____no_output_____"
]
],
[
[
"# Part 1 - Read the dataset, remove empty columns and first 204 lines,\n# use only columns 0,1,4,5,7,8,9,11,12:\ndf = pd.read_excel('../data/OrderColorTransitionsReport 1_1_17 - 4_22_19.xlsx',\n header=205, usecols=[0, 1, 4, 5, 7, 8, 9, 11, 12])\n\n# Part 2 - Organize the data:\n# Currently all Sheet columns (last 3 columns) are from string data type.\n# And they have numbers like \"1,142\".\n# We need to get rid of the \",\" and convert the string type to numeric type.\n\n# Removing the \",\" by using \"replace\" function.\n# This converts a column to a list.\ntotal_sheets = [x.replace(',', '') for x in df['Total Sheets'].astype(str)]\na_grade_sheets = [x.replace(',', '') for x in df['AGrade Sheets'].astype(str)]\ntransition_sheets = [x.replace(',', '') for x in\n df['Transition Sheets'].astype(str)]\n\n# Converting a list back to the column and to numeric data type.\ndf[df.columns[-3]] = pd.DataFrame(total_sheets)\ndf[df.columns[-3]] = pd.to_numeric(df[df.columns[-3]], errors='coerce')\ndf[df.columns[-3]] = df[df.columns[-3]].fillna(0)\n\ndf[df.columns[-2]] = pd.DataFrame(a_grade_sheets)\ndf[df.columns[-2]] = pd.to_numeric(df[df.columns[-2]], errors='coerce')\ndf[df.columns[-2]] = df[df.columns[-2]].fillna(0)\n\ndf[df.columns[-1]] = pd.DataFrame(transition_sheets)\ndf[df.columns[-1]] = pd.to_numeric(df[df.columns[-1]], errors='coerce')\ndf[df.columns[-1]] = df[df.columns[-1]].fillna(0)\n\n# Converting date/time columns to date/time format:\ndf[df.columns[-5]] = pd.to_datetime(df[df.columns[-5]])\ndf[df.columns[-4]] = pd.to_datetime(df[df.columns[-4]])\n\n# Sorting by \"Cast Start\" time, changing the original order:\ndf.sort_values(by='Cast Start', ignore_index=True, inplace=True)\n\ndf",
"_____no_output_____"
]
],
[
[
"### Step 2:\nLets take a closer look at the data:",
"_____no_output_____"
]
],
[
[
"df.describe()",
"_____no_output_____"
],
[
"df.loc[df['Transition Sheets'] > 600]",
"_____no_output_____"
]
],
[
[
"Now let's plot the distribution of transition sheets:",
"_____no_output_____"
]
],
[
[
"# Defining a figure:\nmy_fig = plt.figure(figsize=(16, 6), facecolor='gainsboro')\n\n# Adding a title:\nmy_fig.suptitle('Transition Sheets', fontsize=16, fontweight='bold', c='k')\nax1, ax2 = my_fig.add_subplot(1, 2, 1), my_fig.add_subplot(1, 2, 2)\n\ntransition_sheets = np.asarray(df['Transition Sheets'])\ntransition_sheets_mean = transition_sheets.mean()\ntransition_sheets_std = transition_sheets.std()\n\n# Plot 1 data:\nax1.hist(transition_sheets, bins=np.arange(0, transition_sheets.max() + 5, 5),\n density=1, label='Normalized Histogram', zorder=0)\nND1 = [stats.norm.pdf(x, transition_sheets_mean,\n transition_sheets_std) for x in np.linspace(0, 600, 101)]\nax1.plot(np.linspace(0, 600, 101), ND1, color='r',\n label='PDF of the normal distribution',\n linewidth=2, zorder=1)\n\n# Plot 1 styling:\nax1.set_title('Distribution of Transition Sheets', fontsize=14,\n fontweight='bold')\nax1.set_xlabel('Transition Sheets', fontsize=12, fontstyle='italic',\n fontweight='bold')\nax1.set_ylabel('Normalized Probability', fontsize=12, fontstyle='italic',\n fontweight='bold')\nax1.grid(linestyle=':', linewidth=1)\nax1.minorticks_on()\nax1.tick_params(direction='in', length=5)\nax1.tick_params(axis='x', labelrotation=45)\nax1.tick_params(which='minor', direction='in', length=2)\nax1.set_xlim(0, 600)\nax1.set_xticks(np.arange(0, 620, 50))\nax1.legend()\n\n# Plot 2 data:\nax2.hist(transition_sheets, bins=np.arange(0, transition_sheets.max() + 5, 5),\n density=0, label='Histogram', zorder=0)\n\n# Plot 2 styling:\nax2.set_title('Distribution of Transition Sheets', fontsize=14,\n fontweight='bold')\nax2.set_xlabel('Transition Sheets', fontsize=12, fontstyle='italic',\n fontweight='bold')\nax2.set_ylabel('Number of Transitions', fontsize=12, fontstyle='italic',\n fontweight='bold')\nax2.grid(linestyle=':', linewidth=1)\nax2.minorticks_on()\nax2.tick_params(direction='in', length=5)\nax2.tick_params(axis='x', labelrotation=45)\nax2.tick_params(which='minor', direction='in', length=2)\nax2.set_xlim(0, 200)\nax2.set_xticks(np.arange(0, 220, 20))\nax2.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Step 3:\nNow lets group the data by \"color\" and \"next color\" anf check how many sheets are lost per each pair of colors:",
"_____no_output_____"
]
],
[
[
"df2 = df.groupby(['Color',\n 'Next Color Code']).agg({'Transition Sheets':\n ['mean', 'std', 'min', 'max', 'count']})\ndf2.columns = ['ts_mean', 'ts_std', 'ts_min', 'ts_max', 't_count']\ndf2 = df2.reset_index()\ndf2",
"_____no_output_____"
],
[
"df2.describe()",
"_____no_output_____"
]
],
[
[
"Now lets see 20 most frequesnt transitions",
"_____no_output_____"
]
],
[
[
"df2.sort_values(by='t_count', ignore_index=True, inplace=False,\n ascending=False).head(20)",
"_____no_output_____"
],
[
"df2['t_count'].hist(bins=31).set_xlabel('Number of repeats')\ndf2['t_count'].hist(bins=31).set_ylabel('Number of transitions')\nplt.show()",
"_____no_output_____"
],
[
"df2['t_count'].hist(bins=31).set_ylim(0, 100)\ndf2['t_count'].hist(bins=31).set_xlabel('Number of repeats')\nplt.show()",
"_____no_output_____"
]
],
[
[
"As we can see from histograms above - most of the transitions happened once up to 8 times.",
"_____no_output_____"
]
],
[
[
"# Number of unique orders from the original table:\nprint('Number of unique Orders:', len(df['Cast Order'].unique()))\nprint('Number of unique Next Orders:', len(df['Next Cast Order'].unique()))\nprint('Number of unique Colors:', len(df['Color'].unique()))\nprint('Number of unique Next Colors:', len(df['Next Color Code'].unique()))",
"Number of unique Orders: 2641\nNumber of unique Next Orders: 2671\nNumber of unique Colors: 267\nNumber of unique Next Colors: 269\n"
],
[
"# Or from the grouped by color code pair table:\nprint('Number of unique Colors:', len(df2['Color'].unique()))\nprint('Number of unique Next Colors:', len(df2['Next Color Code'].unique()))",
"Number of unique Colors: 267\nNumber of unique Next Colors: 269\n"
],
[
"# intersection &, union |, difference -, symmetric difference ^\nprint(list(set(df['Color'].unique()) - set(df['Next Color Code'].unique())))\nprint(list(set(df['Next Color Code'].unique()) - set(df['Color'].unique())))\n# there are a couple colors in next color code not in the colors column",
"[]\n['dv', 'DP']\n"
],
[
"colors = df['Color'].unique().tolist()\ncolors.sort()\ncolors",
"_____no_output_____"
]
],
[
[
"So \"Next color code\" has two extra colors in addition to colors in \"Colors\" column.\nBut we can check below if this is actually true:",
"_____no_output_____"
]
],
[
[
"colors.count('DP')",
"_____no_output_____"
],
[
"colors.count('dv')",
"_____no_output_____"
],
[
"colors.count('DV')",
"_____no_output_____"
]
],
[
[
"So \"dv\" actually exists but as a 'DV'. We'll assume this is the same thing and fix the table.",
"_____no_output_____"
]
],
[
[
"df['Color'] = df['Color'].str.upper()\ndf['Next Color Code'] = df['Next Color Code'].str.upper()",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.loc[df['Next Color Code'] == 'dv']",
"_____no_output_____"
]
],
[
[
"So we got rid of 'dv' and now we only need to add a 'DP' to a colors list:",
"_____no_output_____"
]
],
[
[
"colors.append('DP')",
"_____no_output_____"
],
[
"colors.sort()\ncolors",
"_____no_output_____"
],
[
"len(colors)",
"_____no_output_____"
]
],
[
[
"### Step 4:\nNow lets make a matrix:",
"_____no_output_____"
]
],
[
[
"df2",
"_____no_output_____"
],
[
"shape = (len(colors), len(colors))\nmatrix = np.zeros(shape)\nmatrix",
"_____no_output_____"
],
[
"colors[0]",
"_____no_output_____"
],
[
"val = df2['ts_mean'].loc[(df2['Color'] == colors[0]) &\n (df2['Next Color Code'] == colors[1])].tolist()\nval[0]",
"_____no_output_____"
],
[
"for i in range(0, len(colors)):\n for j in range(0, len(colors)):\n try:\n value = df2['ts_mean'].loc[(df2['Color'] == colors[i]) &\n (df2['Next Color Code'] == colors[j])].tolist()\n matrix[i, j] = value[0]\n except:\n matrix[i, j] = 80",
"5:80: E501 line too long (86 > 79 characters)\n7:9: E722 do not use bare 'except'\n"
],
[
"matrix",
"_____no_output_____"
]
],
[
[
"### Step ...:\nNow lets load the description data set:",
"_____no_output_____"
]
],
[
[
"# Second data table with color description\ndf_desc = pd.read_excel('../data/Variable cost per sheet.xlsx', header=3)\ndf_desc",
"_____no_output_____"
],
[
"df_desc.shape",
"_____no_output_____"
],
[
"def merge_transitions():\n ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecbd7ce6d1e5ac3ff4917df69f98d4d60ab3791c | 347,456 | ipynb | Jupyter Notebook | A Fine Windy Day/solution.ipynb | nikkkhil067/HackerEarth-Machine-Learning-Challenges | 79affe6eae3cbcf440f8226ff8734b8dc2c5ef52 | [
"MIT"
] | null | null | null | A Fine Windy Day/solution.ipynb | nikkkhil067/HackerEarth-Machine-Learning-Challenges | 79affe6eae3cbcf440f8226ff8734b8dc2c5ef52 | [
"MIT"
] | null | null | null | A Fine Windy Day/solution.ipynb | nikkkhil067/HackerEarth-Machine-Learning-Challenges | 79affe6eae3cbcf440f8226ff8734b8dc2c5ef52 | [
"MIT"
] | null | null | null | 100.741084 | 162,036 | 0.757958 | [
[
[
"# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load\n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sklearn.metrics import r2_score\n# Input data files are available in the read-only \"../input/\" directory\n# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory\n\nimport os\nfor dirname, _, filenames in os.walk('/kaggle/input'):\n for filename in filenames:\n print(os.path.join(dirname, filename))\n\n# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using \"Save & Run All\" \n# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session",
"/kaggle/input/predict-the-powerkwh-produced-by-windmills/dataset/sample_submission.csv\n/kaggle/input/predict-the-powerkwh-produced-by-windmills/dataset/train.csv\n/kaggle/input/predict-the-powerkwh-produced-by-windmills/dataset/test.csv\n"
]
],
[
[
"# Importing DataSet",
"_____no_output_____"
]
],
[
[
"train = pd.read_csv('../input/predict-the-powerkwh-produced-by-windmills/dataset/train.csv')\ntest = pd.read_csv('../input/predict-the-powerkwh-produced-by-windmills/dataset/test.csv')",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"test.head()",
"_____no_output_____"
]
],
[
[
"# EDA and Data Preprocessing of the DataSet",
"_____no_output_____"
]
],
[
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 28200 entries, 0 to 28199\nData columns (total 22 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 28200 non-null object \n 1 datetime 28200 non-null object \n 2 wind_speed(m/s) 27927 non-null float64\n 3 atmospheric_temperature(°C) 24750 non-null float64\n 4 shaft_temperature(°C) 28198 non-null float64\n 5 blades_angle(°) 27984 non-null float64\n 6 gearbox_temperature(°C) 28199 non-null float64\n 7 engine_temperature(°C) 28188 non-null float64\n 8 motor_torque(N-m) 28176 non-null float64\n 9 generator_temperature(°C) 28188 non-null float64\n 10 atmospheric_pressure(Pascal) 25493 non-null float64\n 11 area_temperature(°C) 28200 non-null float64\n 12 windmill_body_temperature(°C) 25837 non-null float64\n 13 wind_direction(°) 23097 non-null float64\n 14 resistance(ohm) 28199 non-null float64\n 15 rotor_torque(N-m) 27628 non-null float64\n 16 turbine_status 26441 non-null object \n 17 cloud_level 27924 non-null object \n 18 blade_length(m) 23107 non-null float64\n 19 blade_breadth(m) 28200 non-null float64\n 20 windmill_height(m) 27657 non-null float64\n 21 windmill_generated_power(kW/h) 27993 non-null float64\ndtypes: float64(18), object(4)\nmemory usage: 4.7+ MB\n"
],
[
"test.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 12086 entries, 0 to 12085\nData columns (total 21 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 12086 non-null object \n 1 datetime 12086 non-null object \n 2 wind_speed(m/s) 11960 non-null float64\n 3 atmospheric_temperature(°C) 10659 non-null float64\n 4 shaft_temperature(°C) 12085 non-null float64\n 5 blades_angle(°) 11980 non-null float64\n 6 gearbox_temperature(°C) 12085 non-null float64\n 7 engine_temperature(°C) 12081 non-null float64\n 8 motor_torque(N-m) 12075 non-null float64\n 9 generator_temperature(°C) 12081 non-null float64\n 10 atmospheric_pressure(Pascal) 10935 non-null float64\n 11 area_temperature(°C) 12085 non-null float64\n 12 windmill_body_temperature(°C) 11160 non-null float64\n 13 wind_direction(°) 9926 non-null float64\n 14 resistance(ohm) 12086 non-null float64\n 15 rotor_torque(N-m) 11805 non-null float64\n 16 turbine_status 11289 non-null object \n 17 cloud_level 11961 non-null object \n 18 blade_length(m) 9972 non-null float64\n 19 blade_breadth(m) 12086 non-null float64\n 20 windmill_height(m) 11831 non-null float64\ndtypes: float64(17), object(4)\nmemory usage: 1.9+ MB\n"
],
[
"train.describe()",
"_____no_output_____"
],
[
"test.describe()",
"_____no_output_____"
]
],
[
[
"**Plotting Correlation Matrix**",
"_____no_output_____"
]
],
[
[
"corr = train.corr()\nplt.figure(figsize=(20,10))\nmask = np.zeros_like(corr,dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\nsns.heatmap(corr,mask=mask,annot=True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"**For the model to be stable enough, we drop highly correlated features. So, <br>\n\"motor_torque(N-m)\" and \"generator_temperature(°C)\" are highly correlated, so will be dropped from both the dataset.**",
"_____no_output_____"
]
],
[
[
"train.drop(['generator_temperature(°C)','windmill_body_temperature(°C)'],inplace=True,axis=1)\ntest.drop(['generator_temperature(°C)','windmill_body_temperature(°C)'],inplace=True,axis=1)",
"_____no_output_____"
],
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 28200 entries, 0 to 28199\nData columns (total 20 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 28200 non-null object \n 1 datetime 28200 non-null object \n 2 wind_speed(m/s) 27927 non-null float64\n 3 atmospheric_temperature(°C) 24750 non-null float64\n 4 shaft_temperature(°C) 28198 non-null float64\n 5 blades_angle(°) 27984 non-null float64\n 6 gearbox_temperature(°C) 28199 non-null float64\n 7 engine_temperature(°C) 28188 non-null float64\n 8 motor_torque(N-m) 28176 non-null float64\n 9 atmospheric_pressure(Pascal) 25493 non-null float64\n 10 area_temperature(°C) 28200 non-null float64\n 11 wind_direction(°) 23097 non-null float64\n 12 resistance(ohm) 28199 non-null float64\n 13 rotor_torque(N-m) 27628 non-null float64\n 14 turbine_status 26441 non-null object \n 15 cloud_level 27924 non-null object \n 16 blade_length(m) 23107 non-null float64\n 17 blade_breadth(m) 28200 non-null float64\n 18 windmill_height(m) 27657 non-null float64\n 19 windmill_generated_power(kW/h) 27993 non-null float64\ndtypes: float64(16), object(4)\nmemory usage: 4.3+ MB\n"
],
[
"test.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 12086 entries, 0 to 12085\nData columns (total 19 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 12086 non-null object \n 1 datetime 12086 non-null object \n 2 wind_speed(m/s) 11960 non-null float64\n 3 atmospheric_temperature(°C) 10659 non-null float64\n 4 shaft_temperature(°C) 12085 non-null float64\n 5 blades_angle(°) 11980 non-null float64\n 6 gearbox_temperature(°C) 12085 non-null float64\n 7 engine_temperature(°C) 12081 non-null float64\n 8 motor_torque(N-m) 12075 non-null float64\n 9 atmospheric_pressure(Pascal) 10935 non-null float64\n 10 area_temperature(°C) 12085 non-null float64\n 11 wind_direction(°) 9926 non-null float64\n 12 resistance(ohm) 12086 non-null float64\n 13 rotor_torque(N-m) 11805 non-null float64\n 14 turbine_status 11289 non-null object \n 15 cloud_level 11961 non-null object \n 16 blade_length(m) 9972 non-null float64\n 17 blade_breadth(m) 12086 non-null float64\n 18 windmill_height(m) 11831 non-null float64\ndtypes: float64(15), object(4)\nmemory usage: 1.8+ MB\n"
]
],
[
[
"**Checking Missing Values in the DataSet**",
"_____no_output_____"
]
],
[
[
"train.isnull().sum()",
"_____no_output_____"
],
[
"test.isnull().sum()",
"_____no_output_____"
],
[
"sns.heatmap(train.isnull(),cbar=False,yticklabels=False,cmap = 'viridis')",
"_____no_output_____"
],
[
"sns.heatmap(test.isnull(),cbar=False,yticklabels=False,cmap = 'viridis')",
"_____no_output_____"
]
],
[
[
"**Dealing with the Missing Values**\n\nReplacing the missing values by the mean",
"_____no_output_____"
]
],
[
[
"train['gearbox_temperature(°C)'].fillna(train['gearbox_temperature(°C)'].mean(),inplace=True)\ntrain['area_temperature(°C)'].fillna(train['area_temperature(°C)'].mean(),inplace=True)\ntrain['rotor_torque(N-m)'].fillna(train['rotor_torque(N-m)'].mean(),inplace=True)\ntrain['blade_length(m)'].fillna(train['blade_length(m)'].mean(),inplace=True)\ntrain['blade_breadth(m)'].fillna(train['blade_breadth(m)'].mean(),inplace=True)\ntrain['windmill_height(m)'].fillna(train['windmill_height(m)'].mean(),inplace=True)\ntrain['cloud_level'].fillna(train['cloud_level'].mode()[0],inplace=True)\ntrain['atmospheric_temperature(°C)'].fillna(train['atmospheric_temperature(°C)'].mean(),inplace=True)\ntrain['atmospheric_pressure(Pascal)'].fillna(train['atmospheric_pressure(Pascal)'].mean(),inplace=True)\ntrain['wind_speed(m/s)'].fillna(train['wind_speed(m/s)'].mean(),inplace=True)\ntrain['shaft_temperature(°C)'].fillna(train['shaft_temperature(°C)'].mean(),inplace=True)\ntrain['blades_angle(°)'].fillna(train['blades_angle(°)'].mean(),inplace=True)\ntrain['engine_temperature(°C)'].fillna(train['engine_temperature(°C)'].mean(),inplace=True)\ntrain['motor_torque(N-m)'].fillna(train['motor_torque(N-m)'].mean(),inplace=True)\ntrain['wind_direction(°)'].fillna(train['wind_direction(°)'].mean(),inplace=True)",
"_____no_output_____"
],
[
"test['gearbox_temperature(°C)'].fillna(test['gearbox_temperature(°C)'].mean(),inplace=True)\ntest['area_temperature(°C)'].fillna(test['area_temperature(°C)'].mean(),inplace=True)\ntest['rotor_torque(N-m)'].fillna(test['rotor_torque(N-m)'].mean(),inplace=True)\ntest['blade_length(m)'].fillna(test['blade_length(m)'].mean(),inplace=True)\ntest['blade_breadth(m)'].fillna(test['blade_breadth(m)'].mean(),inplace=True)\ntest['windmill_height(m)'].fillna(test['windmill_height(m)'].mean(),inplace=True)\ntest['cloud_level'].fillna(test['cloud_level'].mode()[0],inplace=True)\ntest['atmospheric_temperature(°C)'].fillna(test['atmospheric_temperature(°C)'].mean(),inplace=True)\ntest['atmospheric_pressure(Pascal)'].fillna(test['atmospheric_pressure(Pascal)'].mean(),inplace=True)\ntest['wind_speed(m/s)'].fillna(test['wind_speed(m/s)'].mean(),inplace=True)\ntest['shaft_temperature(°C)'].fillna(test['shaft_temperature(°C)'].mean(),inplace=True)\ntest['blades_angle(°)'].fillna(test['blades_angle(°)'].mean(),inplace=True)\ntest['engine_temperature(°C)'].fillna(test['engine_temperature(°C)'].mean(),inplace=True)\ntest['motor_torque(N-m)'].fillna(test['motor_torque(N-m)'].mean(),inplace=True)\ntest['wind_direction(°)'].fillna(test['wind_direction(°)'].mean(),inplace=True)",
"_____no_output_____"
],
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 28200 entries, 0 to 28199\nData columns (total 20 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 28200 non-null object \n 1 datetime 28200 non-null object \n 2 wind_speed(m/s) 28200 non-null float64\n 3 atmospheric_temperature(°C) 28200 non-null float64\n 4 shaft_temperature(°C) 28200 non-null float64\n 5 blades_angle(°) 28200 non-null float64\n 6 gearbox_temperature(°C) 28200 non-null float64\n 7 engine_temperature(°C) 28200 non-null float64\n 8 motor_torque(N-m) 28200 non-null float64\n 9 atmospheric_pressure(Pascal) 28200 non-null float64\n 10 area_temperature(°C) 28200 non-null float64\n 11 wind_direction(°) 28200 non-null float64\n 12 resistance(ohm) 28199 non-null float64\n 13 rotor_torque(N-m) 28200 non-null float64\n 14 turbine_status 26441 non-null object \n 15 cloud_level 28200 non-null object \n 16 blade_length(m) 28200 non-null float64\n 17 blade_breadth(m) 28200 non-null float64\n 18 windmill_height(m) 28200 non-null float64\n 19 windmill_generated_power(kW/h) 27993 non-null float64\ndtypes: float64(16), object(4)\nmemory usage: 4.3+ MB\n"
],
[
"train.dropna(how='any',axis=0,inplace=True)\n#test.dropna(how='any',axis=0,inplace=True)",
"_____no_output_____"
],
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 26245 entries, 0 to 28199\nData columns (total 20 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 26245 non-null object \n 1 datetime 26245 non-null object \n 2 wind_speed(m/s) 26245 non-null float64\n 3 atmospheric_temperature(°C) 26245 non-null float64\n 4 shaft_temperature(°C) 26245 non-null float64\n 5 blades_angle(°) 26245 non-null float64\n 6 gearbox_temperature(°C) 26245 non-null float64\n 7 engine_temperature(°C) 26245 non-null float64\n 8 motor_torque(N-m) 26245 non-null float64\n 9 atmospheric_pressure(Pascal) 26245 non-null float64\n 10 area_temperature(°C) 26245 non-null float64\n 11 wind_direction(°) 26245 non-null float64\n 12 resistance(ohm) 26245 non-null float64\n 13 rotor_torque(N-m) 26245 non-null float64\n 14 turbine_status 26245 non-null object \n 15 cloud_level 26245 non-null object \n 16 blade_length(m) 26245 non-null float64\n 17 blade_breadth(m) 26245 non-null float64\n 18 windmill_height(m) 26245 non-null float64\n 19 windmill_generated_power(kW/h) 26245 non-null float64\ndtypes: float64(16), object(4)\nmemory usage: 4.2+ MB\n"
],
[
"test.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 12086 entries, 0 to 12085\nData columns (total 19 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 12086 non-null object \n 1 datetime 12086 non-null object \n 2 wind_speed(m/s) 12086 non-null float64\n 3 atmospheric_temperature(°C) 12086 non-null float64\n 4 shaft_temperature(°C) 12086 non-null float64\n 5 blades_angle(°) 12086 non-null float64\n 6 gearbox_temperature(°C) 12086 non-null float64\n 7 engine_temperature(°C) 12086 non-null float64\n 8 motor_torque(N-m) 12086 non-null float64\n 9 atmospheric_pressure(Pascal) 12086 non-null float64\n 10 area_temperature(°C) 12086 non-null float64\n 11 wind_direction(°) 12086 non-null float64\n 12 resistance(ohm) 12086 non-null float64\n 13 rotor_torque(N-m) 12086 non-null float64\n 14 turbine_status 11289 non-null object \n 15 cloud_level 12086 non-null object \n 16 blade_length(m) 12086 non-null float64\n 17 blade_breadth(m) 12086 non-null float64\n 18 windmill_height(m) 12086 non-null float64\ndtypes: float64(15), object(4)\nmemory usage: 1.8+ MB\n"
]
],
[
[
"**Since \"turbine_status\" and \"cloud_level\" are categorical,<br>\nSo, we use Dummy Variable encoding for \"turbine_status\" and ordinally encode the \"cloud_level\"**",
"_____no_output_____"
]
],
[
[
"train['cloud_level'].replace(['Extremely Low', 'Low', 'Medium'],[0, 1, 2],inplace=True)\ntest['cloud_level'].replace(['Extremely Low', 'Low', 'Medium'],[0, 1, 2],inplace=True)",
"_____no_output_____"
],
[
"train['turbine_status'].value_counts()",
"_____no_output_____"
],
[
"dummy = ['turbine_status']\ndf_dummy = pd.get_dummies(train[dummy])\ndf_test_dummy = pd.get_dummies(test[dummy])",
"_____no_output_____"
],
[
"df_dummy",
"_____no_output_____"
],
[
"df_test_dummy",
"_____no_output_____"
],
[
"train = pd.concat([train,df_dummy],axis=1)\ntest = pd.concat([test,df_test_dummy],axis=1)",
"_____no_output_____"
],
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 26245 entries, 0 to 28199\nData columns (total 34 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 26245 non-null object \n 1 datetime 26245 non-null object \n 2 wind_speed(m/s) 26245 non-null float64\n 3 atmospheric_temperature(°C) 26245 non-null float64\n 4 shaft_temperature(°C) 26245 non-null float64\n 5 blades_angle(°) 26245 non-null float64\n 6 gearbox_temperature(°C) 26245 non-null float64\n 7 engine_temperature(°C) 26245 non-null float64\n 8 motor_torque(N-m) 26245 non-null float64\n 9 atmospheric_pressure(Pascal) 26245 non-null float64\n 10 area_temperature(°C) 26245 non-null float64\n 11 wind_direction(°) 26245 non-null float64\n 12 resistance(ohm) 26245 non-null float64\n 13 rotor_torque(N-m) 26245 non-null float64\n 14 turbine_status 26245 non-null object \n 15 cloud_level 26245 non-null int64 \n 16 blade_length(m) 26245 non-null float64\n 17 blade_breadth(m) 26245 non-null float64\n 18 windmill_height(m) 26245 non-null float64\n 19 windmill_generated_power(kW/h) 26245 non-null float64\n 20 turbine_status_A 26245 non-null uint8 \n 21 turbine_status_A2 26245 non-null uint8 \n 22 turbine_status_AAA 26245 non-null uint8 \n 23 turbine_status_AB 26245 non-null uint8 \n 24 turbine_status_ABC 26245 non-null uint8 \n 25 turbine_status_AC 26245 non-null uint8 \n 26 turbine_status_B 26245 non-null uint8 \n 27 turbine_status_B2 26245 non-null uint8 \n 28 turbine_status_BA 26245 non-null uint8 \n 29 turbine_status_BB 26245 non-null uint8 \n 30 turbine_status_BBB 26245 non-null uint8 \n 31 turbine_status_BCB 26245 non-null uint8 \n 32 turbine_status_BD 26245 non-null uint8 \n 33 turbine_status_D 26245 non-null uint8 \ndtypes: float64(16), int64(1), object(3), uint8(14)\nmemory usage: 4.6+ MB\n"
],
[
"test.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 12086 entries, 0 to 12085\nData columns (total 33 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 12086 non-null object \n 1 datetime 12086 non-null object \n 2 wind_speed(m/s) 12086 non-null float64\n 3 atmospheric_temperature(°C) 12086 non-null float64\n 4 shaft_temperature(°C) 12086 non-null float64\n 5 blades_angle(°) 12086 non-null float64\n 6 gearbox_temperature(°C) 12086 non-null float64\n 7 engine_temperature(°C) 12086 non-null float64\n 8 motor_torque(N-m) 12086 non-null float64\n 9 atmospheric_pressure(Pascal) 12086 non-null float64\n 10 area_temperature(°C) 12086 non-null float64\n 11 wind_direction(°) 12086 non-null float64\n 12 resistance(ohm) 12086 non-null float64\n 13 rotor_torque(N-m) 12086 non-null float64\n 14 turbine_status 11289 non-null object \n 15 cloud_level 12086 non-null int64 \n 16 blade_length(m) 12086 non-null float64\n 17 blade_breadth(m) 12086 non-null float64\n 18 windmill_height(m) 12086 non-null float64\n 19 turbine_status_A 12086 non-null uint8 \n 20 turbine_status_A2 12086 non-null uint8 \n 21 turbine_status_AAA 12086 non-null uint8 \n 22 turbine_status_AB 12086 non-null uint8 \n 23 turbine_status_ABC 12086 non-null uint8 \n 24 turbine_status_AC 12086 non-null uint8 \n 25 turbine_status_B 12086 non-null uint8 \n 26 turbine_status_B2 12086 non-null uint8 \n 27 turbine_status_BA 12086 non-null uint8 \n 28 turbine_status_BB 12086 non-null uint8 \n 29 turbine_status_BBB 12086 non-null uint8 \n 30 turbine_status_BCB 12086 non-null uint8 \n 31 turbine_status_BD 12086 non-null uint8 \n 32 turbine_status_D 12086 non-null uint8 \ndtypes: float64(15), int64(1), object(3), uint8(14)\nmemory usage: 1.9+ MB\n"
]
],
[
[
"**Converting the feature \"datetime\" into pandas datetime format**",
"_____no_output_____"
]
],
[
[
"train[\"datetime\"] = pd.to_datetime(train[\"datetime\"])\ntest[\"datetime\"] = pd.to_datetime(test[\"datetime\"])\n\ntrain['dmonth'] = train['datetime'].dt.month\ntrain['dday'] = train['datetime'].dt.day\ntrain['ddayofweek'] = train['datetime'].dt.dayofweek\n\ntest['dmonth'] = test['datetime'].dt.month\ntest['dday'] = test['datetime'].dt.day\ntest['ddayofweek'] = test['datetime'].dt.dayofweek",
"_____no_output_____"
],
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 26245 entries, 0 to 28199\nData columns (total 37 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 26245 non-null object \n 1 datetime 26245 non-null datetime64[ns]\n 2 wind_speed(m/s) 26245 non-null float64 \n 3 atmospheric_temperature(°C) 26245 non-null float64 \n 4 shaft_temperature(°C) 26245 non-null float64 \n 5 blades_angle(°) 26245 non-null float64 \n 6 gearbox_temperature(°C) 26245 non-null float64 \n 7 engine_temperature(°C) 26245 non-null float64 \n 8 motor_torque(N-m) 26245 non-null float64 \n 9 atmospheric_pressure(Pascal) 26245 non-null float64 \n 10 area_temperature(°C) 26245 non-null float64 \n 11 wind_direction(°) 26245 non-null float64 \n 12 resistance(ohm) 26245 non-null float64 \n 13 rotor_torque(N-m) 26245 non-null float64 \n 14 turbine_status 26245 non-null object \n 15 cloud_level 26245 non-null int64 \n 16 blade_length(m) 26245 non-null float64 \n 17 blade_breadth(m) 26245 non-null float64 \n 18 windmill_height(m) 26245 non-null float64 \n 19 windmill_generated_power(kW/h) 26245 non-null float64 \n 20 turbine_status_A 26245 non-null uint8 \n 21 turbine_status_A2 26245 non-null uint8 \n 22 turbine_status_AAA 26245 non-null uint8 \n 23 turbine_status_AB 26245 non-null uint8 \n 24 turbine_status_ABC 26245 non-null uint8 \n 25 turbine_status_AC 26245 non-null uint8 \n 26 turbine_status_B 26245 non-null uint8 \n 27 turbine_status_B2 26245 non-null uint8 \n 28 turbine_status_BA 26245 non-null uint8 \n 29 turbine_status_BB 26245 non-null uint8 \n 30 turbine_status_BBB 26245 non-null uint8 \n 31 turbine_status_BCB 26245 non-null uint8 \n 32 turbine_status_BD 26245 non-null uint8 \n 33 turbine_status_D 26245 non-null uint8 \n 34 dmonth 26245 non-null int64 \n 35 dday 26245 non-null int64 \n 36 ddayofweek 26245 non-null int64 \ndtypes: datetime64[ns](1), float64(16), int64(4), object(2), uint8(14)\nmemory usage: 5.2+ MB\n"
],
[
"test.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 12086 entries, 0 to 12085\nData columns (total 36 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tracking_id 12086 non-null object \n 1 datetime 12086 non-null datetime64[ns]\n 2 wind_speed(m/s) 12086 non-null float64 \n 3 atmospheric_temperature(°C) 12086 non-null float64 \n 4 shaft_temperature(°C) 12086 non-null float64 \n 5 blades_angle(°) 12086 non-null float64 \n 6 gearbox_temperature(°C) 12086 non-null float64 \n 7 engine_temperature(°C) 12086 non-null float64 \n 8 motor_torque(N-m) 12086 non-null float64 \n 9 atmospheric_pressure(Pascal) 12086 non-null float64 \n 10 area_temperature(°C) 12086 non-null float64 \n 11 wind_direction(°) 12086 non-null float64 \n 12 resistance(ohm) 12086 non-null float64 \n 13 rotor_torque(N-m) 12086 non-null float64 \n 14 turbine_status 11289 non-null object \n 15 cloud_level 12086 non-null int64 \n 16 blade_length(m) 12086 non-null float64 \n 17 blade_breadth(m) 12086 non-null float64 \n 18 windmill_height(m) 12086 non-null float64 \n 19 turbine_status_A 12086 non-null uint8 \n 20 turbine_status_A2 12086 non-null uint8 \n 21 turbine_status_AAA 12086 non-null uint8 \n 22 turbine_status_AB 12086 non-null uint8 \n 23 turbine_status_ABC 12086 non-null uint8 \n 24 turbine_status_AC 12086 non-null uint8 \n 25 turbine_status_B 12086 non-null uint8 \n 26 turbine_status_B2 12086 non-null uint8 \n 27 turbine_status_BA 12086 non-null uint8 \n 28 turbine_status_BB 12086 non-null uint8 \n 29 turbine_status_BBB 12086 non-null uint8 \n 30 turbine_status_BCB 12086 non-null uint8 \n 31 turbine_status_BD 12086 non-null uint8 \n 32 turbine_status_D 12086 non-null uint8 \n 33 dmonth 12086 non-null int64 \n 34 dday 12086 non-null int64 \n 35 ddayofweek 12086 non-null int64 \ndtypes: datetime64[ns](1), float64(15), int64(4), object(2), uint8(14)\nmemory usage: 2.2+ MB\n"
]
],
[
[
"# Data Modelling",
"_____no_output_____"
]
],
[
[
"X = train.drop(['tracking_id','datetime','windmill_generated_power(kW/h)','turbine_status'],axis=1)\ny = train['windmill_generated_power(kW/h)']",
"_____no_output_____"
],
[
"print(X.shape, y.shape)",
"(26245, 33) (26245,)\n"
],
[
"testData = test.drop(['tracking_id','datetime','turbine_status'],axis=1)",
"_____no_output_____"
],
[
"print(testData.shape)",
"(12086, 33)\n"
],
[
"from sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX = sc.fit_transform(X)\ntestData = sc.transform(testData)",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nX_train,X_test,y_train,y_test = train_test_split(X,y,train_size=0.8,random_state=0)\nprint(X_train.shape,y_train.shape)\nprint(X_test.shape,y_test.shape)",
"(20996, 33) (20996,)\n(5249, 33) (5249,)\n"
]
],
[
[
"### Desicion Tree Regression",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeRegressor\nregressor_dt = DecisionTreeRegressor(random_state = 0)\nregressor_dt.fit(X_train, y_train)",
"_____no_output_____"
],
[
"y_train_pred_dt = regressor_dt.predict(X_train)\ny_test_pred_dt = regressor_dt.predict(X_test)",
"_____no_output_____"
],
[
"print(r2_score(y_true=y_train,y_pred=y_train_pred_dt))\nprint(r2_score(y_true=y_test,y_pred=y_test_pred_dt))",
"1.0\n0.9003230730814837\n"
]
],
[
[
"### Random Forest Regression",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestRegressor\nregressor_rf = RandomForestRegressor(n_estimators=200, n_jobs=1, oob_score=True, random_state=42)\nregressor_rf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"y_train_pred_rf = regressor_rf.predict(X_train)\ny_test_pred_rf = regressor_rf.predict(X_test)",
"_____no_output_____"
],
[
"print(r2_score(y_true=y_train,y_pred=y_train_pred_rf))\nprint(r2_score(y_true=y_test,y_pred=y_test_pred_rf))",
"0.9942525343903907\n0.9560331608090455\n"
]
],
[
[
"### XGBoost Regression",
"_____no_output_____"
]
],
[
[
"from xgboost import XGBRegressor\nregressor_xg = XGBRegressor(n_estimators=1000, max_depth=8, booster='gbtree', n_jobs=1, learning_rate=0.1, reg_lambda=0.01, reg_alpha=0.2)\nregressor_xg.fit(X_train, y_train)",
"_____no_output_____"
],
[
"y_train_pred_xg = regressor_xg.predict(X_train)\ny_test_pred_xg = regressor_xg.predict(X_test)",
"_____no_output_____"
],
[
"print(r2_score(y_true=y_train,y_pred=y_train_pred_xg))\nprint(r2_score(y_true=y_test,y_pred=y_test_pred_xg))",
"0.9998335635787403\n0.9568691748451127\n"
]
],
[
[
"# Testing the model on the test dataSet and creating the submission file",
"_____no_output_____"
]
],
[
[
"model = regressor_xg.predict(testData)",
"_____no_output_____"
],
[
"model",
"_____no_output_____"
],
[
"model.shape",
"_____no_output_____"
],
[
"Ywrite=pd.DataFrame(model,columns=['windmill_generated_power(kW/h)'])\nvar =pd.DataFrame(test[['tracking_id','datetime']])\ndataset_test_col = pd.concat([var,Ywrite], axis=1)\ndataset_test_col.to_csv(\"../output/Prediction.csv\",index=False)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecbd7ff4b2a355e2b47e65f1bfe7294c38135ceb | 9,503 | ipynb | Jupyter Notebook | sub_5.ipynb | anadiamaq/Diamonds | 84e32e54e2a218b7bc681757297158e8aad5f5c7 | [
"Unlicense"
] | null | null | null | sub_5.ipynb | anadiamaq/Diamonds | 84e32e54e2a218b7bc681757297158e8aad5f5c7 | [
"Unlicense"
] | null | null | null | sub_5.ipynb | anadiamaq/Diamonds | 84e32e54e2a218b7bc681757297158e8aad5f5c7 | [
"Unlicense"
] | null | null | null | 23.876884 | 1,593 | 0.543407 | [
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"df = pd.read_csv('data.csv')",
"_____no_output_____"
],
[
"y = df[\"price\"]\nX = df.drop(columns=[\"price\"])",
"_____no_output_____"
],
[
"from sklearn.preprocessing import LabelEncoder",
"_____no_output_____"
],
[
"label_encoder_cut = LabelEncoder()",
"_____no_output_____"
],
[
"label_encoder_cut.fit(X[\"cut\"])",
"_____no_output_____"
],
[
"X[\"cut\"] = label_encoder_cut.transform(X[\"cut\"])",
"_____no_output_____"
],
[
"label_encoder_color = LabelEncoder()",
"_____no_output_____"
],
[
"label_encoder_color.fit(X[\"color\"])",
"_____no_output_____"
],
[
"X[\"color\"] = label_encoder_color.transform(X[\"color\"])",
"_____no_output_____"
],
[
"label_encoder_clarity = LabelEncoder()",
"_____no_output_____"
],
[
"label_encoder_clarity.fit(X[\"clarity\"])",
"_____no_output_____"
],
[
"X[\"clarity\"] = label_encoder_clarity.transform(X[\"clarity\"])",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X,y)",
"_____no_output_____"
],
[
"from sklearn.linear_model import LinearRegression\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics import mean_squared_error as mse\nlr = make_pipeline(StandardScaler(),LinearRegression())\nlr.fit(X_train, y_train)\ny_pred_train = lr.predict(X_train)\ny_pred_test = lr.predict(X_test)\n\nprint(\"Train:\", mse(y_train,y_pred_train, squared=False))\nprint(\"Test:\", mse(y_test,y_pred_test, squared=False))",
"_____no_output_____"
],
[
"lr.fit(X,y)",
"_____no_output_____"
],
[
"predict = pd.read_csv('predict.csv')",
"_____no_output_____"
],
[
"label_encoder_cut.fit(predict[\"cut\"])",
"_____no_output_____"
],
[
"predict[\"cut\"] = label_encoder_cut.transform(predict[\"cut\"])",
"_____no_output_____"
],
[
"label_encoder_color.fit(predict[\"color\"])",
"_____no_output_____"
],
[
"predict[\"color\"] = label_encoder_color.transform(predict[\"color\"])",
"_____no_output_____"
],
[
"label_encoder_clarity.fit(predict[\"clarity\"])",
"_____no_output_____"
],
[
"predict[\"clarity\"] = label_encoder_clarity.transform(predict[\"clarity\"])",
"_____no_output_____"
],
[
"y_pred = lr.predict(predict)",
"_____no_output_____"
],
[
"df_5 = pd.DataFrame({\"index\":predict[\"index\"], \"price\":y_pred})",
"_____no_output_____"
],
[
"df_5.to_csv('submission_5.csv', index=False)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbd93f17abe3a948dc91d2775e97153536388ef | 8,992 | ipynb | Jupyter Notebook | Evolution Strategy/.ipynb_checkpoints/CEM-checkpoint.ipynb | akashkmr27089/ReinforcementLearning_Udacity_Deep_Reinforcemnt_Learning | b7dc13b0116898848d8d0b8a95b7af182982bd6b | [
"MIT"
] | null | null | null | Evolution Strategy/.ipynb_checkpoints/CEM-checkpoint.ipynb | akashkmr27089/ReinforcementLearning_Udacity_Deep_Reinforcemnt_Learning | b7dc13b0116898848d8d0b8a95b7af182982bd6b | [
"MIT"
] | null | null | null | Evolution Strategy/.ipynb_checkpoints/CEM-checkpoint.ipynb | akashkmr27089/ReinforcementLearning_Udacity_Deep_Reinforcemnt_Learning | b7dc13b0116898848d8d0b8a95b7af182982bd6b | [
"MIT"
] | null | null | null | 33.677903 | 167 | 0.541036 | [
[
[
"# Cross-Entropy Method\n\n---\n\nIn this notebook, we will train the Cross-Entropy Method with OpenAI Gym's MountainCarContinuous environment.",
"_____no_output_____"
],
[
"### 1. Import the Necessary Packages",
"_____no_output_____"
]
],
[
[
"import gym\nimport math\nimport numpy as np\nfrom collections import deque\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable",
"_____no_output_____"
]
],
[
[
"### 2. Instantiate the Environment and Agent",
"_____no_output_____"
]
],
[
[
"device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n\nenv = gym.make('MountainCarContinuous-v0')\nenv.seed(101)\nnp.random.seed(101)\n\nprint('observation space:', env.observation_space)\nprint('action space:', env.action_space)\nprint(' - low:', env.action_space.low)\nprint(' - high:', env.action_space.high)\n\nclass Agent(nn.Module):\n def __init__(self, env, h_size=16):\n super(Agent, self).__init__()\n self.env = env\n # state, hidden layer, action sizes\n self.s_size = env.observation_space.shape[0]\n self.h_size = h_size\n self.a_size = env.action_space.shape[0]\n # define layers\n self.fc1 = nn.Linear(self.s_size, self.h_size)\n self.fc2 = nn.Linear(self.h_size, self.a_size)\n \n def set_weights(self, weights):\n s_size = self.s_size\n h_size = self.h_size\n a_size = self.a_size\n # separate the weights for each layer\n fc1_end = (s_size*h_size)+h_size\n fc1_W = torch.from_numpy(weights[:s_size*h_size].reshape(s_size, h_size))\n fc1_b = torch.from_numpy(weights[s_size*h_size:fc1_end])\n fc2_W = torch.from_numpy(weights[fc1_end:fc1_end+(h_size*a_size)].reshape(h_size, a_size))\n fc2_b = torch.from_numpy(weights[fc1_end+(h_size*a_size):])\n # set the weights for each layer\n self.fc1.weight.data.copy_(fc1_W.view_as(self.fc1.weight.data))\n self.fc1.bias.data.copy_(fc1_b.view_as(self.fc1.bias.data))\n self.fc2.weight.data.copy_(fc2_W.view_as(self.fc2.weight.data))\n self.fc2.bias.data.copy_(fc2_b.view_as(self.fc2.bias.data))\n \n def get_weights_dim(self):\n return (self.s_size+1)*self.h_size + (self.h_size+1)*self.a_size\n \n def forward(self, x):\n x = F.relu(self.fc1(x))\n x = F.tanh(self.fc2(x))\n return x.cpu().data\n \n def evaluate(self, weights, gamma=1.0, max_t=5000):\n self.set_weights(weights)\n episode_return = 0.0\n state = self.env.reset()\n for t in range(max_t):\n state = torch.from_numpy(state).float().to(device)\n action = self.forward(state)\n state, reward, done, _ = self.env.step(action)\n episode_return += reward * math.pow(gamma, t)\n if done:\n break\n return episode_return\n \nagent = Agent(env).to(device)",
"/home/oxygen/anaconda3/lib/python3.7/site-packages/gym/logger.py:30: UserWarning: \u001b[33mWARN: Box bound precision lowered by casting to float32\u001b[0m\n warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))\n"
]
],
[
[
"### 3. Train the Agent with the Cross-Entropy Method\n\nRun the code cell below to train the agent from scratch. Alternatively, you can skip to the next code cell to load the pre-trained weights from file.",
"_____no_output_____"
]
],
[
[
"def cem(n_iterations=500, max_t=1000, gamma=1.0, print_every=10, pop_size=50, elite_frac=0.2, sigma=0.5):\n \"\"\"PyTorch implementation of the cross-entropy method.\n \n Params\n ======\n n_iterations (int): maximum number of training iterations\n max_t (int): maximum number of timesteps per episode\n gamma (float): discount rate\n print_every (int): how often to print average score (over last 100 episodes)\n pop_size (int): size of population at each iteration\n elite_frac (float): percentage of top performers to use in update\n sigma (float): standard deviation of additive noise\n \"\"\"\n n_elite=int(pop_size*elite_frac)\n\n scores_deque = deque(maxlen=100)\n scores = []\n best_weight = sigma*np.random.randn(agent.get_weights_dim())\n\n for i_iteration in range(1, n_iterations+1):\n weights_pop = [best_weight + (sigma*np.random.randn(agent.get_weights_dim())) for i in range(pop_size)]\n rewards = np.array([agent.evaluate(weights, gamma, max_t) for weights in weights_pop])\n\n elite_idxs = rewards.argsort()[-n_elite:]\n elite_weights = [weights_pop[i] for i in elite_idxs]\n best_weight = np.array(elite_weights).mean(axis=0)\n\n reward = agent.evaluate(best_weight, gamma=1.0)\n scores_deque.append(reward)\n scores.append(reward)\n \n torch.save(agent.state_dict(), 'checkpoint.pth')\n \n if i_iteration % print_every == 0:\n print('Episode {}\\tAverage Score: {:.2f}'.format(i_iteration, np.mean(scores_deque)))\n\n if np.mean(scores_deque)>=90.0:\n print('\\nEnvironment solved in {:d} iterations!\\tAverage Score: {:.2f}'.format(i_iteration-100, np.mean(scores_deque)))\n break\n return scores\n\nscores = cem()\n\n# plot the scores\nfig = plt.figure()\nax = fig.add_subplot(111)\nplt.plot(np.arange(1, len(scores)+1), scores)\nplt.ylabel('Score')\nplt.xlabel('Episode #')\nplt.show()",
"/home/oxygen/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py:1340: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.\n warnings.warn(\"nn.functional.tanh is deprecated. Use torch.tanh instead.\")\n"
]
],
[
[
"### 4. Watch a Smart Agent!\n\nIn the next code cell, you will load the trained weights from file to watch a smart agent!",
"_____no_output_____"
]
],
[
[
"# load the weights from file\nagent.load_state_dict(torch.load('checkpoint.pth'))\n\nstate = env.reset()\nwhile True:\n state = torch.from_numpy(state).float().to(device)\n with torch.no_grad():\n action = agent(state)\n env.render()\n next_state, reward, done, _ = env.step(action)\n state = next_state\n if done:\n break\n\nenv.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbd9f7eac10f489176deafee065177b462bfdf5 | 225,291 | ipynb | Jupyter Notebook | C/Tree_2k/benchmark/scaling_overhead.ipynb | Gjacquenot/training-material | 16b29962bf5683f97a1072d961dd9f31e7468b8d | [
"CC-BY-4.0"
] | 115 | 2015-03-23T13:34:42.000Z | 2022-03-21T00:27:21.000Z | C/Tree_2k/benchmark/scaling_overhead.ipynb | Gjacquenot/training-material | 16b29962bf5683f97a1072d961dd9f31e7468b8d | [
"CC-BY-4.0"
] | 56 | 2015-02-25T15:04:26.000Z | 2022-01-03T07:42:48.000Z | C/Tree_2k/benchmark/scaling_overhead.ipynb | Gjacquenot/training-material | 16b29962bf5683f97a1072d961dd9f31e7468b8d | [
"CC-BY-4.0"
] | 59 | 2015-11-26T11:44:51.000Z | 2022-03-21T00:27:22.000Z | 169.391729 | 71,476 | 0.834898 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"scaling = pd.read_csv('scaling.txt', sep=' ')\nscaling",
"_____no_output_____"
],
[
"def plot_query_size_eff(scaling, point_nrs):\n plt.figure(figsize=(14, 10))\n plt.xscale('log')\n plt.xlabel('fraction of points returned by query')\n plt.yscale('log')\n plt.ylabel('ratio of naive versus tree_2k query time')\n data = []\n legend_keys = []\n for nr_points in point_nrs:\n sel = scaling.nr_points == nr_points\n data.extend([scaling[sel]['nr_results']/scaling[sel]['nr_points'],\n scaling[sel]['n_q_time']/scaling[sel]['q_time'], '-'])\n legend_keys.append('nr. points = {0:d}'.format(nr_points))\n plt.plot(*data);\n plt.legend(legend_keys)",
"_____no_output_____"
],
[
"plot_query_size_eff(scaling, [100000, 500000, 750000, 1000000])",
"_____no_output_____"
],
[
"plt.figure(figsize=(14, 10))\nplt.xlabel('nr. points')\nplt.ylabel('time for insert (s)')\nplt.plot(scaling['nr_points'], scaling['i_time']);",
"_____no_output_____"
],
[
"plt.figure(figsize=(14, 10))\nplt.xlabel('nr. points')\nplt.ylabel('time for query (s), radius = 0.1')\nsel = scaling.radius == 0.10\nplt.plot(scaling[sel]['nr_points'], scaling[sel]['q_time'],\n scaling[sel]['nr_points'], scaling[sel]['n_q_time'])\nplt.legend(['quad tree query', 'naive query'], loc='upper left');",
"_____no_output_____"
],
[
"overhead = pd.read_csv('overhead.txt', sep=' ')\noverhead",
"_____no_output_____"
],
[
"plt.figure(figsize=(14, 10))\nplt.xlabel('nr. points')\nplt.ylabel('size (MB)')\nplt.plot(overhead['nr_points'], overhead['total_size']/1024**2);",
"_____no_output_____"
],
[
"plt.figure(figsize=(14, 10))\nplt.xlabel('nr. points')\nplt.ylabel('avg. nr. nodes per bucket')\nplt.ylim([0.0, 10.0])\nplt.plot(overhead['nr_points'], overhead['avg_points']);",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbda0ea47209a5fec9c0bcb1eb76ca1e406a498 | 5,722 | ipynb | Jupyter Notebook | 02sics.ipynb | heyredhat/qbism | 192333b725495c6b66582f7a7b0b4c18a2f392a4 | [
"Apache-2.0"
] | 2 | 2021-01-27T18:39:12.000Z | 2021-02-01T06:57:02.000Z | 02sics.ipynb | heyredhat/qbism | 192333b725495c6b66582f7a7b0b4c18a2f392a4 | [
"Apache-2.0"
] | null | null | null | 02sics.ipynb | heyredhat/qbism | 192333b725495c6b66582f7a7b0b4c18a2f392a4 | [
"Apache-2.0"
] | null | null | null | 28.187192 | 215 | 0.486893 | [
[
[
"#hide\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"# default_exp sics",
"_____no_output_____"
]
],
[
[
"# SIC-POVM's",
"_____no_output_____"
]
],
[
[
"#export\nimport numpy as np\nimport qutip as qt\nfrom itertools import product\nimport pkg_resources\n\nfrom qbism.weyl_heisenberg import *",
"_____no_output_____"
],
[
"#export \ndef load_fiducial(d):\n r\"\"\"\n Loads a Weyl-Heisenberg covariant SIC-POVM fiducial state of dimension $d$ from the repository provided here: http://www.physics.umb.edu/Research/QBism/solutions.html.\n \"\"\"\n f = pkg_resources.resource_stream(__name__, \"sic_povms/d%d.txt\" % d)\n fiducial = []\n for line in f:\n if line.strip() != \"\":\n re, im = [float(v) for v in line.split()]\n fiducial.append(re + 1j*im)\n return qt.Qobj(np.array(fiducial)).unit()",
"_____no_output_____"
],
[
"#export\ndef sic_states(d):\n r\"\"\"\n Returns the $d^2$ states constructed by applying the Weyl-Heisenberg displacement operators to the SIC-POVM fiducial state of dimension $d$.\n \"\"\"\n return weyl_heisenberg_states(load_fiducial(d))",
"_____no_output_____"
],
[
"#export\ndef sic_povm(d):\n r\"\"\"\n Returns a SIC-POVM of dimension $d$.\n \"\"\"\n return weyl_heisenberg_povm(load_fiducial(d))",
"_____no_output_____"
],
[
"#export \ndef sic_gram(d):\n r\"\"\"\n The Gram matrix is the matrix of inner products: $G_{i,j} = \\langle v_{i} \\mid v_{j} \\rangle$. For a SIC, this matrix should consist of 1's along the diagonal, and all other entries $\\frac{1}{d+1}$:\n\n $$ \\begin{pmatrix} 1 & \\frac{1}{d+1} & \\frac{1}{d+1} & \\dots \\\\\n \\frac{1}{d+1} & 1 & \\frac{1}{d+1} & \\dots \\\\\n \\frac{1}{d+1} & \\frac{1}{d+1} & 1 & \\dots \\\\\n \\vdots & \\vdots & \\vdots & \\ddots \\end{pmatrix}$$\n\n \"\"\"\n return np.array([[1 if i == j else 1/(d+1) for j in range(d**2)] for i in range(d**2)])",
"_____no_output_____"
],
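[
"# A quick, hedged sanity check (not exported with the module): for a small\n# dimension, the squared overlaps of the generated SIC states should reproduce\n# sic_gram(d), whose off-diagonal entries are 1/(d+1). It assumes the d=2\n# fiducial file ships with the package, as load_fiducial expects.\nd = 2\nstates = sic_states(d)\nG = np.array([[np.abs(a.overlap(b))**2 for b in states] for a in states])\nassert np.allclose(G, sic_gram(d))",
"_____no_output_____"
],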
[
"#export\ndef hoggar_fiducial():\n r\"\"\"\n Returns a fiducial state for the exceptional SIC in dimension $8$, the Hoggar SIC.\n\n Unnormalized: $\\begin{pmatrix} -1 + 2i \\\\ 1 \\\\ 1 \\\\ 1 \\\\ 1 \\\\ 1 \\\\ 1 \\\\ 1 \\end{pmatrix}$.\n \"\"\"\n fiducial = qt.Qobj(np.array([-1 + 2j, 1, 1, 1, 1, 1, 1, 1])).unit()\n fiducial.dims = [[2,2,2],[1,1,1]]\n return fiducial",
"_____no_output_____"
],
[
"#export\ndef hoggar_indices():\n r\"\"\"\n Returns a list with entries $(a, b, c, d, e, f)$ for $a, b, c, d, e, f \\in [0, 1]$.\n \"\"\"\n return list(product([0,1], repeat=6))",
"_____no_output_____"
],
[
"#export\ndef hoggar_povm():\n r\"\"\"\n Constructs the Hoggar POVM, which is covariant under the tensor product of three copies of the $d=2$ Weyl-Heisenberg group. In other words, we apply the 64 displacement operators:\n\n $$ \\hat{D}_{a, b, c, d, e, f} = X^{a}Z^{b} \\otimes X^{c}Z^{d} \\otimes X^{e}Z^{f} $$\n\n To the Hoggar fiducial state, form the corresponding projectors, and rescale by $\\frac{1}{8}$.\n \"\"\"\n Z, X = clock(2), shift(2)\n indices = hoggar_indices()\n D = dict([(I, qt.tensor(X**I[0]*Z**I[1],\\\n X**I[2]*Z**I[3],\\\n X**I[4]*Z**I[5])) for I in indices])\n fiducial = hoggar_fiducial()\n hoggar_states = [D[I]*fiducial for I in indices]\n return [(1/8)*state*state.dag() for state in hoggar_states]",
"_____no_output_____"
]
],
[
[
"Let's make sure the Hoggar POVM is really a POVM:",
"_____no_output_____"
]
],
[
[
"ID = qt.identity(8)\nID.dims = [[2,2,2],[2,2,2]]\nassert np.allclose(sum(hoggar_povm()), ID)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbda9041d1e489eec682249ecab32477b4196c2 | 19,149 | ipynb | Jupyter Notebook | local_tz/first_database.ipynb | gcslui/Money-generator | 1b9e40296d30851344bb2bf06ad58ecf2e37d4fc | [
"BSD-3-Clause"
] | null | null | null | local_tz/first_database.ipynb | gcslui/Money-generator | 1b9e40296d30851344bb2bf06ad58ecf2e37d4fc | [
"BSD-3-Clause"
] | null | null | null | local_tz/first_database.ipynb | gcslui/Money-generator | 1b9e40296d30851344bb2bf06ad58ecf2e37d4fc | [
"BSD-3-Clause"
] | 1 | 2022-01-11T18:12:12.000Z | 2022-01-11T18:12:12.000Z | 33.244792 | 448 | 0.411092 | [
[
[
"import yfinance as yf\nimport sqlite3",
"_____no_output_____"
],
[
"msft = yf.Ticker(\"MSFT\")\n\n# get stock info\nmsft.info",
"_____no_output_____"
],
[
"# get historical market data\nhist = msft.history(period=\"7d\", interval='1m')\nhist['Ticker'] = \"MSFT\"\nhist['Year'] = hist.index.year\nhist['Month'] = hist.index.month\nhist['Day'] = hist.index.day\nhist['Hour'] = hist.index.hour\nhist['Minute'] = hist.index.minute\nhist = hist[['Year', 'Month', 'Day', 'Hour', 'Minute','Ticker', 'Open', 'High', 'Low', 'Close', 'Volume', 'Dividends', 'Stock Splits']]\nhist.head()",
"_____no_output_____"
],
[
"# get historical market data\nhist = msft.history(period=\"max\")\nhist['Ticker'] = \"MSFT\"\nhist['Year'] = hist.index.year\nhist['Month'] = hist.index.month\nhist['Day'] = hist.index.day\nhist = hist[['Year', 'Month', 'Day','Ticker', 'Open', 'High', 'Low', 'Close', 'Volume', 'Dividends', 'Stock Splits']]\nhist.head()",
"_____no_output_____"
],
[
"conn = sqlite3.connect('stocks.db')\nhist.to_sql('microsoft_prices', conn, if_exists='replace')",
"/home/tiantian/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py:2882: UserWarning: The spaces in these column names will not be changed. In pandas versions < 0.14, spaces were converted to underscores.\n method=method,\n"
]
],
[
[
"c = conn.cursor()\nc.execute(\"SELECT rowid, * FROM microsoft_prices\")\nitems = c.fetchall()\nprint(items[0])\nprint(items[-1])",
"_____no_output_____"
]
],
[
[
"#### PRAGMA schema.table_info(table-name);\n\nThis pragma returns one row for each column in the named table. Columns in the result set include the column name, data type, whether or not the column can be NULL, and the default value for the column. The \"pk\" column in the result set is zero for columns that are not part of the primary key, and is the index of the column in the primary key for columns that are part of the primary key.\n\nThe table named in the table_info pragma can also be a view.",
"_____no_output_____"
]
],
[
[
"c.execute(\"PRAGMA table_info(microsoft_prices)\")\ndtypes = c.fetchall()\nprint(dtypes)",
"[(0, 'Date', 'TIMESTAMP', 0, None, 0), (1, 'Year', 'INTEGER', 0, None, 0), (2, 'Month', 'INTEGER', 0, None, 0), (3, 'Day', 'INTEGER', 0, None, 0), (4, 'Ticker', 'TEXT', 0, None, 0), (5, 'Open', 'REAL', 0, None, 0), (6, 'High', 'REAL', 0, None, 0), (7, 'Low', 'REAL', 0, None, 0), (8, 'Close', 'REAL', 0, None, 0), (9, 'Volume', 'INTEGER', 0, None, 0), (10, 'Dividends', 'INTEGER', 0, None, 0), (11, 'Stock Splits', 'INTEGER', 0, None, 0)]\n"
],
[
"conn.close()",
"_____no_output_____"
]
]
] | [
"code",
"raw",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecbdaf9607280028ee7e0025423212355089af59 | 5,877 | ipynb | Jupyter Notebook | notebooks/visualize_average.ipynb | Julienbeaulieu/kaggle-computer-vision-competition | 7bc6bcb8b85d81ff1544040c403e356c0a3c8060 | [
"MIT"
] | 14 | 2020-12-07T22:24:17.000Z | 2022-03-30T05:11:55.000Z | notebooks/visualize_average.ipynb | Julienbeaulieu/kaggle-computer-vision-competition | 7bc6bcb8b85d81ff1544040c403e356c0a3c8060 | [
"MIT"
] | null | null | null | notebooks/visualize_average.ipynb | Julienbeaulieu/kaggle-computer-vision-competition | 7bc6bcb8b85d81ff1544040c403e356c0a3c8060 | [
"MIT"
] | 4 | 2020-02-22T17:54:23.000Z | 2022-01-31T06:41:11.000Z | 31.427807 | 115 | 0.595202 | [
[
[
"import pickle\nfrom pathlib import Path\nfrom PIL import Image\nimport numpy as np\n# Notebook widget for interactive exploration\nimport ipywidgets as widgets\nfrom ipywidgets import interact, interact_manual\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow\nimport cv2 as cv\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))",
"_____no_output_____"
],
[
"from dotenv import load_dotenv, find_dotenv\n\n# Load the .ENV path. \nload_dotenv(find_dotenv())\n\n# Get Env variable on the pathing. \nimport os\nPATH_DATA_INTERIM=os.getenv(\"PATH_DATA_INTERIM\")\nPATH_DATA_RAW=os.getenv(\"PATH_DATA_RAW\")",
"_____no_output_____"
],
[
"# Load the training data, ~5GB\nimport pickle\nfrom pathlib import Path\nwith open(Path(PATH_DATA_INTERIM) / \"train_data.p\", 'rb') as pickle_file:\n data_train = pickle.load(pickle_file)\n# Load the validation data, about 1.3GB\nwith open(Path(PATH_DATA_INTERIM) / \"val_data.p\", 'rb') as pickle_file:\n data_val = pickle.load(pickle_file)",
"_____no_output_____"
],
[
"# Compute mean train image\n# Tuple slicing: https://stackoverflow.com/questions/33829535/how-to-slice-a-list-of-tuples-in-python\n# Get a list of all the training images\nimages_train = list(list(zip(*data_train))[0])\n# Compute its mean \nimage_train_mean = np.mean(images_train, axis=0)\n\n# Compute mean validation image\nimages_val = list(list(zip(*data_val))[0])\n# Compute its mean \nimage_val_mean = np.mean(images_val, axis=0)",
"_____no_output_____"
],
[
"f, axarr = plt.subplots(1,2)\naxarr[0].imshow(image_train_mean, cmap='gray')\naxarr[0].set_title(\"Training Dataset\")\naxarr[1].imshow(image_val_mean, cmap='gray') \naxarr[1].set_title(\"Validation Dataset\")\nf.set_size_inches(18.5, 10.5)\nf.suptitle('Composite Figure of Mean Image Intensity (No Normalization)', fontsize=40) ",
"_____no_output_____"
],
[
"# Quick image masking: https://stackoverflow.com/questions/40449781/convert-image-np-array-to-binary-image\n# Make all non-white pixels black. \n\n@interact\ndef show_binary_image(threshold=(0,255,0.5)):\n image_train_mean_binarized = 1.0 * (image_train_mean < threshold)\n image_val_mean_binarized = 1.0 * (image_val_mean < threshold)\n \n f, axarr = plt.subplots(1,2)\n axarr[0].imshow(image_train_mean_binarized, cmap='gray')\n axarr[0].set_title(\"Training Dataset\")\n axarr[1].imshow(image_val_mean_binarized, cmap='gray') \n axarr[1].set_title(\"Validation Dataset\")\n f.set_size_inches(18.5, 10.5)\n f.suptitle('Negative Threshold Filtered Images', fontsize=45)",
"_____no_output_____"
],
[
"# Quick image masking: https://stackoverflow.com/questions/40449781/convert-image-np-array-to-binary-image\n# Make all non-white pixels black. \n\n@interact\ndef show_binary_image(threshold=(0,255,0.5)): \n img_train_norm = np.zeros((137, 236))\n img_train_norm = cv.normalize(image_train_mean, img_train_norm, 0, 255, cv.NORM_MINMAX)\n \n img_val_norm = np.zeros((137, 236))\n img_val_norm = cv.normalize(image_val_mean, img_val_norm, 0, 255, cv.NORM_MINMAX)\n \n image_train_norm_binarized = 1.0 * (img_train_norm < threshold)\n image_val_norm_binarized = 1.0 * (img_val_norm < threshold)\n \n f, axarr = plt.subplots(1,2)\n axarr[0].imshow(image_train_norm_binarized, cmap='gray')\n axarr[0].set_title(\"Training Dataset\")\n axarr[1].imshow(image_val_norm_binarized, cmap='gray') \n axarr[1].set_title(\"Validation Dataset\")\n f.set_size_inches(18.5, 10.5)\n f.suptitle('Negative Threshold Filtered Images AFTER image intensity normalization', fontsize=45)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbdbcd153a68280adcccce8f365a3ea5380fd67 | 59,155 | ipynb | Jupyter Notebook | code/prey.ipynb | robingh42/ModSimPy | 82c917f26f2661f7c8ee01bdc243d1872ca8340a | [
"MIT"
] | null | null | null | code/prey.ipynb | robingh42/ModSimPy | 82c917f26f2661f7c8ee01bdc243d1872ca8340a | [
"MIT"
] | null | null | null | code/prey.ipynb | robingh42/ModSimPy | 82c917f26f2661f7c8ee01bdc243d1872ca8340a | [
"MIT"
] | null | null | null | 321.494565 | 29,004 | 0.928392 | [
[
[
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *",
"_____no_output_____"
],
[
"def make_state(elk, wolves):\n \"\"\"Make a system object for the predator-prey model.\n \n \n returns: State object\n \n \"\"\"\n\n return State(elk=elk,wolves=wolves)",
"_____no_output_____"
],
[
"def make_system(alpha, beta, gamma, delta):\n \"\"\"Make a system object for the predator-prey model.\n \n alpha,\n beta,\n gamma,\n delta\n \n returns: State object\n \"\"\"\n\n return System(alpha=alpha, beta=beta, gamma=gamma, delta=delta)",
"_____no_output_____"
],
[
"def update(state,syst):\n \n dx=(syst.alpha * state.elk) - (syst.beta * state.elk * state.wolves)\n \n dy=(syst.delta * state.elk * state.wolves) - (syst.gamma * state.wolves)\n \n return make_state(state.elk+dx,state.wolves+dy)\n ",
"_____no_output_____"
],
[
"def run_sim(state,syst,update,end):\n \n frame = TimeFrame(columns=('elk','wolves'))\n frame.row[0] = state\n for t in linrange(end):\n frame.row[t+1] = update(frame.row[t], syst)\n\n \n return frame\n",
"_____no_output_____"
],
[
"state=State(elk=1,wolves=1)\nsyst=System(alpha=.05,beta=.1,gamma=.1,delta=.1)\nsim=run_sim(state,syst,update,200)#1650\n#print(sim)\nplot(sim.index,sim.elk,sim.wolves)",
"_____no_output_____"
],
[
"sim.plot()\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbdcc324e9be562b96e411c6aa7666a6fc95a5a | 9,805 | ipynb | Jupyter Notebook | Gyakorlat/Blokk_5/7.feladatsor.ipynb | feipaat/NumerikusMatematika_DE | ca20df1fffd62786343d308c97fbc9d1725a5ff4 | [
"MIT"
] | null | null | null | Gyakorlat/Blokk_5/7.feladatsor.ipynb | feipaat/NumerikusMatematika_DE | ca20df1fffd62786343d308c97fbc9d1725a5ff4 | [
"MIT"
] | null | null | null | Gyakorlat/Blokk_5/7.feladatsor.ipynb | feipaat/NumerikusMatematika_DE | ca20df1fffd62786343d308c97fbc9d1725a5ff4 | [
"MIT"
] | 7 | 2020-03-25T08:54:15.000Z | 2020-05-16T09:00:07.000Z | 28.175287 | 231 | 0.519837 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecbde10dca209ae89281e2fd397bf77d23dcbcbe | 252,275 | ipynb | Jupyter Notebook | cs231n/assignment3/NetworkVisualization-PyTorch.ipynb | Rolight/cs224n | 0031771c0662426af3cf9935051e3d35d08cca20 | [
"Apache-2.0"
] | null | null | null | cs231n/assignment3/NetworkVisualization-PyTorch.ipynb | Rolight/cs224n | 0031771c0662426af3cf9935051e3d35d08cca20 | [
"Apache-2.0"
] | null | null | null | cs231n/assignment3/NetworkVisualization-PyTorch.ipynb | Rolight/cs224n | 0031771c0662426af3cf9935051e3d35d08cca20 | [
"Apache-2.0"
] | null | null | null | 372.636632 | 224,736 | 0.910923 | [
[
[
"# Network Visualization (PyTorch)\n\nIn this notebook we will explore the use of *image gradients* for generating new images.\n\nWhen training a model, we define a loss function which measures our current unhappiness with the model's performance; we then use backpropagation to compute the gradient of the loss with respect to the model parameters, and perform gradient descent on the model parameters to minimize the loss.\n\nHere we will do something slightly different. We will start from a convolutional neural network model which has been pretrained to perform image classification on the ImageNet dataset. We will use this model to define a loss function which quantifies our current unhappiness with our image, then use backpropagation to compute the gradient of this loss with respect to the pixels of the image. We will then keep the model fixed, and perform gradient descent *on the image* to synthesize a new image which minimizes the loss.\n\nIn this notebook we will explore three techniques for image generation:\n\n1. **Saliency Maps**: Saliency maps are a quick way to tell which part of the image influenced the classification decision made by the network.\n2. **Fooling Images**: We can perturb an input image so that it appears the same to humans, but will be misclassified by the pretrained network.\n3. **Class Visualization**: We can synthesize an image to maximize the classification score of a particular class; this can give us some sense of what the network is looking for when it classifies images of that class.\n\nThis notebook uses **PyTorch**; we have provided another notebook which explores the same concepts in TensorFlow. You only need to complete one of these two notebooks.",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torch.autograd import Variable\nimport torchvision\nimport torchvision.transforms as T\nimport random\n\nimport numpy as np\nfrom scipy.ndimage.filters import gaussian_filter1d\nimport matplotlib.pyplot as plt\nfrom cs231n.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD\nfrom PIL import Image\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'",
"_____no_output_____"
]
],
[
[
"### Helper Functions\n\nOur pretrained model was trained on images that had been preprocessed by subtracting the per-color mean and dividing by the per-color standard deviation. We define a few helper functions for performing and undoing this preprocessing. You don't need to do anything in this cell.",
"_____no_output_____"
]
],
[
[
"def preprocess(img, size=224):\n transform = T.Compose([\n T.Scale(size),\n T.ToTensor(),\n T.Normalize(mean=SQUEEZENET_MEAN.tolist(),\n std=SQUEEZENET_STD.tolist()),\n T.Lambda(lambda x: x[None]),\n ])\n return transform(img)\n\ndef deprocess(img, should_rescale=True):\n transform = T.Compose([\n T.Lambda(lambda x: x[0]),\n T.Normalize(mean=[0, 0, 0], std=(1.0 / SQUEEZENET_STD).tolist()),\n T.Normalize(mean=(-SQUEEZENET_MEAN).tolist(), std=[1, 1, 1]),\n T.Lambda(rescale) if should_rescale else T.Lambda(lambda x: x),\n T.ToPILImage(),\n ])\n return transform(img)\n\ndef rescale(x):\n low, high = x.min(), x.max()\n x_rescaled = (x - low) / (high - low)\n return x_rescaled\n \ndef blur_image(X, sigma=1):\n X_np = X.cpu().clone().numpy()\n X_np = gaussian_filter1d(X_np, sigma, axis=2)\n X_np = gaussian_filter1d(X_np, sigma, axis=3)\n X.copy_(torch.Tensor(X_np).type_as(X))\n return X",
"_____no_output_____"
]
],
[
[
"# Pretrained Model\n\nFor all of our image generation experiments, we will start with a convolutional neural network which was pretrained to perform image classification on ImageNet. We can use any model here, but for the purposes of this assignment we will use SqueezeNet [1], which achieves accuracies comparable to AlexNet but with a significantly reduced parameter count and computational complexity.\n\nUsing SqueezeNet rather than AlexNet or VGG or ResNet means that we can easily perform all image generation experiments on CPU.\n\n[1] Iandola et al, \"SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size\", arXiv 2016",
"_____no_output_____"
]
],
[
[
"# Download and load the pretrained SqueezeNet model.\nmodel = torchvision.models.squeezenet1_1(pretrained=True)\n\n# We don't want to train the model, so tell PyTorch not to compute gradients\n# with respect to model parameters.\nfor param in model.parameters():\n param.requires_grad = False",
"Downloading: \"https://download.pytorch.org/models/squeezenet1_1-f364aa15.pth\" to /Users/loulinhui/.torch/models/squeezenet1_1-f364aa15.pth\n100.0%\n"
]
],
[
[
"## Load some ImageNet images\nWe have provided a few example images from the validation set of the ImageNet ILSVRC 2012 Classification dataset. To download these images, change to `cs231n/datasets/` and run `get_imagenet_val.sh`.\n\nSince they come from the validation set, our pretrained model did not see these images during training.\n\nRun the following cell to visualize some of these images, along with their ground-truth labels.",
"_____no_output_____"
]
],
[
[
"from cs231n.data_utils import load_imagenet_val\nX, y, class_names = load_imagenet_val(num=5)\n\nplt.figure(figsize=(12, 6))\nfor i in range(5):\n plt.subplot(1, 5, i + 1)\n plt.imshow(X[i])\n plt.title(class_names[y[i]])\n plt.axis('off')\nplt.gcf().tight_layout()",
"_____no_output_____"
]
],
[
[
"# Saliency Maps\nUsing this pretrained model, we will compute class saliency maps as described in Section 3.1 of [2].\n\nA **saliency map** tells us the degree to which each pixel in the image affects the classification score for that image. To compute it, we compute the gradient of the unnormalized score corresponding to the correct class (which is a scalar) with respect to the pixels of the image. If the image has shape `(3, H, W)` then this gradient will also have shape `(3, H, W)`; for each pixel in the image, this gradient tells us the amount by which the classification score will change if the pixel changes by a small amount. To compute the saliency map, we take the absolute value of this gradient, then take the maximum value over the 3 input channels; the final saliency map thus has shape `(H, W)` and all entries are nonnegative.\n\n[2] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. \"Deep Inside Convolutional Networks: Visualising\nImage Classification Models and Saliency Maps\", ICLR Workshop 2014.",
"_____no_output_____"
],
[
"### Hint: PyTorch `gather` method\nRecall in Assignment 1 you needed to select one element from each row of a matrix; if `s` is an numpy array of shape `(N, C)` and `y` is a numpy array of shape `(N,`) containing integers `0 <= y[i] < C`, then `s[np.arange(N), y]` is a numpy array of shape `(N,)` which selects one element from each element in `s` using the indices in `y`.\n\nIn PyTorch you can perform the same operation using the `gather()` method. If `s` is a PyTorch Tensor or Variable of shape `(N, C)` and `y` is a PyTorch Tensor or Variable of shape `(N,)` containing longs in the range `0 <= y[i] < C`, then\n\n`s.gather(1, y.view(-1, 1)).squeeze()`\n\nwill be a PyTorch Tensor (or Variable) of shape `(N,)` containing one entry from each row of `s`, selected according to the indices in `y`.\n\nrun the following cell to see an example.\n\nYou can also read the documentation for [the gather method](http://pytorch.org/docs/torch.html#torch.gather)\nand [the squeeze method](http://pytorch.org/docs/torch.html#torch.squeeze).",
"_____no_output_____"
]
],
[
[
"# Example of using gather to select one entry from each row in PyTorch\ndef gather_example():\n N, C = 4, 5\n s = torch.randn(N, C)\n y = torch.LongTensor([1, 2, 1, 3])\n print(s)\n print(y)\n print(s.gather(1, y.view(-1, 1)).squeeze())\ngather_example()",
"\n-0.2129 -0.8634 -0.2618 -1.2205 0.1274\n 0.3648 -0.3214 0.1871 -0.5713 -0.0152\n-0.0492 -0.1968 1.0963 1.1588 0.5513\n-2.2880 1.1102 -1.2491 1.2168 -1.6631\n[torch.FloatTensor of size 4x5]\n\n\n 1\n 2\n 1\n 3\n[torch.LongTensor of size 4]\n\n\n-0.8634\n 0.1871\n-0.1968\n 1.2168\n[torch.FloatTensor of size 4]\n\n"
],
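[
"# A minimal sketch (kept separate from the compute_saliency_maps skeleton below)\n# of the gradient computation described above. It reuses model, X, y, Image, and\n# preprocess from earlier cells; everything defined in this cell is illustration only.\nmodel.eval()\nX_sketch = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0)\nX_var = Variable(X_sketch, requires_grad=True)\ny_var = Variable(torch.LongTensor(y))\ncorrect_scores = model(X_var).gather(1, y_var.view(-1, 1)).squeeze()\ncorrect_scores.backward(torch.ones(correct_scores.size()))\nsaliency_sketch = X_var.grad.data.abs().max(dim=1)[0].squeeze()\nprint(saliency_sketch.size())  # one (H, W) saliency map per input image",
"_____no_output_____"
],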
[
"def compute_saliency_maps(X, y, model):\n \"\"\"\n Compute a class saliency map using the model for images X and labels y.\n\n Input:\n - X: Input images; Tensor of shape (N, 3, H, W)\n - y: Labels for X; LongTensor of shape (N,)\n - model: A pretrained CNN that will be used to compute the saliency map.\n\n Returns:\n - saliency: A Tensor of shape (N, H, W) giving the saliency maps for the input\n images.\n \"\"\"\n # Make sure the model is in \"test\" mode\n model.eval()\n \n # Wrap the input tensors in Variables\n X_var = Variable(X, requires_grad=True)\n y_var = Variable(y)\n saliency = None\n ##############################################################################\n # TODO: Implement this function. Perform a forward and backward pass through #\n # the model to compute the gradient of the correct class score with respect #\n # to each input image. You first want to compute the loss over the correct #\n # scores, and then compute the gradients with a backward pass. #\n ##############################################################################\n pass\n ##############################################################################\n # END OF YOUR CODE #\n ##############################################################################\n return saliency",
"_____no_output_____"
]
],
[
[
"Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on our example images from the ImageNet validation set:",
"_____no_output_____"
]
],
[
[
"def show_saliency_maps(X, y):\n # Convert X and y from numpy arrays to Torch Tensors\n X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0)\n y_tensor = torch.LongTensor(y)\n\n # Compute saliency maps for images in X\n saliency = compute_saliency_maps(X_tensor, y_tensor, model)\n\n # Convert the saliency map from Torch Tensor to numpy array and show images\n # and saliency maps together.\n saliency = saliency.numpy()\n N = X.shape[0]\n for i in range(N):\n plt.subplot(2, N, i + 1)\n plt.imshow(X[i])\n plt.axis('off')\n plt.title(class_names[y[i]])\n plt.subplot(2, N, N + i + 1)\n plt.imshow(saliency[i], cmap=plt.cm.hot)\n plt.axis('off')\n plt.gcf().set_size_inches(12, 5)\n plt.show()\n\nshow_saliency_maps(X, y)",
"_____no_output_____"
]
],
[
[
"# Fooling Images\nWe can also use image gradients to generate \"fooling images\" as discussed in [3]. Given an image and a target class, we can perform gradient **ascent** over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.\n\n[3] Szegedy et al, \"Intriguing properties of neural networks\", ICLR 2014",
"_____no_output_____"
]
],
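[
[
"# A hedged sketch of the normalized gradient-ascent loop described above, kept\n# separate from the make_fooling_image skeleton below. It reuses model, X, Image,\n# and preprocess from earlier cells; the target label 6 and the 100-iteration cap\n# are illustrative choices only.\nX_fool_sketch = preprocess(Image.fromarray(X[0])).clone()\nfool_var = Variable(X_fool_sketch, requires_grad=True)\ntarget_sketch = 6\nfor it in range(100):\n    scores = model(fool_var)\n    if scores.data.max(1)[1].view(-1)[0] == target_sketch:\n        print('fooled after %d iterations' % it)\n        break\n    target_score = scores[:, target_sketch].sum()\n    target_score.backward()\n    g = fool_var.grad.data\n    fool_var.data += 1.0 * g / g.norm()\n    fool_var.grad.data.zero_()",
"_____no_output_____"
]
],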
[
[
"def make_fooling_image(X, target_y, model):\n \"\"\"\n Generate a fooling image that is close to X, but that the model classifies\n as target_y.\n\n Inputs:\n - X: Input image; Tensor of shape (1, 3, 224, 224)\n - target_y: An integer in the range [0, 1000)\n - model: A pretrained CNN\n\n Returns:\n - X_fooling: An image that is close to X, but that is classifed as target_y\n by the model.\n \"\"\"\n # Initialize our fooling image to the input image, and wrap it in a Variable.\n X_fooling = X.clone()\n X_fooling_var = Variable(X_fooling, requires_grad=True)\n \n learning_rate = 1\n ##############################################################################\n # TODO: Generate a fooling image X_fooling that the model will classify as #\n # the class target_y. You should perform gradient ascent on the score of the #\n # target class, stopping when the model is fooled. #\n # When computing an update step, first normalize the gradient: #\n # dX = learning_rate * g / ||g||_2 #\n # #\n # You should write a training loop. #\n # #\n # HINT: For most examples, you should be able to generate a fooling image #\n # in fewer than 100 iterations of gradient ascent. #\n # You can print your progress over iterations to check your algorithm. #\n ##############################################################################\n pass\n ##############################################################################\n # END OF YOUR CODE #\n ##############################################################################\n return X_fooling",
"_____no_output_____"
]
],
[
[
"Run the following cell to generate a fooling image:",
"_____no_output_____"
]
],
[
[
"idx = 0\ntarget_y = 6\n\nX_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0)\nX_fooling = make_fooling_image(X_tensor[idx:idx+1], target_y, model)\n\nscores = model(Variable(X_fooling))\nassert target_y == scores.data.max(1)[1][0, 0], 'The model is not fooled!'",
"_____no_output_____"
]
],
[
[
"After generating a fooling image, run the following cell to visualize the original image, the fooling image, as well as the difference between them.",
"_____no_output_____"
]
],
[
[
"X_fooling_np = deprocess(X_fooling.clone())\nX_fooling_np = np.asarray(X_fooling_np).astype(np.uint8)\n\nplt.subplot(1, 4, 1)\nplt.imshow(X[idx])\nplt.title(class_names[y[idx]])\nplt.axis('off')\n\nplt.subplot(1, 4, 2)\nplt.imshow(X_fooling_np)\nplt.title(class_names[target_y])\nplt.axis('off')\n\nplt.subplot(1, 4, 3)\nX_pre = preprocess(Image.fromarray(X[idx]))\ndiff = np.asarray(deprocess(X_fooling - X_pre, should_rescale=False))\nplt.imshow(diff)\nplt.title('Difference')\nplt.axis('off')\n\nplt.subplot(1, 4, 4)\ndiff = np.asarray(deprocess(10 * (X_fooling - X_pre), should_rescale=False))\nplt.imshow(diff)\nplt.title('Magnified difference (10x)')\nplt.axis('off')\n\nplt.gcf().set_size_inches(12, 5)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Class visualization\nBy starting with a random noise image and performing gradient ascent on a target class, we can generate an image that the network will recognize as the target class. This idea was first presented in [2]; [3] extended this idea by suggesting several regularization techniques that can improve the quality of the generated image.\n\nConcretely, let $I$ be an image and let $y$ be a target class. Let $s_y(I)$ be the score that a convolutional network assigns to the image $I$ for class $y$; note that these are raw unnormalized scores, not class probabilities. We wish to generate an image $I^*$ that achieves a high score for the class $y$ by solving the problem\n\n$$\nI^* = \\arg\\max_I s_y(I) - R(I)\n$$\n\nwhere $R$ is a (possibly implicit) regularizer (note the sign of $R(I)$ in the argmax: we want to minimize this regularization term). We can solve this optimization problem using gradient ascent, computing gradients with respect to the generated image. We will use (explicit) L2 regularization of the form\n\n$$\nR(I) = \\lambda \\|I\\|_2^2\n$$\n\n**and** implicit regularization as suggested by [3] by periodically blurring the generated image. We can solve this problem using gradient ascent on the generated image.\n\nIn the cell below, complete the implementation of the `create_class_visualization` function.\n\n[2] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. \"Deep Inside Convolutional Networks: Visualising\nImage Classification Models and Saliency Maps\", ICLR Workshop 2014.\n\n[3] Yosinski et al, \"Understanding Neural Networks Through Deep Visualization\", ICML 2015 Deep Learning Workshop",
"_____no_output_____"
]
],
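[
[
"# A hedged sketch of a single gradient-ascent step with explicit L2 regularization,\n# as described above; it is illustration only and separate from the full\n# create_class_visualization loop below. The class index 76 and the hyperparameter\n# values mirror the defaults used later but are otherwise arbitrary.\nimg_sketch = torch.randn(1, 3, 224, 224)\nimg_var_sketch = Variable(img_sketch, requires_grad=True)\ntarget_sketch, l2_reg_sketch, lr_sketch = 76, 1e-3, 25\nscore = model(img_var_sketch)[:, target_sketch].sum()\nobjective = score - l2_reg_sketch * (img_var_sketch ** 2).sum()\nobjective.backward()\nimg_var_sketch.data += lr_sketch * img_var_sketch.grad.data\nimg_var_sketch.grad.data.zero_()",
"_____no_output_____"
]
],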
[
[
"def jitter(X, ox, oy):\n \"\"\"\n Helper function to randomly jitter an image.\n \n Inputs\n - X: PyTorch Tensor of shape (N, C, H, W)\n - ox, oy: Integers giving number of pixels to jitter along W and H axes\n \n Returns: A new PyTorch Tensor of shape (N, C, H, W)\n \"\"\"\n if ox != 0:\n left = X[:, :, :, :-ox]\n right = X[:, :, :, -ox:]\n X = torch.cat([right, left], dim=3)\n if oy != 0:\n top = X[:, :, :-oy]\n bottom = X[:, :, -oy:]\n X = torch.cat([bottom, top], dim=2)\n return X",
"_____no_output_____"
],
[
"def create_class_visualization(target_y, model, dtype, **kwargs):\n \"\"\"\n Generate an image to maximize the score of target_y under a pretrained model.\n \n Inputs:\n - target_y: Integer in the range [0, 1000) giving the index of the class\n - model: A pretrained CNN that will be used to generate the image\n - dtype: Torch datatype to use for computations\n \n Keyword arguments:\n - l2_reg: Strength of L2 regularization on the image\n - learning_rate: How big of a step to take\n - num_iterations: How many iterations to use\n - blur_every: How often to blur the image as an implicit regularizer\n - max_jitter: How much to gjitter the image as an implicit regularizer\n - show_every: How often to show the intermediate result\n \"\"\"\n model.type(dtype)\n l2_reg = kwargs.pop('l2_reg', 1e-3)\n learning_rate = kwargs.pop('learning_rate', 25)\n num_iterations = kwargs.pop('num_iterations', 100)\n blur_every = kwargs.pop('blur_every', 10)\n max_jitter = kwargs.pop('max_jitter', 16)\n show_every = kwargs.pop('show_every', 25)\n\n # Randomly initialize the image as a PyTorch Tensor, and also wrap it in\n # a PyTorch Variable.\n img = torch.randn(1, 3, 224, 224).mul_(1.0).type(dtype)\n img_var = Variable(img, requires_grad=True)\n\n for t in range(num_iterations):\n # Randomly jitter the image a bit; this gives slightly nicer results\n ox, oy = random.randint(0, max_jitter), random.randint(0, max_jitter)\n img.copy_(jitter(img, ox, oy))\n\n ########################################################################\n # TODO: Use the model to compute the gradient of the score for the #\n # class target_y with respect to the pixels of the image, and make a #\n # gradient step on the image using the learning rate. Don't forget the #\n # L2 regularization term! #\n # Be very careful about the signs of elements in your code. #\n ########################################################################\n pass\n ########################################################################\n # END OF YOUR CODE #\n ########################################################################\n \n # Undo the random jitter\n img.copy_(jitter(img, -ox, -oy))\n\n # As regularizer, clamp and periodically blur the image\n for c in range(3):\n lo = float(-SQUEEZENET_MEAN[c] / SQUEEZENET_STD[c])\n hi = float((1.0 - SQUEEZENET_MEAN[c]) / SQUEEZENET_STD[c])\n img[:, c].clamp_(min=lo, max=hi)\n if t % blur_every == 0:\n blur_image(img, sigma=0.5)\n \n # Periodically show the image\n if t == 0 or (t + 1) % show_every == 0 or t == num_iterations - 1:\n plt.imshow(deprocess(img.clone().cpu()))\n class_name = class_names[target_y]\n plt.title('%s\\nIteration %d / %d' % (class_name, t + 1, num_iterations))\n plt.gcf().set_size_inches(4, 4)\n plt.axis('off')\n plt.show()\n\n return deprocess(img.cpu())",
"_____no_output_____"
]
],
[
[
"Once you have completed the implementation in the cell above, run the following cell to generate an image of a Tarantula:",
"_____no_output_____"
]
],
[
[
"dtype = torch.FloatTensor\n# dtype = torch.cuda.FloatTensor # Uncomment this to use GPU\nmodel.type(dtype)\n\ntarget_y = 76 # Tarantula\n# target_y = 78 # Tick\n# target_y = 187 # Yorkshire Terrier\n# target_y = 683 # Oboe\n# target_y = 366 # Gorilla\n# target_y = 604 # Hourglass\nout = create_class_visualization(target_y, model, dtype)",
"_____no_output_____"
]
],
[
[
"Try out your class visualization on other classes! You should also feel free to play with various hyperparameters to try and improve the quality of the generated image, but this is not required.",
"_____no_output_____"
]
],
[
[
"# target_y = 78 # Tick\n# target_y = 187 # Yorkshire Terrier\n# target_y = 683 # Oboe\n# target_y = 366 # Gorilla\n# target_y = 604 # Hourglass\ntarget_y = np.random.randint(1000)\nprint(class_names[target_y])\nX = create_class_visualization(target_y, model, dtype)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbdfe056345ef5c17f178eb19556ebc6b03abe6 | 11,359 | ipynb | Jupyter Notebook | examples/demo_pipeline_hyperopt.ipynb | kant/lale | 972b4284775f1c2dfa8d1692381fdb0ddd1cfafe | [
"Apache-2.0"
] | null | null | null | examples/demo_pipeline_hyperopt.ipynb | kant/lale | 972b4284775f1c2dfa8d1692381fdb0ddd1cfafe | [
"Apache-2.0"
] | null | null | null | examples/demo_pipeline_hyperopt.ipynb | kant/lale | 972b4284775f1c2dfa8d1692381fdb0ddd1cfafe | [
"Apache-2.0"
] | null | null | null | 40.567857 | 483 | 0.534466 | [
[
[
"import warnings\nwarnings.filterwarnings(\"ignore\")\nfrom lale.lib.lale import NoOp\nfrom lale.lib.sklearn import KNeighborsClassifier\nfrom lale.lib.sklearn import LogisticRegression\nfrom lale.lib.sklearn import Nystroem\nfrom lale.lib.sklearn import PCA\nfrom lale.operators import make_union, make_choice, make_pipeline\nfrom lale.helpers import to_graphviz",
"_____no_output_____"
]
],
[
[
"#### Lale provides an `|` combinator or a function make_choice() to allow only one of its arguments to be applied at once in the overall pipeline. In this example, the first step of the pipeline is a choice between Nystroem and NoOp. This means that the data will either be transformed using Nystroem or will be left as is (NoOp is a transformer that does nothing). The second step in the pipeline is a PCA, and the third step is again a choice between two popular classifiers.",
"_____no_output_____"
]
],
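A minimal sketch of the two equivalent ways to express the choice described above, reusing only the imports from the first cell (`NoOp`, `Nystroem`, `PCA`, `LogisticRegression`, `KNeighborsClassifier`, `make_choice`); the pipeline built in the next cells follows the same pattern:

```python
from lale.lib.lale import NoOp
from lale.lib.sklearn import Nystroem, PCA, LogisticRegression, KNeighborsClassifier
from lale.operators import make_choice

# the `|` combinator and make_choice() build the same kind of "pick one" node
choice_a = NoOp | Nystroem
choice_b = make_choice(NoOp, Nystroem)

# `>>` pipes the output of one step into the next step of the pipeline
planned = choice_a >> PCA >> make_choice(LogisticRegression, KNeighborsClassifier)
```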
[
[
"kernel_tfm_or_not = NoOp | Nystroem\n#kernel_tfm_or_not.to_graphviz()",
"_____no_output_____"
],
[
"tfm = PCA",
"_____no_output_____"
],
[
"clf = make_choice(LogisticRegression, KNeighborsClassifier)\nto_graphviz(clf)",
"_____no_output_____"
],
[
"optimizable = kernel_tfm_or_not >> tfm >> clf\nto_graphviz(optimizable)",
"_____no_output_____"
]
],
[
[
"#### Use the graph to select the best performing model for a dataset. We use Iris dataset from sklearn for this demonstration. Hyperopt is used to scan the hyperparameter search space and select the best performing path from the above graph. ",
"_____no_output_____"
]
],
[
[
"from lale.lib.lale.hyperopt_classifier import HyperoptClassifier\nfrom lale.datasets import load_iris_df\n\n(X_train, y_train), (X_test, y_test) = load_iris_df()",
"_____no_output_____"
],
[
"optimizer = HyperoptClassifier(model=optimizable, max_evals=1)\ntrained = optimizer.fit(X_train, y_train)\nto_graphviz(trained)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecbdfe274736f3156076109bc691da37b45741e4 | 85,256 | ipynb | Jupyter Notebook | _posts/ithome/2021/10.KNN/10.1.KNN(Classification-iris).ipynb | andy6804tw/andy6804tw.github.io | fb5a69d39e44548b6e6b7199d0c1702f776de339 | [
"MIT"
] | 1 | 2021-02-23T08:29:26.000Z | 2021-02-23T08:29:26.000Z | _posts/ithome/2021/10.KNN/10.1.KNN(Classification-iris).ipynb | andy6804tw/andy6804tw.github.io | fb5a69d39e44548b6e6b7199d0c1702f776de339 | [
"MIT"
] | null | null | null | _posts/ithome/2021/10.KNN/10.1.KNN(Classification-iris).ipynb | andy6804tw/andy6804tw.github.io | fb5a69d39e44548b6e6b7199d0c1702f776de339 | [
"MIT"
] | 5 | 2019-11-04T06:48:15.000Z | 2020-04-14T10:02:13.000Z | 165.224806 | 39,896 | 0.878894 | [
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.datasets import load_iris",
"_____no_output_____"
]
],
[
[
"## 1) 載入資料集",
"_____no_output_____"
]
],
[
[
"iris = load_iris()\ndf_data = pd.DataFrame(data= np.c_[iris['data'], iris['target']],\n columns= ['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm','Species'])\ndf_data",
"_____no_output_____"
]
],
[
[
"## 2) 切割訓練集與測試集",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nX = df_data.drop(labels=['Species'],axis=1).values # 移除Species並取得剩下欄位資料\ny = df_data['Species'].values\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)\n\nprint('train shape:', X_train.shape)\nprint('test shape:', X_test.shape)",
"train shape: (105, 4)\ntest shape: (45, 4)\n"
]
],
[
[
"## 建立 k-nearest neighbors(KNN) 模型\nParameters:\n- n_neighbors: 設定鄰居的數量(k),選取最近的k個點,預設為5。\n- algorithm: 搜尋數演算法{'auto','ball_tree','kd_tree','brute'},可選。\n- metric: 計算距離的方式,預設為歐幾里得距離。\n\nAttributes:\n- classes_: 取得類別陣列。\n- effective_metric_: 取得計算距離的公式。\n\nMethods:\n- fit: 放入X、y進行模型擬合。\n- predict: 預測並回傳預測類別。\n- score: 預測成功的比例。",
"_____no_output_____"
]
],
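Since the default `metric` mentioned above is the Euclidean distance, here is the standard formula for reference (a textbook definition, not something computed in this notebook), for two feature vectors with $n$ features:

```latex
d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}
```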
[
[
"from sklearn.neighbors import KNeighborsClassifier\n\n# 建立 KNN 模型\nknnModel = KNeighborsClassifier(n_neighbors=3)\n# 使用訓練資料訓練模型\nknnModel.fit(X_train,y_train)\n# 使用訓練資料預測分類\npredicted = knnModel.predict(X_train)",
"_____no_output_____"
]
],
[
[
"## 使用Score評估模型",
"_____no_output_____"
]
],
[
[
"# 預測成功的比例\nprint('訓練集: ',knnModel.score(X_train,y_train))\nprint('測試集: ',knnModel.score(X_test,y_test))",
"訓練集: 0.9619047619047619\n測試集: 0.9555555555555556\n"
]
],
[
[
"## 測試集真實分類",
"_____no_output_____"
]
],
[
[
"# 建立測試集的 DataFrme\ndf_test=pd.DataFrame(X_test, columns= ['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm'])\ndf_test['Species'] = y_test\npred = knnModel.predict(X_test)\ndf_test['Predict'] = pred",
"_____no_output_____"
],
[
"sns.lmplot(x=\"PetalLengthCm\", y=\"PetalWidthCm\", hue='Species', data=df_test, fit_reg=False, legend=False)\nplt.legend(title='target', loc='upper left', labels=['Iris-Setosa', 'Iris-Versicolour', 'Iris-Virginica'])\nplt.show()",
"/opt/anaconda3/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n"
]
],
[
[
"## KNN (測試集)預測結果",
"_____no_output_____"
]
],
[
[
"sns.lmplot(x=\"PetalLengthCm\", y=\"PetalWidthCm\", data=df_test, hue=\"Predict\", fit_reg=False, legend=False)\nplt.legend(title='target', loc='upper left', labels=['Iris-Setosa', 'Iris-Versicolour', 'Iris-Virginica'])\nplt.show()",
"/opt/anaconda3/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n"
]
],
[
[
"# 進階學習\n## 查看不同的K分類結果\n為了方便視覺化我們將原有的測試集特徵使用PCA降成2維。接著觀察在不同 K 的狀況下,分類的情形為何。",
"_____no_output_____"
]
],
[
[
"from matplotlib.colors import ListedColormap\n\ndef plot_decision_regions(X, y, classifier, test_idx = None, resolution=0.02):\n # setup marker generator and color map\n markers = ('s','x','o','^','v')\n colors = ('red','blue','lightgreen','gray','cyan')\n cmap = ListedColormap(colors[:len(np.unique(y))])\n\n # plot the decision surface\n x1_min, x1_max = X[:,0].min() - 1, X[:,0].max() + 1\n x2_min, x2_max = X[:,1].min() - 1, X[:,1].max() + 1\n\n xx1, xx2 = np.meshgrid(np.arange(x1_min,x1_max,resolution),\n np.arange(x2_min,x2_max,resolution))\n\n Z = classifier.predict(np.array([xx1.ravel(),xx2.ravel()]).T)\n\n Z = Z.reshape(xx1.shape)\n\n plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)\n plt.xlim(xx1.min(),xx1.max())\n plt.ylim(xx2.min(),xx2.max())\n\n for idx, cl in enumerate(np.unique(y)):\n plt.scatter(x=X[y==cl,0], y=X[y==cl,1],\n alpha=0.8, c=[cmap(idx)], marker=markers[idx],label=cl)\n if test_idx:\n X_test, y_test = X[test_idx,:], y[test_idx]\n plt.scatter(X_test[:, 0], X_test[:,1], c='',\n alpha=1.0, linewidth=1, marker='o',\n s=55, label='test set')\n",
"_____no_output_____"
],
[
"def knn_model(plot_dict, X, y, k):\n #create model\n model = KNeighborsClassifier(n_neighbors=k)\n\n #training\n model.fit(X, y)\n\n # Plot the decision boundary. For that, we will assign a color to each\n if k in plot_dict:\n plt.subplot(plot_dict[k])\n plt.tight_layout()\n plot_decision_regions(X,y,model)\n plt.title('Plot for K: %d'%k )",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA\npca = PCA(n_components=2, iterated_power=1)\ntrain_reduced = pca.fit_transform(X_train)\ntest_reduced = pca.fit_transform(X_test)",
"_____no_output_____"
]
],
[
[
"### KNN 訓練集 PCA 2 features",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(8.5, 6))\n\n# 調整 K\nplot_dict = {1:231,2:232,3:233,6:234,10:235,15:236}\nfor i in plot_dict:\n knn_model(plot_dict, train_reduced, y_train, i)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbe3423300b913fcd129a9c185bcd8a6043e50f | 43,921 | ipynb | Jupyter Notebook | module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb | afroman32/DS-Unit-4-Sprint-3-Deep-Learning | f3f04e1b091e08b21f7f8d86be9a3bb1f59dc5ac | [
"MIT"
] | null | null | null | module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb | afroman32/DS-Unit-4-Sprint-3-Deep-Learning | f3f04e1b091e08b21f7f8d86be9a3bb1f59dc5ac | [
"MIT"
] | null | null | null | module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb | afroman32/DS-Unit-4-Sprint-3-Deep-Learning | f3f04e1b091e08b21f7f8d86be9a3bb1f59dc5ac | [
"MIT"
] | null | null | null | 30.995766 | 519 | 0.591357 | [
[
[
"<a href=\"https://colab.research.google.com/github/afroman32/DS-Unit-4-Sprint-3-Deep-Learning/blob/main/module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"\n\n## *Data Science Unit 4 Sprint 3 Assignment 2*\n# Convolutional Neural Networks (CNNs)",
"_____no_output_____"
],
[
"# Assignment\n\n- <a href=\"#p1\">Part 1:</a> Pre-Trained Model\n- <a href=\"#p2\">Part 2:</a> Custom CNN Model\n- <a href=\"#p3\">Part 3:</a> CNN with Data Augmentation\n\n\nYou will apply three different CNN models to a binary image classification model using Keras. Classify images of Mountains (`./data/train/mountain/*`) and images of forests (`./data/train/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). \n\n|Mountain (+)|Forest (-)|\n|---|---|\n|||\n\nThe problem is relatively difficult given that the sample is tiny: there are about 350 observations per class. This sample size might be something that you can expect when prototyping an image classification problem/solution at work. Get accustomed to evaluating several different possible models.",
"_____no_output_____"
],
[
"# Pre - Trained Model\n<a id=\"p1\"></a>\n\nLoad a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1) - a 50 layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:\n\n```python\nimport numpy as np\n\nfrom tensorflow.keras.applications.resnet50 import ResNet50\nfrom tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\n\nfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2D\nfrom tensorflow.keras.models import Model # This is the functional API\n\nresnet = ResNet50(weights='imagenet', include_top=False)\n\n```\n\nThe `include_top` parameter in `ResNet50` will remove the full connected layers from the ResNet model. The next step is to turn off the training of the ResNet layers. We want to use the learned parameters without updating them in future training passes. \n\n```python\nfor layer in resnet.layers:\n layer.trainable = False\n```\n\nUsing the Keras functional API, we will need to additional additional full connected layers to our model. We we removed the top layers, we removed all preivous fully connected layers. In other words, we kept only the feature processing portions of our network. You can expert with additional layers beyond what's listed here. The `GlobalAveragePooling2D` layer functions as a really fancy flatten function by taking the average of each of the last convolutional layer outputs (which is two dimensional still). \n\n```python\nx = resnet.output\nx = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten\nx = Dense(1024, activation='relu')(x)\npredictions = Dense(1, activation='sigmoid')(x)\nmodel = Model(resnet.input, predictions)\n```\n\nYour assignment is to apply the transfer learning above to classify images of Mountains (`./data/train/mountain/*`) and images of forests (`./data/train/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). \n\nSteps to complete assignment: \n1. Load in Image Data into numpy arrays (`X`) \n2. Create a `y` for the labels\n3. Train your model with pre-trained layers from resnet\n4. Report your model's accuracy",
"_____no_output_____"
],
[
"-----\n\n# GPU on Colab\n\nIf you're working on Colab, you only have access to 2 processors so your model training will be slow. However, if you turn on the GPU instance that you have access to then you're model training will be faster! \n\n[**Instructions for turning on GPU on Colab**](https://colab.research.google.com/notebooks/gpu.ipynb)\n\n------",
"_____no_output_____"
],
[
"## Load in Data\n\nThis surprisingly more difficult than it seems, because you are working with directories of images instead of a single file. \n\nThis boiler plate will help you download a zipped version of the directory of images. The directory is organized into **train** and **validation** directories which you can use inside an `ImageGenerator` class to stream batches of images through your model. \n\n",
"_____no_output_____"
]
],
[
[
"from os import listdir\nfrom os.path import isfile, join\n\nimport matplotlib.pyplot as plt\n\nimport tensorflow as tf\nimport os\n\nimport numpy as np\n\nfrom tensorflow.keras.applications.resnet50 import ResNet50\nfrom tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\n\nfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2D\nfrom tensorflow.keras.models import Model # This is the functional API\n\nfrom tensorflow.keras import datasets\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten\n\nfrom keras.preprocessing.image import array_to_img, img_to_array, load_img",
"_____no_output_____"
],
[
"%matplotlib inline\n%load_ext tensorboard",
"_____no_output_____"
],
[
"# Clear any tensorboard logs from previous runs\n!rm -rf ./logs/",
"_____no_output_____"
]
],
[
[
"### Download & Summarize the Data\n\nThis step is completed for you. Just run the cells and review the results. ",
"_____no_output_____"
]
],
[
[
"# data url\n_URL = 'https://github.com/LambdaSchool/DS-Unit-4-Sprint-3-Deep-Learning/blob/main/module2-convolutional-neural-networks/data.zip?raw=true'\n\n# download data and save to `file_name`\nfile_name = './data.zip'\npath_to_zip = tf.keras.utils.get_file(file_name, origin=_URL, extract=True)\n\n# get absolute path to location of the data that we just downloaded\nPATH = os.path.join(os.path.dirname(path_to_zip), 'data')",
"_____no_output_____"
],
[
"# protip: go to your terminal and paste the output below and cd into it\n# explore it a bit...we'll come back to this later - muahahaha!!!\nPATH",
"_____no_output_____"
],
[
"# create train data dir path\ntrain_dir = os.path.join(PATH, 'train')\n\n# create validation data dir path\nvalidation_dir = os.path.join(PATH, 'validation')",
"_____no_output_____"
],
[
"# train directory with mountian data sub-dir \ntrain_mountain_dir = os.path.join(train_dir, 'mountain') \n\n# train directory with forest data sub-dir \ntrain_forest_dir = os.path.join(train_dir, 'forest') \n\n# validation directory with mountain data sub-dir \nvalidation_mountain_dir = os.path.join(validation_dir, 'mountain') \n\n# validation directory with forest data sub-dir \nvalidation_forest_dir = os.path.join(validation_dir, 'forest') ",
"_____no_output_____"
],
[
"# get the number of samples in each of the sub-dir \nnum_mountain_tr = len(os.listdir(train_mountain_dir))\nnum_forest_tr = len(os.listdir(train_forest_dir))\n\nnum_mountain_val = len(os.listdir(validation_mountain_dir))\nnum_forest_val = len(os.listdir(validation_forest_dir))\n\n# get the total numnber of sample for the train and validation sets\ntotal_train = num_mountain_tr + num_forest_tr\ntotal_val = num_mountain_val + num_forest_val",
"_____no_output_____"
],
[
"print('total training mountain images:', num_mountain_tr)\nprint('total training forest images:', num_forest_tr)\n\nprint('total validation mountain images:', num_mountain_val)\nprint('total validation forest images:', num_forest_val)\nprint(\"--\")\nprint(\"Total training images:\", total_train)\nprint(\"Total validation images:\", total_val)",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"### Keras `ImageGenerator` to Process the Data\n\nThis step is completed for you, but please review the code. The `ImageGenerator` class reads in batches of data from a directory and pass them to the model one batch at a time. Just like large text files, this method is advantageous, because it stifles the need to load a bunch of images into memory. \n\n**Check out the documentation for this class method:** [**Keras ImageGenerator Class**](https://keras.io/preprocessing/image/#imagedatagenerator-class). You'll expand it's use in the next section.",
"_____no_output_____"
]
],
[
[
"batch_size = 16\nepochs = 10 # feel free to change this value only after you've gone through the notebook once \nIMG_HEIGHT = 224\nIMG_WIDTH = 224",
"_____no_output_____"
],
[
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n\n# ImageDataGenerator can rescale data from within \nmax_pixel_val = 255.\nrescale = 1./max_pixel_val\n\n# Generator for our training data\ntrain_image_generator = ImageDataGenerator(rescale=rescale) \n \n# Generator for our validation data \nvalidation_image_generator = ImageDataGenerator(rescale=rescale) ",
"_____no_output_____"
],
[
"# Takes the path to a directory & generates batches of augmented data\ntrain_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,\n directory=train_dir,\n shuffle=True,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n class_mode='binary', \n color_mode='rgb')",
"_____no_output_____"
],
[
"# explore some of `train_data_gen` attributes to get a sense of what this object can do for you",
"_____no_output_____"
],
[
"# Takes the path to a directory & generates batches of augmented data\nval_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,\n directory=validation_dir,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n class_mode='binary', \n color_mode='rgb')",
"_____no_output_____"
],
[
"# explore some of `val_data_gen` attributes to get a sense of what this object can do for you\n",
"_____no_output_____"
]
],
[
[
"_____\n## Instantiate Model\n\nHere your job is to take the python code in the beginning of the notebook (in the markdown cell) and turn it into working code. \n\nMost of the code that you'll need to build a model is in that markdown cell, though you'll still need to compile the model.\n\nSome pseudo-code is provided as guide. ",
"_____no_output_____"
]
],
[
[
"import numpy as np\n \nfrom tensorflow.keras.applications.resnet50 import ResNet50\nfrom tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\n \nfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2D\nfrom tensorflow.keras.models import Model # This is the functional API\n \nresnet = ResNet50(weights='imagenet', include_top=False)",
"_____no_output_____"
],
[
"for layer in resnet.layers:\n layer.trainable = False",
"_____no_output_____"
],
[
"# take the model output layer as a starting point\nx = resnet.output\n\n# take a global average pool\nx = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten\n\n# add a trainable hidden layer with 1024 nodes \nx = Dense(1024, activation='relu')(x)\n\n# add a trainable output layer \npredictions = Dense(1, activation='sigmoid')(x)\n\n# put it all together using the Keras's Model api\nmodel = Model(resnet.input, predictions)",
"_____no_output_____"
],
[
"# the only code that is missing is the compile method \nmodel.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n",
"_____no_output_____"
]
],
[
[
"## Fit Model",
"_____no_output_____"
]
],
[
[
"# include the callback into the fit method\n# we'll launch tensorboard in the last section\nlogdir = os.path.join(\"logs\", \"resnet_model\")\ntensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)",
"_____no_output_____"
],
[
"history = model.fit(\n train_data_gen,\n steps_per_epoch=total_train // batch_size,\n epochs=epochs,\n validation_data=val_data_gen,\n validation_steps=total_val // batch_size,\n workers=10, # num should be 1 or 2 processors less than the total number of process on your machine\n callbacks=[tensorboard_callback]\n)",
"_____no_output_____"
]
],
[
[
"## Take Away\n\nThe above task is an exercising in using a pre-trained model in the context of **Transfer Learning**. \n\n**Transfer Learning** happens when you take a model that was trained on data set $A$ and applying it on data set $B$ (you may or may not choose to re-train the model.)\n\nWe loaded in a pre-trained model (meaning the weight values have been optimized in a previous fit), and updated the value of the weights by re-training them. Note that we didn't reset the model weights, what we did was \ncontinue their training on a different dataset, our data set. \n",
"_____no_output_____"
],
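A short recap sketch of what the code earlier in this notebook does (assuming the same `tensorflow.keras` imports): the pretrained convolutional weights stay frozen, and only the new head is trained on the mountain/forest data.

```python
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

resnet = ResNet50(weights='imagenet', include_top=False)
for layer in resnet.layers:
    layer.trainable = False  # pretrained feature extractor: weights are not updated

x = GlobalAveragePooling2D()(resnet.output)      # flatten-like pooling of the last conv maps
x = Dense(1024, activation='relu')(x)            # trainable hidden layer
predictions = Dense(1, activation='sigmoid')(x)  # trainable binary output
model = Model(resnet.input, predictions)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```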
[
"-----\n# Custom CNN Model\n\nIn this step, write and train your own convolutional neural network using Keras. You can use any architecture that suits you as long as it has at least one convolutional and one pooling layer at the beginning of the network - you can add more if you want. \n\n**Protip:** You'll be creating a 2nd instance of this same model in the next section. Instead of copying and pasting all this code, just embed it in a function called `def create_model()` that returns a complied model. \n\nFree free to reference the custom CNN model that we built together in the guided project. ",
"_____no_output_____"
]
],
[
[
"train_data_gen[0][0][1].shape",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nplt.imshow(train_data_gen[0][0][1]);",
"_____no_output_____"
],
[
"def create_model():\n \"\"\"\n Since we'll using this model again in the next section, it's useful to create a function \n that returns a compiled model.\n \"\"\"\n \n model = Sequential([\n Conv2D(32, (3,3), activation='relu', input_shape=(224, 224, 3)),\n MaxPooling2D((2,2)),\n Conv2D(64, (3,3), activation='relu'),\n MaxPooling2D((2,2)),\n Flatten(),\n Dense(64, activation='relu'),\n Dense(2, activation='softmax')\n ])\n\n # Compile the model\n model.compile(loss='sparse_categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\n# model.summary()\n return model",
"_____no_output_____"
],
[
"# instantiate a model \nmodel = create_model()",
"_____no_output_____"
],
[
"model.summary()",
"_____no_output_____"
],
[
"# include this callback in your fit method\n# we'll launch tensorboard in the last section\nlogdir = os.path.join(\"logs\", \"baseline_model\")\ntensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)",
"_____no_output_____"
],
[
"# Fit Model\nepochs = 10\nhistory = model.fit(\n train_data_gen,\n steps_per_epoch=total_train // batch_size,\n epochs=epochs,\n validation_data=val_data_gen,\n validation_steps=total_val // batch_size,\n workers=-1, # num should be 1 or 2 processors less than the total number of process on your machine \n callbacks=[tensorboard_callback]\n)",
"_____no_output_____"
]
],
[
[
"------\n# Custom CNN Model with Image Manipulations\n\nTo simulate an increase in a sample of image, you can apply image manipulation techniques: cropping, rotation, stretching, etc. Luckily Keras has some handy functions for us to apply these techniques to our mountain and forest example. You should be able to modify our image generator for the problem. Check out these resources to help you get started: \n\n1. [**Keras ImageGenerator Class**](https://keras.io/preprocessing/image/#imagedatagenerator-class) documentation for the tool that we need to use to augment our images.\n2. [**Building a powerful image classifier with very little data**](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html) This is an essentially a tutorial on how to use the ImageGenerator class to create augmented images. You can essentially copy and paste the relevant code, though don't do that blindly! \n ",
"_____no_output_____"
],
[
"### Use ImageDataGenerator to create augmented data\n\nUse explore the parameters for ImageDataGenerator that will enable you to generate augment version of images that we already have. Here are some of the relevant parameters to help you get started. \n\n- rescale\n- shear_range\n- zoom_range\n- horizontal_flip\n\n### Only create augmented images for the training data. \n\nWe want to be able to do a comparison with the same CNN model (or models) from above. \n\nIn order to do that, we will augment the training data but not the validation data. Then we'll compare the accuracy and loss on the validation set. \n\nThat way we are comparing the performance of the same model architecture on the same test set, the only different will be the augmented training data. Therefore, we'll be in a position to determine if augmenting our training data actually helped improve our model performance. \n\nThis is an example of a controlled experiment.",
"_____no_output_____"
]
],
[
[
"batch_size = 16\n\n# ImageDataGenerator can rescale data from within \nmax_pixel_val = 255.\nrescale = 1./max_pixel_val",
"_____no_output_____"
],
[
"from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img\n\ntrain_datagen_aug = ImageDataGenerator(\n# rotation_range=40,\n# width_shift_range=0.2,\n# height_shift_range=0.2,\n rescale=1./244,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n fill_mode='nearest')",
"_____no_output_____"
],
[
"# call the .flow_from_directory() method - save result to `train_data_gen_aug`\n# protip: be mindful of the parameters\nbatch_size = 16\ntrain_data_gen_aug = train_datagen_aug.flow_from_directory(\n train_dir,\n target_size=(244, 244),\n batch_size=batch_size,\n class_mode='binary') ",
"_____no_output_____"
]
],
[
[
"### Augment a Single Image\n\nNow that you have instantiate `ImageDataGenerator` object that created augmented images for the training set. Let's visual those augmented images to get a sense of what augmented images actually look like! \n",
"_____no_output_____"
]
],
[
[
"import os",
"_____no_output_____"
],
[
"# filename of image that we will augment\n# this is just one of 100's of training images to choose from\n# feel free to explore the training data directories and choose another image\n# this image was selected from the mountain images \nimg_to_aug = \"art1131.jpg\"\n\n# replace with YOUR home directory name \nhome_dir = \"makoa\"\n\n# create absolute file path to image file\n# path_to_single_img = os.path.normpath(\"C:/Users\\\\{0}\\\\.keras\\\\datasets\\\\data\\\\train\\\\mountain\\\\{1}\".format(home_dir, img_to_aug))\npath_to_single_img = \"./data/train/mountain/{1}\".format(home_dir, img_to_aug)\n# path_to_single_img = os.path.normpath(\"C://Users/{0}/Documents/Lambda/Unit 4/DS-Unit-4-Sprint-3-Deep-Learning/module2-convolutional-neural-networks/data/train/mountain/{1}\".format(home_dir, img_to_aug))\npath_to_single_img",
"_____no_output_____"
],
[
"# load in image from file and reshape\nimg = load_img(path_to_single_img)\nx = img_to_array(img) \nx = x.reshape((1,) + x.shape) ",
"_____no_output_____"
]
],
[
[
"Create a temporary director to store the augmented images that we will create in order to visualize. ",
"_____no_output_____"
]
],
[
[
"# this is a terminal command that we are running in the notebook by including a `!` \n# feel free to delete this temp dir after visualizing the aug images below\n# you only need to create this dir once\n!mkdir preview_img",
"_____no_output_____"
]
],
[
[
"Use the training data generator that we just created in order to create 20 augmented images of the same original image.",
"_____no_output_____"
]
],
[
[
"# the .flow() command below generates batches of randomly transformed images\n# and saves the results to the `preview_img` directory\n\n# create 20 aug images\nn_aug_imgs_to_create = 20\ni = 0\nfor batch in train_datagen_aug.flow(x, \n batch_size=1,\n save_to_dir='preview_img', \n save_prefix='art', \n save_format='jpeg'):\n i += 1\n if i > n_aug_imgs_to_create:\n break # otherwise the generator would loop indefinitely",
"_____no_output_____"
],
[
"# create list populated with augmented image filenames\nfile_names = [f for f in listdir(\"preview_img\") if isfile(join(\"preview_img\", f))]",
"_____no_output_____"
],
[
"# load and prep images into a list \naug_imgs = []\nfor filename in file_names:\n\n img = load_img(\"preview_img/{}\".format(filename)) \n a_img = img_to_array(img) \n a_img = a_img.reshape((1,) + x.shape) \n aug_imgs.append(a_img)",
"_____no_output_____"
],
[
"# notice that we are playing with a rank 5 tensor \n# we can ignore the first two numbers\na_img.shape",
"_____no_output_____"
],
[
"# the last 3 numbers have the actual image data\n# (num 1, num 2, num 3) = (img height, img width, color channels)\na_img[0][0].shape",
"_____no_output_____"
]
],
[
[
"### Visualize Augmented Images\n\nNotice that the augmented images are really just the original image with slight changes. One image might be a flipped with respect to the y-axis, or shifted along the x or y axis, or the right hand side might be clipped, or the image might be scaled up or down, or some combination of changes. ",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nplt.figure(figsize=(20,20))\n\nfor i, a_img in enumerate(aug_imgs):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(a_img[0][0]/255.)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Now the real question is does any of this ultimately matter? Do these changes actually help our model's ability to learn and generalize better? Well, let's go ahead and run that experiment. \n\n-----",
"_____no_output_____"
],
[
"## Re-train your custom CNN model using the augment dataset\n\nNow that we have created a data generator that creates augmented versions of the training images (and not the validation images). We can create a new instance of our custom CNN model with the same architecture, same parameters such as batch size and epochs and see if using augmented data helps. ",
"_____no_output_____"
]
],
[
[
"aug_model = create_model()",
"_____no_output_____"
],
[
"aug_model.summary()",
"_____no_output_____"
],
[
"logdir = os.path.join(\"logs\", \"aug_model\")\ntensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)",
"_____no_output_____"
],
[
"# Fit Model\n\nepochs = 10\n\nhistory = aug_model.fit(\n train_data_gen_aug,\n steps_per_epoch=total_train // batch_size,\n epochs=epochs,\n validation_data=val_data_gen,\n validation_steps=total_val // batch_size,\n workers=10, # num should be 1 or 2 processors less than the total number of process on your machine \n callbacks=[tensorboard_callback]\n)",
"_____no_output_____"
]
],
[
[
"-----\n\n# Compare Model Results",
"_____no_output_____"
]
],
[
[
"%tensorboard --logdir logs",
"_____no_output_____"
]
],
[
[
"------\n\n### Time for Questions \n\nTake a look at the `epoch_accuracy` plot and answer the following questions. \n\nOptionally move the `Smoothing` slider all the way to zero to view the raw scores. \n\nBy the way, your results may look different than your classmates depending on how you choose to build your custom CNN model. \n\n\n**Question 1:** Which of the three models performed the best? ",
"_____no_output_____"
],
[
"YOUR ANSWER HERE",
"_____no_output_____"
],
[
"**Question 2:** Did augmenting the training data help our custom CNN model improve its score? If so why, if not why not?",
"_____no_output_____"
],
[
"YOUR ANSWER HERE",
"_____no_output_____"
],
[
"**Question 3:** Could one or more of the three models benefit from training on more than 10 epochs? If so why, if not why not?",
"_____no_output_____"
],
[
"YOUR ANSWER HERE",
"_____no_output_____"
],
[
"**Question 4:** If you didn't use regularization for you custom CNN, do you think the baseline model and the aug model could improve their scores if regularization was used? If so why, if not why not?\n\nConsider reviewing your Sprint 2 Module Assginment 2 experimental results on regularization. ",
"_____no_output_____"
],
[
"YOUR ANSWER HERE",
"_____no_output_____"
],
[
"-----",
"_____no_output_____"
],
[
"# Resources and Stretch Goals\n\nStretch goals\n- Enhance your code to use classes/functions and accept terms to search and classes to look for in recognizing the downloaded images (e.g. download images of parties, recognize all that contain balloons)\n- Check out [other available pretrained networks](https://tfhub.dev), try some and compare\n- Image recognition/classification is somewhat solved, but *relationships* between entities and describing an image is not - check out some of the extended resources (e.g. [Visual Genome](https://visualgenome.org/)) on the topic\n- Transfer learning - using images you source yourself, [retrain a classifier](https://www.tensorflow.org/hub/tutorials/image_retraining) with a new category\n- (Not CNN related) Use [piexif](https://pypi.org/project/piexif/) to check out the metadata of images passed in to your system - see if they're from a national park! (Note - many images lack GPS metadata, so this won't work in most cases, but still cool)\n\nResources\n- [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) - influential paper (introduced ResNet)\n- [YOLO: Real-Time Object Detection](https://pjreddie.com/darknet/yolo/) - an influential convolution based object detection system, focused on inference speed (for applications to e.g. self driving vehicles)\n- [R-CNN, Fast R-CNN, Faster R-CNN, YOLO](https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e) - comparison of object detection systems\n- [Common Objects in Context](http://cocodataset.org/) - a large-scale object detection, segmentation, and captioning dataset\n- [Visual Genome](https://visualgenome.org/) - a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecbe3aeac13e0b6a166c0e2fb6297e152ea8aa23 | 1,729 | ipynb | Jupyter Notebook | installation/install-mysql-on-mac.ipynb | dobestan/pydata-101 | 041c8f127883a0fc3348ca4027ff0c573c012bdc | [
"MIT"
] | 31 | 2016-07-19T01:31:40.000Z | 2021-11-08T09:04:07.000Z | installation/install-mysql-on-mac.ipynb | dobestan/pydata-101 | 041c8f127883a0fc3348ca4027ff0c573c012bdc | [
"MIT"
] | null | null | null | installation/install-mysql-on-mac.ipynb | dobestan/pydata-101 | 041c8f127883a0fc3348ca4027ff0c573c012bdc | [
"MIT"
] | 20 | 2016-07-19T01:31:41.000Z | 2021-11-22T07:21:05.000Z | 20.583333 | 111 | 0.491035 | [
[
[
"# MAC OSX 에서 MySQL 데이터베이스 설치하기",
"_____no_output_____"
],
[
"## 1. Install brew.sh\n\n`homebrew` ( 이하 `brew` ) 는 맥용 Package Manager ( 패키지 매니저 ) 입니다.\n( 다음의 사이트에서 자세한 정보를 확인하실 수 있습니다: http://brew.sh/ )\n\n터미널 ( `Terminal.app` ) 을 켜시고 다음의 명령어를 복사, 붙여넣기 하시면 설치하실 수 있습니다:\n```\n$ /usr/bin/ruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\"\n```\n\n\n이렇게 설치한 `brew` 를 통해서, `MySQL` 뿐만이 아니라, 맥에서 사용하는 다양한 패키지들을 설치하고 이용하실 수 있습니다.",
"_____no_output_____"
],
[
"## 2. Install MySQL\n\n```\n$ brew install mysql\n```\n\n\n```\n$ mysql.server start\n$ mysql.server stop\n$ mysqld_safe --skip-grant-tables\n```\n\n다음의 명령어로 접속이 되는지 살펴봐주세요:\n\n```\n$ mysql -u root\n```",
"_____no_output_____"
],
[
"## 3. Install Sequel Pro\n\n다음의 링크에서 설치하실 수 있습니다: http://www.sequelpro.com/download",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecbe66c5ccfdce3ad7495f76759d286d5b5d1cb2 | 8,396 | ipynb | Jupyter Notebook | lastonebeforequiz2.ipynb | Zumrannain/Python-Data | 206feef7bc920657f2c853d349d1dd1aae234962 | [
"MIT"
] | null | null | null | lastonebeforequiz2.ipynb | Zumrannain/Python-Data | 206feef7bc920657f2c853d349d1dd1aae234962 | [
"MIT"
] | null | null | null | lastonebeforequiz2.ipynb | Zumrannain/Python-Data | 206feef7bc920657f2c853d349d1dd1aae234962 | [
"MIT"
] | null | null | null | 21.639175 | 92 | 0.45748 | [
[
[
"import math\nimport collections\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as pp\n\n%matplotlib inline",
"_____no_output_____"
],
[
"discography = np.load('discography.npy')",
"_____no_output_____"
],
[
"discography",
"_____no_output_____"
],
[
"discography[0]",
"_____no_output_____"
],
[
"discography[0][0]",
"_____no_output_____"
],
[
"discography[0][1]",
"_____no_output_____"
],
[
"discography[0]['title']",
"_____no_output_____"
],
[
"discography['title']",
"_____no_output_____"
],
[
"mindisco = np.zeros(len(discography), dtype = [('title', 'U16'),('release', 'M8[s]')])",
"_____no_output_____"
],
[
"mindisco['title'] = discography['title']\nmindisco['release'] = discography['release']",
"_____no_output_____"
],
[
"mindisco",
"_____no_output_____"
],
[
"np.datetime64('1969')",
"_____no_output_____"
],
[
"np.datetime64('1969-11-14')",
"_____no_output_____"
],
[
"np.datetime64('2015-02-03 12:00')",
"_____no_output_____"
],
[
"np.datetime64('2015-02-03 12:00') < np.datetime64('2015-02-03 18:00')",
"_____no_output_____"
],
[
"np.datetime64('2015-02-03 12:00') < np.datetime64('2015-02-03 12:00')",
"_____no_output_____"
],
[
"np.diff(discography['release'])",
"_____no_output_____"
],
[
"np.arange(np.datetime64('2015-02-03'), np.datetime64('2015-03-01'))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbe67c88ec15a3135695c432be5f9d2fad5ebf0 | 306,100 | ipynb | Jupyter Notebook | DistrictDataLabs/04-team5 | b1679b1b39018179fc2657c83de011f80c9fe7f9 | [
"Apache-2.0"
] | null | null | null | DistrictDataLabs/04-team5 | b1679b1b39018179fc2657c83de011f80c9fe7f9 | [
"Apache-2.0"
] | null | null | null | DistrictDataLabs/04-team5 | b1679b1b39018179fc2657c83de011f80c9fe7f9 | [
"Apache-2.0"
] | 1 | 2020-05-13T11:43:43.000Z | 2020-05-13T11:43:43.000Z | 167.910038 | 47,738 | 0.886207 | [
[
[
"### Analyzing the World Bank's Twitter Feed, Judy Yang, DAT10 Project\n### Part 4. Text Analysis\n\n1. Tokenization, word counts\n\n2. Prediction Linear Regression\n\n3. Topic Modelling\n\n4. Predict High/Low Popular Tweets\n\n5. Term Frequency\n\n6. Sentiment Analysis",
"_____no_output_____"
]
],
[
[
"pwd",
"_____no_output_____"
],
[
"from datetime import datetime\nimport time\nimport json\nimport operator \nimport preprocess\nfrom collections import Counter\n#from textblob import TextBlob\n\nimport pandas as pd\nfrom pandas import ExcelWriter\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n#% sign \n\nimport numpy as np\nimport scipy as sp\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.linear_model import LogisticRegression\nfrom gensim import corpora, models, similarities\nfrom collections import defaultdict\nfrom sklearn import metrics\nfrom textblob import TextBlob, Word\nfrom nltk.stem.snowball import SnowballStemmer\n\nfrom ttp import ttp\n\npd.options.display.max_columns = 50\npd.options.display.max_rows= 50\npd.options.display.width= 120",
"_____no_output_____"
],
[
"# Create excel to save outputs from this notebook\nwriter = ExcelWriter('./data/Project04_outputs_worldbank_21feb2016.xlsx')",
"_____no_output_____"
],
[
"wb = pd.read_pickle('./data/WorldBank_all_processed_17feb_2016')",
"_____no_output_____"
],
[
"wb = wb[(wb.is_RT==0)]\nwb.shape",
"_____no_output_____"
],
[
"wb=wb[wb.user_screen_name==\"WorldBank\"]",
"_____no_output_____"
],
[
"wb['favorite75']=np.where(wb.favorite_count>=49, 1, 0)\nwb['retweet75']=np.where(wb.retweet_count>=70, 1, 0)\nwb['favorite0']=np.where(wb.favorite_count>0, 1, 0)\nwb['retweet0']=np.where(wb.retweet_count>0, 1, 0)",
"_____no_output_____"
],
[
"wb.favorite75.describe()",
"_____no_output_____"
],
[
"wb.retweet75.describe()",
"_____no_output_____"
],
[
"wb = wb.reset_index()",
"_____no_output_____"
],
[
"list(wb.columns.values)",
"_____no_output_____"
]
],
[
[
"### Check top tweets",
"_____no_output_____"
]
],
[
[
"#Save the tweets with the top retweets and favorite counts\ntop_retweets=wb[(wb.retweet_count>=300)].sort_values(\"retweet_count\", ascending=False)\ntop_retweets.to_excel(writer,'top_retweets')\n\ntop_fav=wb[(wb.favorite_count>=300)].sort_values(\"favorite_count\", ascending=False)\ntop_fav.to_excel(writer,'top_favs')",
"_____no_output_____"
]
],
[
[
"### Calculate the likelihood of top favorite and retweet for each word/token ",
"_____no_output_____"
]
],
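The "likelihood" computed in the cells that follow is a simple smoothed count ratio per token; the +1 pseudo-counts that appear in the code avoid division by zero:

```latex
\text{ratio}(t) = \frac{n_{\text{high}}(t) + 1}{n_{\text{low}}(t) + 1}
```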
[
[
"# instantiate the vectorizer\nvect = CountVectorizer(stop_words='english', analyzer='word', ngram_range=(1,1))\n\n# learn the vocabulary of ALL messages and save it\nvect.fit(wb.text_clean)\n#this is a list\nall_tokens = vect.get_feature_names()\nvect",
"_____no_output_____"
],
[
"# create separate DataFrames for high retweet and low retweet\nrhi = wb[wb.retweet75==1]\nrlo = wb[wb.retweet75==0]\n\nfhi = wb[wb.favorite75==1]\nflo = wb[wb.favorite75==0]",
"_____no_output_____"
],
[
"# create document-term matrices for retweet high and low\nrhi_dtm = vect.transform(rhi.text_clean)\nrlo_dtm = vect.transform(rlo.text_clean)\n\n# count how many times EACH token appears across ALL retweet high/low messages\nrhi_counts = np.sum(rhi_dtm.toarray(), axis=0)\nrlo_counts = np.sum(rlo_dtm.toarray(), axis=0)",
"_____no_output_____"
],
[
"# create document-term matrices for favorite high and low\nfhi_dtm = vect.transform(fhi.text_clean)\nflo_dtm = vect.transform(flo.text_clean)\n\n# count how many times EACH token appears across ALL favorite high/low messages\nfhi_counts = np.sum(fhi_dtm.toarray(), axis=0)\nflo_counts = np.sum(flo_dtm.toarray(), axis=0)",
"_____no_output_____"
],
[
"# create a DataFrame of tokens with their separate favorite high and low counts\ntoken_counts = pd.DataFrame({'token':all_tokens, 'flo':flo_counts, 'fhi':fhi_counts, 'rlo':rlo_counts, 'rhi':rhi_counts})\n\n# add one to retweet/favorite high and low counts to avoid dividing by zero (in the step that follows)\n#pseudo counts\ntoken_counts['rlo'] = token_counts.rlo + 1\ntoken_counts['rhi'] = token_counts.rhi + 1\ntoken_counts['flo'] = token_counts.flo + 1\ntoken_counts['fhi'] = token_counts.fhi + 1\n\n# calculate ratio of high-low for each token\ntoken_counts['fav_ratio'] = token_counts.fhi / token_counts.flo\ntoken_counts['retweet_ratio'] = token_counts.rhi / token_counts.rlo",
"_____no_output_____"
],
[
"#export to excel\ncols=['token', 'fhi', 'flo','rhi', 'rlo', 'fav_ratio', 'retweet_ratio']\ntoken_counts=token_counts[cols]\ntoken_counts.to_excel(writer,'ratio_tokens_all_textclean_0')\ntoken_counts.sort_values(\"fav_ratio\", ascending=False).head()",
"_____no_output_____"
],
[
"# Make the PairGrid\ntoken_counts_graph=token_counts.sort_values(\"retweet_ratio\", ascending=False).head(20)\n\nsns.set(style=\"whitegrid\")\n\ng = sns.PairGrid(token_counts_graph.sort_values(\"retweet_ratio\", ascending=False),\n x_vars=token_counts_graph.columns[1:], y_vars=[\"token\"],\n size=8, aspect=.25)\n\n# Draw a dot plot using the stripplot function\ng.map(sns.stripplot, size=10, orient=\"h\",\n palette=\"Reds_r\", edgecolor=\"gray\")\n\n# Use the same x axis limits on all columns and add better labels\ng.set(xlim=(0, 20), xlabel=\"values\", ylabel=\"\")\n\n# Use semantically meaningful titles for the columns\ntitles = [\"N Favorite High\", \"N Favorite Low\", \"N Retweet High\", \"N Retweet Lo\", \"Favorite Likelihood\", \"Retweet Likelihood\"]\n\nfor ax, title in zip(g.axes.flat, titles):\n\n # Set a different title for each axes\n ax.set(title=title)\n\n # Make the grid horizontal instead of vertical\n ax.xaxis.grid(False)\n ax.yaxis.grid(True)\n\nsns.despine(left=True, bottom=True)",
"_____no_output_____"
],
[
"# Make the PairGrid\ntoken_counts_graph=token_counts.sort_values(\"rhi\", ascending=False).head(20)\n\nsns.set(style=\"whitegrid\")\n\ng = sns.PairGrid(token_counts_graph.sort_values(\"rhi\", ascending=False),\n x_vars=token_counts_graph.columns[1:], y_vars=[\"token\"],\n size=8, aspect=.25)\n\n# Draw a dot plot using the stripplot function\ng.map(sns.stripplot, size=10, orient=\"h\",\n palette=\"Reds_r\", edgecolor=\"gray\")\n\n# Use the same x axis limits on all columns and add better labels\ng.set(xlim=(0, 350), xlabel=\"values\", ylabel=\"\")\n\n# Use semantically meaningful titles for the columns\ntitles = [\"N Favorite High\", \"N Favorite Low\", \"N Retweet High\", \"N Retweet Lo\", \"Favorite Likelihood\", \"Retweet Likelihood\"]\n\nfor ax, title in zip(g.axes.flat, titles):\n\n # Set a different title for each axes\n ax.set(title=title)\n\n # Make the grid horizontal instead of vertical\n ax.xaxis.grid(False)\n ax.yaxis.grid(True)\n\nsns.despine(left=True, bottom=True)",
"_____no_output_____"
]
],
[
[
"### Term Frequency and Inverse Document Frequency:\n- **What:** Computes \"relative frequency\" that a word appears in a document compared to its frequency across all documents\n- **Why:** More useful than \"term frequency\" for identifying \"important\" words in each document (high frequency in that document, low frequency in other documents)\n- **Notes:** Used for search engine scoring, text summarization, document clustering",
"_____no_output_____"
]
],
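For reference, one common formulation of the weighting described above (scikit-learn's `TfidfVectorizer` differs slightly, adding smoothing and L2 normalization by default) is:

```latex
\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \times \log\frac{N}{\mathrm{df}(t)}
```

where tf(t, d) is the count of term t in document d, N is the number of documents, and df(t) is the number of documents containing t.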
[
[
"# create a document-term matrix using TF-IDF\nvect = TfidfVectorizer(stop_words='english',analyzer='word', ngram_range=(1,1))\ndtm = vect.fit_transform(wb.tags)\nfeatures = vect.get_feature_names()\ndtm.shape",
"_____no_output_____"
],
[
"#See the dtm matrix\n#pd.DataFrame(dtm.toarray(), columns=vect.get_feature_names())",
"_____no_output_____"
]
],
[
[
"### Topic Modelling ",
"_____no_output_____"
]
],
[
[
"X=wb.text_clean",
"_____no_output_____"
],
[
"stoplist = set(CountVectorizer(stop_words='english').get_stop_words() )",
"_____no_output_____"
],
[
"texts = [[word for word in document.lower().split() if word not in stoplist] for document in list(X)]\n\n# count up the frequency of each word\nfrequency = defaultdict(int)\nfor text in texts:\n for token in text:\n frequency[token] += 1 \n \n# (2) remove words that only occur a small number of times, fixing a feature space that's needlessly big.\n# once in the whole corpus, not just once in a single document\n\ntexts = [[token for token in text if frequency[token] > 1] for text in texts]\n\ndictionary = corpora.Dictionary(texts)\ncorpus = [dictionary.doc2bow(text) for text in texts]\n\nlda = models.LdaModel(corpus, id2word=dictionary, num_topics=10, alpha = 'auto')\n#lda.show_topics()\n\n",
"WARNING:gensim.models.ldamodel:too few updates, training might not converge; consider increasing the number of passes or iterations to improve accuracy\n"
],
[
"lda.top_topics(corpus, num_words=5)",
"_____no_output_____"
]
],
[
[
"** Topic model only the top retweet and favorite tweets **",
"_____no_output_____"
]
],
[
[
"X=wb.text_clean[wb.retweet_count>30]",
"_____no_output_____"
],
[
"texts = [[word for word in document.lower().split() if word not in stoplist] for document in list(X)]\n\n# count up the frequency of each word\nfrequency = defaultdict(int)\nfor text in texts:\n for token in text:\n frequency[token] += 1 \n \n# remove words that only occur a small number of times, fixing a feature space that's needlessly big.\n# once in the whole corpus, not just once in a single document\ntexts = [[token for token in text if frequency[token] > 1] for text in texts]\n\n\ndictionary = corpora.Dictionary(texts)\n\ncorpus = [dictionary.doc2bow(text) for text in texts]\n\nlda = models.LdaModel(corpus, id2word=dictionary, num_topics=10, alpha = 'auto')\n\nlda.print_topics()",
"WARNING:gensim.models.ldamodel:too few updates, training might not converge; consider increasing the number of passes or iterations to improve accuracy\n"
],
[
"lda.top_topics(corpus, num_words=5)",
"_____no_output_____"
]
],
[
[
"### Predictions: what determines high retweets and favorites?",
"_____no_output_____"
],
[
"** Linear Regression **\n\nHow much of retweet or favorite responses can be explained by non-text",
"_____no_output_____"
]
],
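The helper defined in the next cell reports the root-mean-squared error on the held-out test set; for m test tweets with true counts y and predictions y-hat it is:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2}
```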
[
[
"from sklearn.linear_model import LinearRegression\nfrom sklearn.cross_validation import train_test_split",
"_____no_output_____"
],
[
"# Exercise : define a function that accepts a list of features and returns testing RMSE\ndef train_test_rmse(cols):\n \n X = wb[cols]\n y= wb.favorite_count\n \n X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=99)\n \n # instantiate and fit\n linreg = LinearRegression()\n linreg.fit(X_train, y_train)\n\n # print the coefficients\n true=y_test\n pred=linreg.predict(X_test)\n rmse=np.sqrt(metrics.mean_squared_error(true, pred))\n \n return rmse",
"_____no_output_____"
],
[
"# compare different sets of features\n#1) has_at \nprint train_test_rmse(['has_at'])\n\n#2) has a hashtag\nprint train_test_rmse(['has_ht'])\n\n#3) has a hashtag\nprint train_test_rmse(['has_link'])\n\n#4) has_at, has a hashtag, has a link, is a RT\nprint train_test_rmse(['has_at', 'has_ht', 'has_link'])",
"41.7834073067\n41.9224317737\n41.897582725\n41.7904268735\n"
]
],
[
[
"Comparing testing RMSE with null RMSE\nNull RMSE is the RMSE that could be achieved by always predicting the mean response value. It is a benchmark against which you may want to measure your regression model.",
"_____no_output_____"
]
],
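A minimal sketch of the null RMSE benchmark described above, reusing `wb`, `train_test_split`, `metrics`, and `np` from earlier cells (the feature column and random_state mirror the `train_test_rmse` helper; the resulting number is not recorded here):

```python
X = wb[['has_at']]
y = wb.favorite_count
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=99)

# always predict the training-set mean favorite count
y_null = np.full(len(y_test), y_train.mean())
print np.sqrt(metrics.mean_squared_error(y_test, y_null))
```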
[
[
"print train_test_rmse(['has_at'])",
"41.7834073067\n"
]
],
[
[
"**Logistic Regression**",
"_____no_output_____"
]
],
[
[
"# train a logistic regression model\nfrom sklearn.linear_model import LogisticRegression\nlogreg = LogisticRegression(C=1e9)\nfrom sklearn import metrics\nfrom sklearn.cross_validation import cross_val_score",
"_____no_output_____"
],
[
"cols=['has_at', 'has_ht', 'has_link']",
"_____no_output_____"
],
[
"# split the new DataFrame into training and testing sets\nX= wb[cols]\ny=wb.retweet75\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)\n\nlogreg.fit(X_train, y_train)\n\n# make predictions for testing set\ny_pred_class = logreg.predict(X_test)\n\n# calculate testing accuracy\nprint metrics.accuracy_score(y_test, y_pred_class)",
"0.721751412429\n"
],
[
"# null accuracy\ny_test.value_counts().head(1) / len(y_test)",
"_____no_output_____"
],
[
"# predict probability of survival\ny_pred_prob = logreg.predict_proba(X_test)[:, 1]\n\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.rcParams['font.size'] = 14\n\n# plot ROC curve\nfpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)\nplt.plot(fpr, tpr)\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.0])\nplt.xlabel('False Positive Rate (1 - Specificity)')\nplt.ylabel('True Positive Rate (Sensitivity)')",
"_____no_output_____"
],
[
"# histogram of predicted probabilities grouped by actual response value\ndf = pd.DataFrame({'probability':y_pred_prob, 'actual':y_test})\ndf.hist(column='probability', by='actual', sharex=True, sharey=True)",
"_____no_output_____"
],
[
"# calculate AUC\nprint metrics.roc_auc_score(y_test, y_pred_prob)",
"0.582087476532\n"
],
[
"# calculate cross-validated AUC\ncross_val_score(logreg, X, y, cv=10, scoring='roc_auc').mean()",
"_____no_output_____"
]
],
[
[
"** Naive Bayes, predict top retweet/favorite or not **",
"_____no_output_____"
]
],
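Note that the cells below actually fit a logistic regression on the document-term matrix. A hedged sketch of the Naive Bayes variant the heading refers to, reusing the `MultinomialNB` import from the top of the notebook and the `X_train_dtm` / `X_test_dtm` matrices built a few cells further down:

```python
from sklearn.naive_bayes import MultinomialNB

nb = MultinomialNB()
nb.fit(X_train_dtm, y_train)
y_pred_class_nb = nb.predict(X_test_dtm)
print metrics.accuracy_score(y_test, y_pred_class_nb)
```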
[
[
"# split the new DataFrame into training and testing sets\nX=wb.text\n#y=wb.retweet75\ny=wb.favorite75\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)",
"_____no_output_____"
],
[
"type(y_test)",
"_____no_output_____"
],
[
"# use CountVectorizer to create document-term matrices from X_train and X_test\n# remove English stop words\nvect = CountVectorizer(stop_words='english', lowercase=True)\nX_train_dtm = vect.fit_transform(X_train)\nX_test_dtm = vect.transform(X_test) ",
"_____no_output_____"
],
[
"# train a logistic regression model\nlogreg.fit(X_train_dtm, y_train)\n\n# make predictions for testing set\ny_pred_class = logreg.predict(X_test_dtm)\ny_pred_prob = logreg.predict_proba(X_test_dtm)[:, 1]\n\n# calculate accuracy\nprint metrics.accuracy_score(y_test, y_pred_class)",
"0.771186440678\n"
],
[
"# calculate null accuracy\ny_test_binary = np.where(y_test==1, 1, 0)\nmax(y_test_binary.mean(), 1 - y_test_binary.mean())",
"_____no_output_____"
],
[
"# define a function that accepts a vectorizer and calculates the accuracy\ndef tokenize_test(vect):\n X_train_dtm = vect.fit_transform(X_train)\n print 'Features: ', X_train_dtm.shape[1]\n X_test_dtm = vect.transform(X_test)\n logreg.fit(X_train_dtm, y_train)\n y_pred_class = logreg.predict(X_test_dtm)\n print 'Accuracy: ', metrics.accuracy_score(y_test, y_pred_class)",
"_____no_output_____"
],
[
"# include 1-grams and 2-grams\nvect = CountVectorizer(ngram_range=(1, 2))\ntokenize_test(vect)",
"Features: 30392\nAccuracy: 0.778248587571\n"
],
[
"plt.rcParams['figure.figsize'] = (8, 6)\nplt.rcParams['font.size'] = 14\n\n# plot ROC curve\nfpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)\nplt.plot(fpr, tpr)\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.0])\nplt.xlabel('False Positive Rate (1 - Specificity)')\nplt.ylabel('True Positive Rate (Sensitivity)')",
"_____no_output_____"
],
[
"# histogram of predicted probabilities grouped by actual response value\ndf = pd.DataFrame({'probability':y_pred_prob, 'actual':y_test})\ndf.hist(column='probability', by='actual', sharex=True, sharey=True)",
"_____no_output_____"
],
[
"# calculate AUC\nprint metrics.roc_auc_score(y_test, y_pred_prob)",
"0.76968641115\n"
],
[
"# calculate cross-validated AUC\n#cross_val_score(logreg, X, y, cv=10, scoring='roc_auc').mean()",
"_____no_output_____"
]
],
[
[
"### Sentiment Analysis",
"_____no_output_____"
]
],
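A quick illustration (with made-up example strings) of the polarity score used below; TextBlob's polarity ranges from -1.0 (most negative) to +1.0 (most positive):

```python
from textblob import TextBlob

print TextBlob("great progress on poverty").sentiment.polarity
print TextBlob("a terrible setback").sentiment.polarity
```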
[
[
"# define a function that accepts text and returns the polarity\ndef detect_sentiment(text):\n return TextBlob(text.decode('utf-8')).sentiment.polarity",
"_____no_output_____"
],
[
"# create a new DataFrame column for sentiment (WARNING: SLOW!)\nwb['sentiment'] = wb.text_clean.apply(detect_sentiment)\nwb['ln_RT']=np.log(wb.retweet_count+1)\nwb['ln_fav']=np.log(wb.favorite_count+1)",
"_____no_output_____"
],
[
"sns.distplot(wb.sentiment)",
"_____no_output_____"
],
[
"wb.year.value_counts()",
"_____no_output_____"
],
[
"# Plot tip as a function of toal bill across days\ng = sns.lmplot(x=\"retweet_count\", y=\"sentiment\", data=wb, size=7)\n\n# Use more informative axis labels than are provided by default\ng.set_axis_labels(\"retweet counts\", \"sentiment\")",
"_____no_output_____"
],
[
"# Plot tip as a function of toal bill across days\ng = sns.lmplot(x=\"favorite_count\", y=\"sentiment\", hue=\"has_ht\", data=wb, size=7)\n\n# Use more informative axis labels than are provided by default\ng.set_axis_labels(\"favorite counts\", \"sentiment\")",
"_____no_output_____"
],
[
"# list reviews with most positive sentiment\nwb[wb.sentiment == 1].text.head()",
"_____no_output_____"
],
[
"# list reviews with most negative sentiment\nwb[wb.sentiment <0].text.head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
|||
ecbe7300893f429bcdf8d0aafc7b1a4768507c0f | 10,611 | ipynb | Jupyter Notebook | week1/3_Types.ipynb | ophiry/Notebooks | 02be39016c2fc0ff487c32c49b88741b4f5dc425 | [
"CC-BY-4.0"
] | null | null | null | week1/3_Types.ipynb | ophiry/Notebooks | 02be39016c2fc0ff487c32c49b88741b4f5dc425 | [
"CC-BY-4.0"
] | null | null | null | week1/3_Types.ipynb | ophiry/Notebooks | 02be39016c2fc0ff487c32c49b88741b4f5dc425 | [
"CC-BY-4.0"
] | null | null | null | 28.447721 | 273 | 0.525021 | [
[
[
"<img alt=\"לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.\" src=\"images/logo.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\"/>",
"_____no_output_____"
],
[
"# <span dir=\"rtl\" style=\"align: right; direction: rtl; float: right;\">סוגי ערכים (טיפוסים, או types)</span>",
"_____no_output_____"
],
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">כל ערך שאנחנו כותבים בפייתון הוא מ<strong>סוג</strong> מסוים. עד כה למדנו על שלושה סוגי ערכים שקיימים בפייתון (איזה זריזים אנחנו!):</p>",
"_____no_output_____"
],
[
"<ul dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">\n<li><abbr title=\"str\">מחרוזת</abbr></li>\n<li><abbr title=\"int\">מספר שלם</abbr></li>\n<li style=\"white-space: nowrap;\"><abbr style=\"display: inline\" title=\"float\">מספר עשרוני</abbr> (\"שבר\")</li>\n</ul>",
"_____no_output_____"
],
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">היעזרו בסוגי הערכים המוצגים למעלה ונסו לקבוע מה סוג הנתונים בכל אחת מהשורות הבאות:</p>",
"_____no_output_____"
],
[
"<ol style=\"font-size: 1.25em; list-style-type: i\">\n<li><code>\"Hello World\"</code></li>\n<li><code>3</code></li>\n<li><code>9.5</code></li>\n<li><code>\"3\"</code></li>\n<li><code>'3'</code></li>\n<li><code>'9.5'</code></li>\n<li><code>'a'</code></li>\n</ol>",
"_____no_output_____"
],
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">המינוח המקצועי ל\"<dfn>סוג</dfn>\" הוא \"<dfn>טיפוס</dfn>\", או באנגלית: <em>type</em>.</p>",
"_____no_output_____"
],
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">\nטיפוסי נתונים מזכירים מצבי צבירה: כפי שניתן למצוא בטבע מים בצורות שונות (כנוזל – לשתייה, וכמוצק – קוביות קרח),<br/>\nכך בפייתון ניתן להשתמש בערך מסוים בכמה צורות.<br/>\nנניח, בערך 9.5 ניתן להשתמש גם כמספר (<code>9.5</code>) וגם כמחרוזת (<code>'9.5'</code>). השימוש בכל אחד מהם הוא למטרה אחרת.\n</p>",
"_____no_output_____"
],
[
"<div class=\"align-center\" dir=\"rtl\" style=\"display: flex; text-align: right; direction: rtl;\">\n<div style=\"display: flex; width: 10%; float: right; \" width=\"10%\">\n<img alt=\"טיפ!\" height=\"50px\" src=\"images/tip.png\" style=\"height: 50px !important;\"/>\n</div>\n<div style=\"width: 90%\" width=\"90%\">\n<p dir=\"rtl\" style=\"text-align: right; direction: rtl;\">\n בניסוח רשמי או אקדמי משתמשים במינוח \"טיפוס\" או במינוח \"טיפוס נתונים\" כדי לסווג ערכים לקבוצות שונות.<br/>\n בתעשייה וביום־יום משתמשים במינוח \"סוג\". לדוגמה: <q>מוישה, מאיזה סוג המשתנה <var>age</var> שהגדרת פה?</q><br/>\n</p>\n</div>\n</div>",
"_____no_output_____"
],
[
"## <p dir=\"rtl\" style=\"align: right; direction: rtl; float: right;\">type</p>",
"_____no_output_____"
],
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">כיוון שסוגי ערכים הם עניין מרכזי כל כך בפייתון, קיימת דרך לבדוק מה הוא הסוג של ערך מסוים.<br/>\nלפני שנציג לכם איך לגלות את הסוג של כל ערך (אף על פי שחשוב שתדעו לעשות את זה בעצמכם), אנחנו רוצים להציג לפניכם איך פייתון מכנה כל סוג נתונים:</p>\n<div style=\"clear: both;\"></div>\n\n| שם בפייתון | שם באנגלית | שם בעברית |\n|:----------|:--------|------:|\n| str | **str**ing | מחרוזת |\n| int | **int**eger | מספר שלם |\n| float | float | מספר עשרוני |",
"_____no_output_____"
],
[
"<span dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כדי לראות את <em>הסוג</em> של ערך נתון, נשתמש ב־<code dir=\"ltr\" style=\"direction: ltr;\">type(VALUE)</code>, כאשר במקום <code>VALUE</code> יופיע הערך אותו נרצה לבדוק.\n</span>",
"_____no_output_____"
],
[
"<div class=\"align-center\" dir=\"rtl\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n<div style=\"display: flex; width: 10%; float: right; clear: both;\" width=\"10%\">\n<img alt=\"תרגול\" height=\"50px\" src=\"images/exercise.svg\" style=\"height: 50px !important;\"/>\n</div>\n<div style=\"width: 70%\" width=\"70%\">\n<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לפניכם דוגמאות אחדות של שימוש ב־<code>type</code>.<br/>\n קבעו מה תהיה התוצאה של כל אחת מהדוגמאות, ורק לאחר מכן הריצו ובדקו אם צדקתם.<br/>\n אם טעיתם – לא נורא, ממילא מדובר פה בהימורים מושכלים.\n </p>\n</div>\n</div>",
"_____no_output_____"
]
],
[
[
"type(1)",
"_____no_output_____"
],
[
"type(-1)",
"_____no_output_____"
],
[
"type(0)",
"_____no_output_____"
],
[
"type(1.9)",
"_____no_output_____"
],
[
"type(1.0)",
"_____no_output_____"
],
[
"type('a')",
"_____no_output_____"
],
[
"type('buya!')",
"_____no_output_____"
],
[
"type('9')",
"_____no_output_____"
]
],
[
[
"## <span dir=\"rtl\" style=\"align: right; direction: rtl; float: right;\">תרגול</span>",
"_____no_output_____"
],
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">\nבדקו מה הסוג של הערכים והביטויים הבאים:\n</p>",
"_____no_output_____"
],
[
"<ol style=\"font-size: 1.25em; list-style-type: i\">\n<li><code>'david'</code></li>\n<li><code>\"david\"</code></li>\n<li><code>-900.00</code></li>\n<li><code>\"-900.00\"</code></li>\n<li><code>3 ** 5</code></li>\n<li><code>5.0 ** 2</code></li>\n<li><code>5 / 2</code></li>\n<li><code>5 // 2</code></li>\n<li><code>'5.0 ** 2'</code></li>\n</ol>",
"_____no_output_____"
],
[
"## <p dir=\"rtl\" style=\"align: right; direction: rtl; float: right;\">שוני בין סוגי ערכים</p>",
"_____no_output_____"
],
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">\nסוג הערכים ישפיע על התנהגותם בפועל. הריצו את שלושת קטעי הקוד הבאים ונסו לעמוד על ההבדלים ביניהם:\n</p>",
"_____no_output_____"
]
],
[
[
"1 + 1",
"_____no_output_____"
],
[
"1.0 + 1.0",
"_____no_output_____"
],
[
"\"1\" + \"1\"",
"_____no_output_____"
]
],
[
[
"<div class=\"align-center\" dir=\"rtl\" style=\"display: flex; text-align: right; direction: rtl;\">\n<div style=\"display: flex; width: 10%; float: right; \" width=\"10%\">\n<img alt=\"אזהרה!\" height=\"50px\" src=\"images/warning.png\" style=\"height: 50px !important;\"/>\n</div>\n<div style=\"width: 90%\" width=\"90%\">\n<p dir=\"rtl\" style=\"text-align: right; direction: rtl;\">\n פעולות המערבות סוגי ערכים שונים לא תמיד עובדות.<br/>\n לדוגמה, כשננסה לחבר מספר שלם ומספר עשרוני, נקבל מספר עשרוני. לעומת זאת, כשננסה לחבר מספר שלם למחרוזת, פייתון תתריע לפנינו על שגיאה.<br/>\n נמשיל לקערת קוביות קרח: נוכל לספור כמה קוביות קרח יש בה גם אם נוסיף מספר קוביות, אבל יהיה קשה לנו לתאר את תוכן הקערה אם נשפוך אליה כוס מים.<br/>\n</p>\n</div>\n</div>",
"_____no_output_____"
],
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">\nדוגמה לערכים מסוגים שונים שפעולת החיבור ביניהם עובדת:\n</p>",
"_____no_output_____"
]
],
[
[
"1 + 1.5",
"_____no_output_____"
]
],
[
[
"<p dir=\"rtl\" style=\"text-align: right; direction: rtl; float: right;\">\nדוגמה לערכים מסוגים שונים שפעולת החיבור ביניהם גורמת לפייתון להתריע על שגיאה:\n</p>",
"_____no_output_____"
]
],
[
[
"\"1\" + 2",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbe756b80c5958319ec7a5a2b9718fc12690faa | 264,215 | ipynb | Jupyter Notebook | step2_0_create_windowed_dataset.ipynb | Mengzhe/FreddieMacMortgageProject | 73a2cf34d35f354c65e7509cbdd73c29dc92f6cb | [
"MIT"
] | null | null | null | step2_0_create_windowed_dataset.ipynb | Mengzhe/FreddieMacMortgageProject | 73a2cf34d35f354c65e7509cbdd73c29dc92f6cb | [
"MIT"
] | null | null | null | step2_0_create_windowed_dataset.ipynb | Mengzhe/FreddieMacMortgageProject | 73a2cf34d35f354c65e7509cbdd73c29dc92f6cb | [
"MIT"
] | null | null | null | 46.904846 | 21,150 | 0.466147 | [
[
[
"<a href=\"https://colab.research.google.com/github/Mengzhe/FreddieMacMortgageProject/blob/main/step2_create_windowed_dataset.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Import libraries",
"_____no_output_____"
]
],
[
[
"import os\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport seaborn as sns\n\n%matplotlib inline\n\nimport sklearn\nfrom sklearn.preprocessing import MinMaxScaler, LabelEncoder\nfrom sklearn.metrics import (confusion_matrix, \n roc_auc_score, \n average_precision_score)\nfrom sklearn.model_selection import cross_val_score\n\nfrom imblearn.under_sampling import RandomUnderSampler\n\nimport tensorflow as tf\nfrom tensorflow import keras\n\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Dropout, Activation\nfrom tensorflow.keras.constraints import max_norm\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.callbacks import EarlyStopping\nfrom tensorflow.keras.models import load_model\n\nimport lightgbm as lgb\nimport xgboost as xgb\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.linear_model import LogisticRegression\nimport joblib\nimport pickle\n\n# import shap\nimport warnings\nimport time\nwarnings.filterwarnings(\"ignore\")\n\nimport json\nfrom collections import defaultdict\n\nfrom datetime import datetime\nimport pytz\nfrom datetime import date\n\n# from bayes_opt import BayesianOptimization\n\n# trained_model_folder_path = \"/content/drive/MyDrive/Colab Notebooks/Kaggle/LendingClub/trained_models/\"",
"/usr/local/lib/python3.7/dist-packages/sklearn/externals/six.py:31: FutureWarning: The module is deprecated in version 0.21 and will be removed in version 0.23 since we've dropped support for Python 2.7. Please rely on the official version of six (https://pypi.org/project/six/).\n \"(https://pypi.org/project/six/).\", FutureWarning)\n/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.neighbors.base module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.neighbors. Anything that cannot be imported from sklearn.neighbors is now part of the private API.\n warnings.warn(message, FutureWarning)\n"
],
[
"def extract_windows(array, clearing_time_index, max_time, sub_window_size):\n examples = []\n # start = clearing_time_index + 1 - sub_window_size + 1\n start = clearing_time_index - sub_window_size + 1\n\n for i in range(max_time+1):\n if start+sub_window_size+i<sub_window_size:\n example = np.zeros((sub_window_size, array.shape[1])) ## zero padding\n example[-(start+sub_window_size+i):] = array[:start+sub_window_size+i]\n else:\n example = array[start+i:start+sub_window_size+i]\n\n examples.append(np.expand_dims(example, 0))\n \n return np.vstack(examples)",
"_____no_output_____"
],
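[
"# (added) tiny usage sketch for extract_windows above, on made-up data: 5 rows, window of 3.\n# The first sub_window_size-1 windows are zero-padded on the left so every window has the same shape.\n_demo = np.arange(10).reshape(5, 2)\n_demo_windows = extract_windows(_demo, clearing_time_index=0, max_time=len(_demo) - 2, sub_window_size=3)\nprint(_demo_windows.shape) # (4, 3, 2)",
"_____no_output_____"
],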
[
"def shift_yr_qtr(yr_qtr):\n year = int(yr_qtr[:2])\n q = int(yr_qtr[-1])\n q_sum = (year*4+q-1)-1\n year = q_sum//4\n q = q_sum%4+1\n return str(year)+'Q'+str(q)\n\ndef shift_yr_mth(yr_mth):\n year = int(yr_mth[:4])\n month = int(yr_mth[4:])\n mth_sum = (year*12+month-1)-1\n year = mth_sum//12\n month = mth_sum%12+1\n if month<10:\n return str(year)+'0'+str(month)\n else:\n return str(year)+str(month)\n\n",
"_____no_output_____"
]
],
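[
[
"# (added) quick check of the period-shift helpers defined above: each steps one period back,\n# which is how the one-quarter / one-month lag of the economic factors is applied later.\nprint(shift_yr_qtr('18Q1')) # '17Q4'\nprint(shift_yr_mth('201801')) # '201712'",
"_____no_output_____"
]
],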
[
[
"## Load data",
"_____no_output_____"
],
[
"##### Load data: orig, performance",
"_____no_output_____"
]
],
[
[
"dir_data = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/'\ndir_data_orig = os.path.join(dir_data, 'data_orig_2011_2019_pre_chg_20210803_2235.csv')\ndir_data_monthly = os.path.join(dir_data, 'data_monthly_2011_2019_pre_chg_20210803_2235.csv') \n\nprint('dir_data_orig', dir_data_orig)\nprint('dir_data_monthly', dir_data_monthly)",
"dir_data_orig /content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/data_orig_2011_2019_pre_chg_20210803_2235.csv\ndir_data_monthly /content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/data_monthly_2011_2019_pre_chg_20210803_2235.csv\n"
],
[
"df_orig = pd.read_csv(dir_data_orig, index_col=0)\ndf_monthly = pd.read_csv(dir_data_monthly, index_col=0)\n\nprint(\"df_orig.shape\", df_orig.shape)\nprint(\"df_monthly.shape\", df_monthly.shape)",
"df_orig.shape (20053, 19)\ndf_monthly.shape (1694602, 10)\n"
],
[
"df_orig['o_yr_qtr'] = df_orig['loan_id'].apply(lambda x: x[1:5])\n## shift year and quarter since economic factors can have a quarter lag\ndf_orig['o_yr_qtr'] = df_orig['o_yr_qtr'].apply(shift_yr_qtr) \n\ndf_orig['o_st_yr_qtr'] = df_orig['o_prop_st']+'_'+df_orig['o_yr_qtr']",
"_____no_output_____"
],
[
"df_orig.head()",
"_____no_output_____"
],
[
"df_orig['o_yr_qtr_shifted'] = df_orig['o_yr_qtr'].apply(shift_yr_qtr)\ndf_orig.sample(100).head()",
"_____no_output_____"
],
[
"us_states = set(sorted(df_orig['o_prop_st'].unique().tolist()))",
"_____no_output_____"
],
[
"## merge\ndf_combined = pd.merge(df_monthly, df_orig, on='loan_id')\ndf_combined.sort_values(by=['loan_id', 'loan_age'], inplace=True)\n\ndf_combined['rep_period'] = df_combined['rep_period'].astype(str)\n## shift rep_period \ndf_combined['shifted_rep_period'] = df_combined['rep_period'].apply(shift_yr_mth)\n\n## all date-related information are shifted by one month\n## so that the lags of economic factors are considered, i.e., one-month or one-quarter lag\ndf_combined['o_rep_period'] = df_combined.groupby('loan_id').transform('first')['shifted_rep_period']\ndf_combined['o_st_rep_period'] = df_combined['o_prop_st']+'_'+df_combined['o_rep_period'].astype(str)\n\ndf_combined['st_rep_period'] = df_combined['o_prop_st']+'_'+df_combined['shifted_rep_period'].astype(str)\ndf_combined['rep_yr_qtr'] = df_combined['rep_period'].astype(str).apply(lambda x: x[2:4]+'Q'+str(((int(x[-2:])-1)//3)+1))\ndf_combined['rep_yr_qtr'] = df_combined['rep_yr_qtr'].apply(shift_yr_qtr)\ndf_combined['st_rep_yr_qtr'] = df_combined['o_prop_st'] + '_' + df_combined['rep_yr_qtr']\n\ndf_combined.head()",
"_____no_output_____"
]
],
[
[
"##### Load data: house price index ",
"_____no_output_____"
]
],
[
[
"dir_hpi = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/house_index_history/HPI_PO_state.xls'\ndf_hpi = pd.read_excel(dir_hpi)\ndf_hpi['st_rep_yr_qtr'] = df_hpi['state']+'_'+df_hpi['yr'].apply(lambda x: str(x)[-2:])+'Q'+df_hpi['qtr'].apply(lambda x: str(x))\ndf_hpi.drop(columns=['Warning', 'index_nsa'], inplace=True)\ndf_hpi.rename(columns={'index_sa': 'hpi'}, inplace=True)\ndf_hpi.reset_index(inplace=True, drop=True)\ndf_hpi.sort_values(by=['state', 'yr', 'qtr'], inplace=True)\ndf_hpi.drop(columns=['state', 'yr', 'qtr'], inplace=True)\n\ndf_hpi",
"_____no_output_____"
]
],
[
[
"##### Load data: unemployment rate history",
"_____no_output_____"
]
],
[
[
"startyears = [2000, 2010, 2020]\nendyears = [2009, 2019, 2021]\n# dir_ue = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/monthly_state_ue_rate_2011_2020.csv'\ndf_ue_list = []\nfor startyear, endyear in zip(startyears, endyears):\n dir_ue = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/monthly_state_ue_rate_{startyear}_{endyear}.csv'.format(startyear=startyear, \n endyear=endyear)\n print(dir_ue)\n df_ue = pd.read_csv(dir_ue)\n df_ue_list.append(df_ue)\n\ndf_ue = pd.concat(df_ue_list, ignore_index=True) \ndf_ue.drop(columns=['series_id', 'year', 'month', 'st', 'rep_period'], inplace=True)\ndf_ue.head()\n",
"/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/monthly_state_ue_rate_2000_2009.csv\n/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/monthly_state_ue_rate_2010_2019.csv\n/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/monthly_state_ue_rate_2020_2021.csv\n"
]
],
[
[
"##### Load data: Charge-Off Rate on Single Family Residential Mortgages, Booked in Domestic Offices, Top 100 Banks Ranked by Assets\nhttps://fred.stlouisfed.org/series/CORSFRMT100N",
"_____no_output_____"
]
],
[
[
"dir_chgoff = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/FRED_mortgage_rate_data/Charge-Off Rate on Single Family Residential Mortgages, Booked in Domestic Offices, Top 100 Banks Ranked by Assets.xlsx'\ndf_chgoff = pd.read_excel(dir_chgoff)\ndf_chgoff['observation_date'] = pd.to_datetime(df_chgoff['observation_date'])\ndf_chgoff['rep_yr_qtr'] = df_chgoff['observation_date'].apply(lambda x: str(x.year)[-2:])+'Q'+df_chgoff['observation_date'].apply(lambda x: str(x.quarter))\ndf_chgoff.rename(columns={'CORSFRMT100S': 'charge_off_rate'}, inplace=True)\nplt.plot(df_chgoff['observation_date'], df_chgoff['charge_off_rate'], label='charge_off_rate')\nplt.legend()\n\ndf_chgoff.drop(columns=['observation_date'], inplace=True)\ndf_chgoff.head()\n",
"_____no_output_____"
]
],
[
[
"##### Delinquency Rate on Single-Family Residential Mortgages, Booked in Domestic Offices, All Commercial Banks\nhttps://fred.stlouisfed.org/series/DRSFRMACBS",
"_____no_output_____"
]
],
[
[
"dir_dlq = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/FRED_mortgage_rate_data/Delinquency Rate on Single-Family Residential Mortgages, Booked in Domestic Offices, All Commercial Banks.xlsx'\ndf_dlq = pd.read_excel(dir_dlq)\ndf_dlq['observation_date'] = pd.to_datetime(df_dlq['observation_date'])\ndf_dlq['rep_yr_qtr'] = df_dlq['observation_date'].apply(lambda x: str(x.year)[-2:])+'Q'+df_dlq['observation_date'].apply(lambda x: str(x.quarter))\ndf_dlq.rename(columns={'DRSFRMACBS': 'dlq_rate'}, inplace=True)\nplt.plot(df_dlq['observation_date'], df_dlq['dlq_rate'], label='dlq_rate')\nplt.legend()\n\ndf_dlq.drop(columns=['observation_date'], inplace=True)\ndf_dlq.head()\n",
"_____no_output_____"
]
],
[
[
"##### 30-Year Fixed Rate Mortgage Average in the United States\nhttps://fred.stlouisfed.org/series/MORTGAGE30US",
"_____no_output_____"
]
],
[
[
"dir_mort_rate = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/FRED_mortgage_rate_data/30-Year Fixed Rate Mortgage Average in the United States.xlsx'\ndf_mort_rate = pd.read_excel(dir_mort_rate)\ndf_mort_rate['observation_date'] = pd.to_datetime(df_mort_rate['observation_date'])\ndf_mort_rate.sort_values(by='observation_date', inplace=True)\ndf_mort_rate['rep_period'] = df_mort_rate['observation_date'].dt.year.astype(str)+df_mort_rate['observation_date'].apply(lambda x: '0'+str(x.month) if x.month<10 else str(x.month))\ndf_mort_rate = df_mort_rate.groupby('rep_period').apply(lambda x: x.tail(1))\ndf_mort_rate.reset_index(inplace=True, drop=True)\ndf_mort_rate.rename(columns={'MORTGAGE30US': 'avg_frm'}, inplace=True)\nplt.plot(df_mort_rate['observation_date'], df_mort_rate['avg_frm'], label='avg_frm')\nplt.legend()\n\ndf_mort_rate.drop(columns=['observation_date'], inplace=True)\ndf_mort_rate.head()",
"_____no_output_____"
]
],
[
[
"##### State Leve Mortgage Performance Statistics\nhttps://www.fhfa.gov/DataTools/Downloads/Pages/National-Mortgage-Database-Aggregate-Data.aspx",
"_____no_output_____"
]
],
[
[
"dir_st_co_dlq = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/economic_factors/FRED_mortgage_rate_data/State_Level_Mortgage_Performance_Statistics.xlsx'\ndf_list = []\nfor st in us_states:\n try:\n df_st_co_dlq = pd.read_excel(dir_st_co_dlq, \n sheet_name=st, \n skiprows=4,\n header=1,\n skipfooter=2)\n df_st_co_dlq['st_rep_yr_qtr'] = st+'_'+df_st_co_dlq['Quarter'].astype(str).apply(lambda x: x[-4:])\n df_list.append(df_st_co_dlq)\n # break\n except:\n pass\n\ndf_st_co_dlq = pd.concat(df_list, ignore_index=True) \ndf_st_co_dlq.drop(columns=['Quarter', 'Percent in Forbearance'], inplace=True)\ndf_st_co_dlq.rename(columns={'Percent 30 or 60 Days Past Due Date': 'pct_3060dpd', \n 'Percent 90 to 180 Days Past Due Date': 'pct_90180dpd', \n 'Percent in the Process of Foreclosure, Bankruptcy, or Deed in Lieu': 'pct_fcl'}, \n inplace=True)\ndf_st_co_dlq.dropna(axis=0, inplace=True)\ndf_st_co_dlq.head()\n",
"_____no_output_____"
]
],
[
[
"## Merge data",
"_____no_output_____"
],
[
"#### House index history data",
"_____no_output_____"
]
],
[
[
"## hpi (quarterly updated) \ndf_combined_hpi = pd.merge(df_combined, df_hpi, left_on='st_rep_yr_qtr', right_on='st_rep_yr_qtr')\ndf_combined_hpi.sort_values(by=['loan_id', 'loan_age'], inplace=True)\n\n## hpi (origination date) \ndf_combined_hpi['o_hpi'] = df_combined_hpi.groupby('loan_id').transform('first')['hpi']\n\ndf_combined_hpi.head()",
"_____no_output_____"
],
[
"del df_combined",
"_____no_output_____"
]
],
[
[
"#### Unemployment rate",
"_____no_output_____"
]
],
[
[
"## monthly unemployment rate\ndf_combined_ue = pd.merge(df_combined_hpi, \n df_ue, \n left_on='st_rep_period', \n right_on='st_rep_period')\ndf_combined_ue.sort_values(by=['loan_id', 'loan_age'], inplace=True)\n\n## unemployment rate on origination date\ndf_combined_ue['o_ue_rate'] = df_combined_ue.groupby('loan_id').transform('first')['ue_rate']",
"_____no_output_____"
],
[
"df_combined_ue[['ue_rate', 'o_ue_rate']].describe()",
"_____no_output_____"
],
[
"del df_combined_hpi",
"_____no_output_____"
]
],
[
[
"#### Charge-Off Rate on Single Family Residential Mortgages, Booked in Domestic Offices, Top 100 Banks Ranked by Assets",
"_____no_output_____"
]
],
[
[
"df_combined_co = pd.merge(df_combined_ue, \n df_chgoff, \n left_on='rep_yr_qtr', \n right_on='rep_yr_qtr')\ndf_combined_co.sort_values(by=['loan_id', 'loan_age'], inplace=True)\ndf_combined_co.head()",
"_____no_output_____"
],
[
"del df_combined_ue",
"_____no_output_____"
]
],
[
[
"#### Delinquency Rate on Single-Family Residential Mortgages, Booked in Domestic Offices, All Commercial Banks",
"_____no_output_____"
]
],
[
[
"df_combined_dlq = pd.merge(df_combined_co, \n df_dlq, \n left_on='rep_yr_qtr', \n right_on='rep_yr_qtr')\ndf_combined_dlq.sort_values(by=['loan_id', 'loan_age'], inplace=True)\ndf_combined_dlq.head()",
"_____no_output_____"
],
[
"del df_combined_co",
"_____no_output_____"
]
],
[
[
"#### 30-Year Fixed Rate Mortgage Average in the United States",
"_____no_output_____"
]
],
[
[
"df_combined_frm = pd.merge(df_combined_dlq, \n df_mort_rate, \n left_on='rep_period', \n right_on='rep_period')\ndf_combined_frm.sort_values(by=['loan_id', 'loan_age'], inplace=True)\ndf_combined_frm['o_avg_frm'] = df_combined_frm.groupby('loan_id').transform('first')['avg_frm']\n",
"_____no_output_____"
],
[
"df_combined_frm.head()",
"_____no_output_____"
],
[
"del df_combined_dlq",
"_____no_output_____"
]
],
[
[
"#### State Level Mortgage Performance Statistics",
"_____no_output_____"
]
],
[
[
"df_combined_co_dlq = pd.merge(df_combined_frm, \n df_st_co_dlq, \n left_on='st_rep_yr_qtr', \n right_on='st_rep_yr_qtr')\ndf_combined_co_dlq.sort_values(by=['loan_id', 'loan_age'], inplace=True)\ndf_combined_co_dlq['o_pct_fcl'] = df_combined_co_dlq.groupby('loan_id').transform('first')['pct_fcl']\n",
"_____no_output_____"
],
[
"df_combined_co_dlq.head()",
"_____no_output_____"
],
[
"del df_combined_frm",
"_____no_output_____"
]
],
[
[
"## Preprocess",
"_____no_output_____"
],
[
"#### Copy data",
"_____no_output_____"
]
],
[
[
"df = df_combined_co_dlq.drop(columns=['o_yr_qtr', 'o_st_yr_qtr', 'o_yr_qtr_shifted',\n 'shifted_rep_period', 'o_rep_period',\n 'o_st_rep_period', 'st_rep_period', \n 'rep_yr_qtr', 'st_rep_yr_qtr',])\ndf.sort_values(by=['loan_id', 'loan_age'], inplace=True)\ndf.head()",
"_____no_output_____"
],
[
"del df_combined_co_dlq",
"_____no_output_____"
]
],
[
[
"#### Create monthly features",
"_____no_output_____"
],
[
"since est_ltv from the dataset have many missing values, the following estimation is used: \n$$est\\_ltv = \\dfrac{upb}{o\\_upb}*\\dfrac{o\\_hpi}{hpi}*o\\_ltv$$",
"_____no_output_____"
]
],
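[
[
"# (added) worked example of the est_ltv formula above, with made-up numbers:\n# balance paid down to 90% of the original UPB, HPI up 10% since origination, original LTV of 80\nprint(0.9 * (1.0 / 1.1) * 80) # ~65.45: estimated LTV falls as the balance shrinks and prices rise",
"_____no_output_____"
]
],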
[
[
"try:\n df['est_ltv'] = (df['actual_upb']/df['o_upb'])*(df['o_hpi']/df['hpi'])*df['o_ltv']\n df['upb_pct'] = df['actual_upb'] / df['o_upb']\n df['age_pct'] = df['loan_age'] / df['o_term']\n df.drop(columns=['o_term', 'o_pt_val_md'], inplace=True, errors='ignored') ## all mortgage are 30-year\nexcept:\n pass\n\ndf.shape",
"_____no_output_____"
],
[
"df[['upb_pct', 'est_ltv', 'age_pct']].corr()",
"_____no_output_____"
]
],
[
[
"#### Encoding categorical variables",
"_____no_output_____"
]
],
[
[
"df.columns",
"_____no_output_____"
],
[
"df = pd.get_dummies(df, columns=['o_chan', 'o_purp', \n 'o_first_flag', \n 'o_num_brwrs', 'o_units', \n 'o_occ_stat', 'o_prop_st',\n 'o_prop_type'])\ndf.shape",
"_____no_output_____"
],
[
"# df_file_name = \"/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/df_combined_2011_2019\"\n# df.to_csv(df_file_name+'.csv')\n# print(df_file_name+'.csv')",
"_____no_output_____"
]
],
[
[
"#### Origination and monthly features",
"_____no_output_____"
]
],
[
[
"monthly_features = ['age_pct', 'loan_status', \n 'upb_pct', 'est_ltv', 'delq_by_disaster',\n 'hpi', 'ue_rate', 'charge_off_rate', 'dlq_rate',\n 'avg_frm', 'pct_3060dpd', 'pct_90180dpd', 'pct_fcl', \n ]\n\norig_features = [col for col in df.columns if col.startswith('o_')]\n\ntarget_col = ['zero_bal_code']",
"_____no_output_____"
],
[
"# ## removed columns\n# for col in df.columns: \n# if not col in monthly_features and not col in orig_features and not col in target_col:\n# print(col)",
"_____no_output_____"
]
],
[
[
"#### Scale data",
"_____no_output_____"
]
],
[
[
"df_scaled = df.copy()\ndf_scaled.sort_values(by=['loan_id', 'loan_age'], inplace=True)\ndf_scaled.drop(columns=[\n # 'rep_period', \n 'actual_upb', 'loan_age', \n 'remaining_months', 'int_rate', 'o_init_pay_d'], \n inplace=True)\n\nmonthly_scaler = MinMaxScaler()\norig_scaler = MinMaxScaler()\n\ndf_scaled.loc[:, monthly_features] = monthly_scaler.fit_transform(df.loc[:, monthly_features])\ndf_scaled.loc[:, orig_features] = orig_scaler.fit_transform(df.loc[:, orig_features])",
"_____no_output_____"
],
[
"df_scaled.head()",
"_____no_output_____"
],
[
"del df",
"_____no_output_____"
]
],
[
[
"## Create windowed dataset: monthly and origination features",
"_____no_output_____"
]
],
[
[
"loan_idxes = df_scaled['loan_id'].unique().tolist()\n# loan_idxes = np.random.choice(df_scaled['loan_id'].unique().tolist(), 10, replace=False)\n# loan_idxes\nassert len(loan_idxes)==len(set(loan_idxes))\nprint(\"total loans:\", len(loan_idxes))",
"total loans: 20049\n"
],
[
"%%time\nclearing_time_index = 0\nsub_window_size = 24\n\nlist_start_windowed_monthly_arrs = defaultdict(list)\nlist_start_windowed_orig_arrs = defaultdict(list)\nlist_start_windowed_target_arrs = defaultdict(list)\nlist_start_windowed_loan_id_arrs = defaultdict(list)\n\nlist_end_windowed_monthly_arrs = defaultdict(list)\nlist_end_windowed_orig_arrs = defaultdict(list)\nlist_end_windowed_target_arrs = defaultdict(list)\nlist_end_windowed_loan_id_arrs = defaultdict(list)\n\nloan_idxes_short_seq = set()\n\nfor i, loan_id in enumerate(loan_idxes):\n # print(\"loan_id\", loan_id)\n if i%2000==0:\n print('{} loans have been processed'.format(i))\n\n loan_year = loan_id[1:3]\n # print('loan_year', loan_year, loan_id)\n\n df_m = df_scaled.loc[df_scaled['loan_id']==loan_id, monthly_features]\n df_o = df_scaled.loc[df_scaled['loan_id']==loan_id, orig_features]\n ## loans that ends in two months\n if len(df_m)<2:\n loan_idxes_short_seq.add(loan_id)\n continue\n\n zero_bal_code = df_scaled.loc[df_scaled['loan_id']==loan_id, target_col].values[-1][0]\n # print(\"zero_bal_code\", zero_bal_code)\n loan_end_rep_period = df_scaled.loc[df_scaled['loan_id']==loan_id, ['rep_period']].values[-1][0]\n loan_end_year = loan_end_rep_period[2:4]\n # print(\"loan_end_rep_period\", loan_end_year, loan_end_rep_period)\n if zero_bal_code==1.0: ## paid off\n target = 0\n elif zero_bal_code in set([2.0, 3.0, 9.0]): ## third-party sale, charged off, REO Disposition\n target = 1\n else: \n raise ValueError(\"Wrong zero balance code!\")\n \n array = df_m.values\n orig_array = df_o.values\n # assert np.isclose(orig_array, orig_array[0]).all()\n windowed_monthly_arr = extract_windows(array, \n clearing_time_index=clearing_time_index, \n max_time=len(array)-2, \n sub_window_size=sub_window_size)\n windowed_orig_arr = orig_array[:windowed_monthly_arr.shape[0]]\n target_arr = np.ones(windowed_monthly_arr.shape[0]) * target\n loan_id_arr = np.repeat(np.array([loan_id]), windowed_monthly_arr.shape[0])\n\n # print('windowed_monthly_arr.shape', windowed_monthly_arr.shape)\n # print('windowed_orig_arr.shape', windowed_orig_arr.shape)\n # print('target_arr.shape', target_arr.shape)\n # print('loan_id_arr.shape', loan_id_arr.shape)\n\n ## check batch size\n assert windowed_monthly_arr.shape[0]==windowed_orig_arr.shape[0]\n assert windowed_monthly_arr.shape[0]==target_arr.shape[0]\n assert windowed_monthly_arr.shape[0]==loan_id_arr.shape[0]\n ## check window size \n assert windowed_monthly_arr.shape[1]==sub_window_size \n\n list_start_windowed_monthly_arrs[loan_year].append(windowed_monthly_arr)\n list_start_windowed_orig_arrs[loan_year].append(windowed_orig_arr)\n list_start_windowed_target_arrs[loan_year].append(target_arr)\n list_start_windowed_loan_id_arrs[loan_year].append(loan_id_arr)\n\n list_end_windowed_monthly_arrs[loan_end_year].append(windowed_monthly_arr)\n list_end_windowed_orig_arrs[loan_end_year].append(windowed_orig_arr)\n list_end_windowed_target_arrs[loan_end_year].append(target_arr)\n list_end_windowed_loan_id_arrs[loan_end_year].append(loan_id_arr)\n",
"0 loans have been processed\n2000 loans have been processed\n4000 loans have been processed\n6000 loans have been processed\n8000 loans have been processed\n10000 loans have been processed\n12000 loans have been processed\n14000 loans have been processed\n16000 loans have been processed\n18000 loans have been processed\n20000 loans have been processed\nCPU times: user 1h 13min 10s, sys: 18.5 s, total: 1h 13min 29s\nWall time: 1h 13min 16s\n"
]
],
[
[
"## Save all yearly loan data",
"_____no_output_____"
]
],
[
[
"now = datetime.now()\n# print(now)\n\nutc = pytz.utc\nutc_dt = datetime(now.year, now.month, now.day, now.hour, now.minute, now.second, tzinfo=utc)\neastern = pytz.timezone('US/Eastern')\nloc_dt = utc_dt.astimezone(eastern)\nfmt = '%Y%m%d_%H%M'\ncur_datetime = loc_dt.strftime(fmt)\ncur_datetime",
"_____no_output_____"
],
[
"save_path = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_start'\nlist_start_windowed_monthly_arrs_file_name = os.path.join(save_path, 'list_start_windowed_monthly_arrs.pkl')\nlist_start_windowed_orig_arrs_file_name = os.path.join(save_path, 'list_start_windowed_orig_arrs.pkl')\nlist_start_windowed_target_arrs_file_name = os.path.join(save_path, 'list_start_windowed_target_arrs.pkl')\nlist_start_windowed_loan_id_arrs_file_name = os.path.join(save_path, 'list_start_windowed_loan_id_arrs.pkl')\n\nprint(list_start_windowed_monthly_arrs_file_name)\npickle.dump(list_start_windowed_monthly_arrs, open(list_start_windowed_monthly_arrs_file_name,'wb'))\n\nprint(list_start_windowed_orig_arrs_file_name)\npickle.dump(list_start_windowed_orig_arrs, open(list_start_windowed_orig_arrs_file_name,'wb'))\n\nprint(list_start_windowed_target_arrs_file_name)\npickle.dump(list_start_windowed_target_arrs, open(list_start_windowed_target_arrs_file_name,'wb'))\n\nprint(list_start_windowed_loan_id_arrs_file_name)\npickle.dump(list_start_windowed_loan_id_arrs, open(list_start_windowed_loan_id_arrs_file_name,'wb'))\n",
"/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_start/list_start_windowed_monthly_arrs.pkl\n/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_start/list_start_windowed_orig_arrs.pkl\n/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_start/list_start_windowed_target_arrs.pkl\n/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_start/list_start_windowed_loan_id_arrs.pkl\n"
],
[
"save_path = '/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_end'\nlist_end_windowed_monthly_arrs_file_name = os.path.join(save_path, 'list_end_windowed_monthly_arrs.pkl')\nlist_end_windowed_orig_arrs_file_name = os.path.join(save_path, 'list_end_windowed_orig_arrs.pkl')\nlist_end_windowed_target_arrs_file_name = os.path.join(save_path, 'list_end_windowed_target_arrs.pkl')\nlist_end_windowed_loan_id_arrs_file_name = os.path.join(save_path, 'list_end_windowed_loan_id_arrs.pkl')\n\nprint(list_end_windowed_monthly_arrs_file_name)\npickle.dump(list_end_windowed_monthly_arrs, open(list_end_windowed_monthly_arrs_file_name,'wb'))\n\nprint(list_end_windowed_orig_arrs_file_name)\npickle.dump(list_end_windowed_orig_arrs, open(list_end_windowed_orig_arrs_file_name,'wb'))\n\nprint(list_end_windowed_target_arrs_file_name)\npickle.dump(list_end_windowed_target_arrs, open(list_end_windowed_target_arrs_file_name,'wb'))\n\nprint(list_end_windowed_loan_id_arrs_file_name)\npickle.dump(list_end_windowed_loan_id_arrs, open(list_end_windowed_loan_id_arrs_file_name,'wb'))\n",
"/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_end/list_end_windowed_monthly_arrs.pkl\n/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_end/list_end_windowed_orig_arrs.pkl\n/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_end/list_end_windowed_target_arrs.pkl\n/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_end/list_end_windowed_loan_id_arrs.pkl\n"
]
],
[
[
"## Train/test split based on loan termination date",
"_____no_output_____"
],
[
"To enusre no overlap between train and test data, the data are separated by the loan termination date. \n\n\n* Train: mortgages that ends before 2018, i.e., before 12/31/2017\n* Test: mortgages that orginated after 2018, i.e., after 01/01/2018\n",
"_____no_output_____"
]
],
[
[
"train_end_year = 17\ntest_start_year = 18",
"_____no_output_____"
]
],
[
[
"#### Train data",
"_____no_output_____"
]
],
[
[
"dir_ = \"/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_end\"\nfiles = [\n \"list_end_windowed_loan_id_arrs.pkl\",\n \"list_end_windowed_monthly_arrs.pkl\",\n \"list_end_windowed_orig_arrs.pkl\",\n \"list_end_windowed_target_arrs.pkl\" \n ]\nobject_list = []\nfor file_name in files:\n with open(os.path.join(dir_, file_name),'rb') as file:\n object_list.append(pickle.load(file))\n\n(list_end_windowed_loan_id_arrs, \n list_end_windowed_monthly_arrs, \n list_end_windowed_orig_arrs, \n list_end_windowed_target_arrs) = object_list\n",
"_____no_output_____"
],
[
"list_end_windowed_loan_id_arrs.keys()",
"_____no_output_____"
],
[
"n_train_loans = 0\ntrain_orig_list = []\ntrain_monthly_list = []\ntrain_target_list = []\ntrain_loan_id_list = []\nfor year in list_end_windowed_loan_id_arrs:\n if int(year)<=train_end_year:\n n_train_loans += len(list_end_windowed_loan_id_arrs[year])\n train_monthly_list.extend(list_end_windowed_monthly_arrs[year])\n train_orig_list.extend(list_end_windowed_orig_arrs[year])\n train_target_list.extend(list_end_windowed_target_arrs[year])\n train_loan_id_list.extend(list_end_windowed_loan_id_arrs[year])\nprint(\"n_train_loans\", n_train_loans)\nassert n_train_loans==len(train_monthly_list)\nassert n_train_loans==len(train_orig_list)\nassert n_train_loans==len(train_target_list)\nassert n_train_loans==len(train_loan_id_list)\n\n",
"n_train_loans 10797\n"
],
[
"train_monthly_arr = np.concatenate(train_monthly_list)\ntrain_orig_arr = np.concatenate(train_orig_list)\ntrain_target_arr = np.concatenate(train_target_list)\ntrain_loan_id_arr = np.concatenate(train_loan_id_list)\n\nprint(\"train_monthly_arr.shape\", train_monthly_arr.shape, \"(n_samples, time_steps, feature_size)\")\nprint(\"train_orig_arr.shape\", train_orig_arr.shape, \"(n_samples, feature_size)\")\nprint(\"train_target_arr.shape\", train_target_arr.shape)\nprint(\"train_loan_id_arr.shape\", train_loan_id_arr.shape)",
"train_monthly_arr.shape (324443, 24, 13) (n_samples, time_steps, feature_size)\ntrain_orig_arr.shape (324443, 86) (n_samples, feature_size)\ntrain_target_arr.shape (324443,)\ntrain_loan_id_arr.shape (324443,)\n"
]
],
[
[
"#### Test data",
"_____no_output_____"
]
],
[
[
"dir_ = \"/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/dataset_by_start\"\nfiles = [\n \"list_start_windowed_loan_id_arrs.pkl\",\n \"list_start_windowed_monthly_arrs.pkl\",\n \"list_start_windowed_orig_arrs.pkl\",\n \"list_start_windowed_target_arrs.pkl\" \n ]\nobject_list = []\nfor file_name in files:\n with open(os.path.join(dir_, file_name),'rb') as file:\n object_list.append(pickle.load(file))\n\n(list_start_windowed_loan_id_arrs, \n list_start_windowed_monthly_arrs, \n list_start_windowed_orig_arrs, \n list_start_windowed_target_arrs) = object_list\n",
"_____no_output_____"
],
[
"list_start_windowed_loan_id_arrs.keys()",
"_____no_output_____"
],
[
"n_test_loans = 0\ntest_orig_list = []\ntest_monthly_list = []\ntest_target_list = []\ntest_loan_id_list = []\nfor year in list_start_windowed_loan_id_arrs:\n if int(year)>=test_start_year:\n n_test_loans += len(list_start_windowed_loan_id_arrs[year])\n test_monthly_list.extend(list_start_windowed_monthly_arrs[year])\n test_orig_list.extend(list_start_windowed_orig_arrs[year])\n test_target_list.extend(list_start_windowed_target_arrs[year])\n test_loan_id_list.extend(list_start_windowed_loan_id_arrs[year])\nprint(\"n_test_loans\", n_test_loans)\nassert n_test_loans==len(test_monthly_list)\nassert n_test_loans==len(test_orig_list)\nassert n_test_loans==len(test_target_list)\nassert n_test_loans==len(test_loan_id_list)\n\n",
"n_test_loans 526\n"
],
[
"test_monthly_arr = np.concatenate(test_monthly_list)\ntest_orig_arr = np.concatenate(test_orig_list)\ntest_target_arr = np.concatenate(test_target_list)\ntest_loan_id_arr = np.concatenate(test_loan_id_list)\n\nprint(\"test_monthly_arr.shape\", test_monthly_arr.shape, \"(n_samples, time_steps, feature_size)\")\nprint(\"test_orig_arr.shape\", test_orig_arr.shape, \"(n_samples, feature_size)\")\nprint(\"test_target_arr.shape\", test_target_arr.shape)\nprint(\"test_loan_id_arr.shape\", test_loan_id_arr.shape)",
"test_monthly_arr.shape (10503, 24, 13) (n_samples, time_steps, feature_size)\ntest_orig_arr.shape (10503, 86) (n_samples, feature_size)\ntest_target_arr.shape (10503,)\ntest_loan_id_arr.shape (10503,)\n"
]
],
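[
[
"# (added) sanity check for the split described above: no loan id should appear in both sets\nprint('overlapping loan ids:', len(set(train_loan_id_arr) & set(test_loan_id_arr)))",
"_____no_output_____"
]
],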
[
[
"## Save train/test data",
"_____no_output_____"
]
],
[
[
"save_dir = \"/content/drive/MyDrive/Colab Notebooks/FreddieMac/FreddieMac/dataset_full/v3/train_test_data\"",
"_____no_output_____"
],
[
"train_data_names = [[train_monthly_arr, 'train_monthly_arr'], \n [train_orig_arr, 'train_orig_arr'], \n [train_target_arr, 'train_target_arr'], \n [train_loan_id_arr, 'train_loan_id_arr']]\n\ntest_data_names = [[test_monthly_arr, 'test_monthly_arr'], \n [test_orig_arr, 'test_orig_arr'], \n [test_target_arr, 'test_target_arr'], \n [test_loan_id_arr, 'test_loan_id_arr']]\n",
"_____no_output_____"
],
[
"for arr, file_name in train_data_names+test_data_names:\n dir_path = os.path.join(save_dir, file_name+'.npy')\n np.save(dir_path, arr)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecbe9b1305183225f3bae3d154074f858a1110bb | 18,438 | ipynb | Jupyter Notebook | sandbox/Untitled-Copy1.ipynb | mori-c/cs106a | b7764d9dec1d3abb6968ded265a0c4c4d96920c5 | [
"MIT"
] | 1 | 2020-04-13T18:38:26.000Z | 2020-04-13T18:38:26.000Z | sandbox/Untitled-Copy1.ipynb | mori-c/cs106a | b7764d9dec1d3abb6968ded265a0c4c4d96920c5 | [
"MIT"
] | null | null | null | sandbox/Untitled-Copy1.ipynb | mori-c/cs106a | b7764d9dec1d3abb6968ded265a0c4c4d96920c5 | [
"MIT"
] | null | null | null | 30.476033 | 1,157 | 0.494359 | [
[
[
"\n# def keep_track():\n# global player\nplayer = {'1':[], '2':[] }\n \n \n# keep_track()\n# player()",
"_____no_output_____"
],
[
"player = {'1':[], '2':[] }\nplayer[0] = []\nplayer[1] = []\n\nprint(player[0], player[1])\n\n",
"[] []\n"
],
[
"# track = input('enter something: ')\n# player[0].append(track)\n# print(track)\n\nplayer[0].append(input('enter something: '))\nprint(player[0])\n",
"enter something: 23\n['23']\n"
],
[
"# player[0].count()\nprint(player[0])",
"['23']\n"
],
[
"print(player[0])\nprint(player[1])",
"['asdad', '23', '23', '23', '23']\n"
],
[
"# player = []\n# score = []\ntrack = int(input('Enter: '))\n\n# print(player)\n# print(score)\nprint(track)",
"_____no_output_____"
],
[
"max = 20\nmin = 0\nplayer = ['1', '2']\nwhile player in range(max, min):\n print(player)",
"_____no_output_____"
],
[
"score_statement = 0\nplayer = {'1':[]}\n\ndef score_statement():\n global score_statement\n max = 20\n min = 0\n for i in range(max, min):\n print(i)\n print('There are ' + str(score_statement) + ' stones left \\n')\n \nscore_statement()\n",
"There are <function score_statement at 0x1060a6c20> stones left \n\n"
],
[
"\ninput_answer = int(input('Would you like to remove 1 or 2 stones? ' )\nprint(input_answer)\n# player.append(input_answer)\n# print(player[0])\n\nfor i in range(2):\n print(i)\n\n\ninput_answer()",
"_____no_output_____"
],
[
"player = {'1':['one'], '2':['two'], 1:[1], 2:[2]}\nprint(player.get(1))\nprint(player[1])\nprint(player['1']) \n\n# p1a = sum(player['1'])\np1b = sum(player[1])\n# p2a = sum(player['2'])\np2b = sum(player[2])\nprint(p1a, p2a)\nprint(p1b, p2b)\n\nplayer['1'].append(2)\nplayer['2'].append(3)\nplayer[1].append(2)\nplayer[2].append(3)\n\n# print(player['1'], player['2'])\nprint(player)\n\nplayer['1'].append(21)\nplayer['2'].append(300)\nplayer[1].append(21)\nplayer[2].append(300)\n\n# print(player['1'], player['2'])\nprint(player)\nprint()\n# print('player 1 total: ', sum(player['1']), '=', player['1'])\n# print('player 2 total: ', sum(player['2']), '=', player['2'])\nprint('player 1 total: ', sum(player[1]), '=', player[1])\nprint('player 2 total: ', sum(player[2]), '=', player[2])\n\n\nprint()\nprint()\n\nfor i in range(1, 2):\n if player['1']:\n print('player[1]', player['1'])\n if player['2']:\n print('player[2]', player['2'])\n \n \nprint()\nprint()\n\nfor i in player:\n print(i)\n\nprint()\nprint()",
"[1]\n[1]\n['one']\n0 0\n1 2\n{'1': ['one', 2], '2': ['two', 3], 1: [1, 2], 2: [2, 3]}\n{'1': ['one', 2, 21], '2': ['two', 3, 300], 1: [1, 2, 21], 2: [2, 3, 300]}\n\nplayer 1 total: 24 = [1, 2, 21]\nplayer 2 total: 305 = [2, 3, 300]\n\n\nplayer[1] ['one', 2, 21]\nplayer[2] ['two', 3, 300]\n\n\n1\n2\n1\n2\n\n\n"
],
[
"if input_answer < 20:\n print('play again')\n# print(player[0].append(input_answer))\nelse:\n return(input == 20)",
"_____no_output_____"
],
[
"# count = []\nif i in range(3):\n print(i)",
"1\n"
],
[
"player = ['Player 1', 'Player 2']\nstones = [[10], [20]]\n\nkeys = dict.fromkeys(player, stones)\nprint(keys)\n\nfor i in player:\n print(i)\n \nplayer_1 = ['Player 1']\nstones_1 = [5, 2]\n\n\n\nkeys = dict.fromkeys(player_1, stones_1)\nprint(keys)\n\n\nfor i in range(2):\n print(player[i])\n \nfor i in range(stones_1):\n input = int(input('num: '))\n print(input)\n stones_1.append(input)\n print(stones_1)\n ",
"{'Player 1': [[10], [20]], 'Player 2': [[10], [20]]}\nPlayer 1\nPlayer 2\n{'Player 1': [5, 2]}\nPlayer 1\nPlayer 2\n"
],
[
"# if player[i] == player[0]:\ndef player_id():\n global player_id\n for i in range(2):\n # print(player[i])\n if player[i] == player[0]:\n print(player[0])\n else:\n print(player[1])\n \ninput_answer = str(input(player_id(), 'Would you like to remove 1 or 2 stones? ' ))\nprint(input_answer)",
"Player 1\nPlayer 2\n"
],
[
"a = str(player_id) + 'string'\nprint(a)",
"<function player_id at 0x107781170>string\n"
],
[
"def score_statement():\n global score_statement\n global max\n max = 20\n min = 0\n\n for score_statement in range(max, min):\n print(score_statement)\n print('There are' + str(score_statement) + 'stones left \\n')\n\ndef input_answer():\n global input_answer\n input_answer = player_id() + int(input('Would you like to remove 1 or 2 stones? ' ))\n print(input_answer)\n\n# validate\ndef keep_track():\n while player[0]:\n if input_answer < max:\n player[0].append(input_answer)\n print('player[0]: ', player[0])\n remainder = max - input_answer\n print('remainder :', remainder)\n else:\n while input_is_valid:\n if (input_answer == max):\n input = int(input('Please enter 1 or 2: '))\n print('Player ' + input + 'wins! \\n Game Over')\n else:\n main()\n\ndef player_id():\n for i in range(2):\n if player[i] == player[0]:\n print(player[0])\n else:\n print(player[i])",
"_____no_output_____"
],
[
"player = ['Player 1', 'Player 2']\nplayer[0] = []\nplayer[1] = []\n\n\ndef main():\n \"\"\"\n You should write your code for this program in this function.\n Make sure to delete the 'pass' line before starting to write\n your own code. You should also delete this comment and replace\n it with a better, more descriptive one.\n \"\"\"\n score_statement()\n input_answer()\n keep_track()\n # player_id()\n # validate()\n\n'''\nHelper Functions\n'''\n\ndef score_statement():\n global score_statement\n global max\n max = 20\n min = 0\n\n for score_statement in range(max, min):\n print(score_statement)\n print('There are' + str(score_statement) + 'stones left \\n')\n\ndef input_answer():\n global input_answer\n for i in range(2):\n if player[i] == player[0]:\n print(player[0])\n input_answer = player[0] + int(input('Would you like to remove 1 or 2 stones? '))\n print(input_answer)\n else:\n print(player[i])\n input_answer = player[i] + int(input('Would you like to remove 1 or 2 stones? ' ))\n print(input_answer)\n\n# validate\ndef keep_track():\n if input_answer < max:\n player[0].append(input_answer)\n print('player[0]: ', player[0])\n remainder = max - input_answer\n print('remainder :', remainder)\n else:\n while input_is_valid:\n if (input_answer == max):\n input = int(input('Please enter 1 or 2: '))\n print('Player ' + input + 'wins! \\n Game Over')\n else:\n main()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbeab23426fdca91046f67e5b17dafa6223b7d2 | 399,585 | ipynb | Jupyter Notebook | Section-04-Undersampling/04-03-Tomek-Links.ipynb | bkiselgof/machine-learning-imbalanced-data | a5a4b8613411e42c041c103b72394b53c9fa0d62 | [
"BSD-3-Clause"
] | null | null | null | Section-04-Undersampling/04-03-Tomek-Links.ipynb | bkiselgof/machine-learning-imbalanced-data | a5a4b8613411e42c041c103b72394b53c9fa0d62 | [
"BSD-3-Clause"
] | null | null | null | Section-04-Undersampling/04-03-Tomek-Links.ipynb | bkiselgof/machine-learning-imbalanced-data | a5a4b8613411e42c041c103b72394b53c9fa0d62 | [
"BSD-3-Clause"
] | null | null | null | 397.993028 | 68,460 | 0.93518 | [
[
[
"# Tomek Links\n\n\nTomek links are 2 samples from a different class, which are nearest neighbours to each other. In other words, if 2 observations are nearest neighbours, and from a different class, they are Tomek Links.\n\nThis procedures removes either the sample from the majority class if it is a Tomek Link, or alternatively, both observations, the one from the majority and the one from the minority class.\n\n====\n\n- **Criteria for data exclusion**: Samples are Tomek Links\n- **Final Dataset size**: varies",
"_____no_output_____"
]
],
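[
[
"# (added) illustrative sketch of the definition above, on a tiny made-up 1-D dataset:\n# two points form a Tomek link if they are each other's nearest neighbour and have different labels.\nimport numpy as np\nfrom sklearn.neighbors import NearestNeighbors\n\npts = np.array([[0.0], [0.4], [5.0]])\nlbl = np.array([0, 1, 1])\n\nnearest = NearestNeighbors(n_neighbors=2).fit(pts).kneighbors(pts, return_distance=False)[:, 1]\nfor i, j in enumerate(nearest):\n    if nearest[j] == i and lbl[i] != lbl[j] and i < j:\n        print('Tomek link between samples', i, 'and', j)",
"_____no_output_____"
]
],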
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.model_selection import train_test_split\n\nfrom imblearn.under_sampling import TomekLinks",
"_____no_output_____"
]
],
[
[
"## Create data\n\nWe will create data where the classes have different degrees of separateness.\n\nhttps://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html",
"_____no_output_____"
]
],
[
[
"def make_data(sep):\n \n # returns arrays\n X, y = make_classification(n_samples=1000,\n n_features=2,\n n_redundant=0,\n n_clusters_per_class=1,\n weights=[0.99],\n class_sep=sep,# how separate the classes are\n random_state=1)\n \n # trasform arrays into pandas df and series\n X = pd.DataFrame(X, columns =['varA', 'varB'])\n y = pd.Series(y)\n \n return X, y",
"_____no_output_____"
]
],
[
[
"## Undersample with Tomek Links\n\nhttps://imbalanced-learn.org/stable/generated/imblearn.under_sampling.TomekLinks.html\n\n### Well separated classes",
"_____no_output_____"
]
],
[
[
"# create data\n\nX, y = make_data(sep=2)\n\n# set up Tomek Links\n\ntl = TomekLinks(\n sampling_strategy='auto', # undersamples only the majority class\n n_jobs=4) # I have 4 cores in my laptop\n\nX_resampled, y_resampled = tl.fit_resample(X, y)",
"_____no_output_____"
],
[
"# size of original data\n\nX.shape, y.shape",
"_____no_output_____"
],
[
"# size of undersampled data\n\nX_resampled.shape, y_resampled.shape",
"_____no_output_____"
],
[
"# number of minority class observations\n\ny.value_counts()",
"_____no_output_____"
],
[
"sns.scatterplot(\n data=X, x=\"varA\", y=\"varB\", hue=y\n)\n\nplt.title('Original dataset')\nplt.show()",
"_____no_output_____"
],
[
"# plot undersampled data\n\nsns.scatterplot(\n data=X_resampled, x=\"varA\", y=\"varB\", hue=y_resampled\n)\n\nplt.title('Undersampled dataset')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Partially separated classes",
"_____no_output_____"
]
],
[
[
"# create data\nX, y = make_data(sep=0.5)\n\n# set up Tomek Links\n\ntl = TomekLinks(\n sampling_strategy='auto', # undersamples only the majority class\n n_jobs=4) # I have 4 cores in my laptop\n\nX_resampled, y_resampled = tl.fit_resample(X, y)",
"_____no_output_____"
],
[
"# original data\n\nX.shape, y.shape",
"_____no_output_____"
],
[
"# undersampled data\n\nX_resampled.shape, y_resampled.shape",
"_____no_output_____"
]
],
[
[
"Note that more samples were excluded in the final training set, compared to the previous case where classes were more separated. This is because there are more Tomek Links, as the classes are now not so separated.",
"_____no_output_____"
]
],
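[
[
"# (added) quick count of how many Tomek-link observations were dropped in this less-separated case\nprint('observations removed:', len(y) - len(y_resampled))",
"_____no_output_____"
]
],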
[
[
"sns.scatterplot(\n data=X, x=\"varA\", y=\"varB\", hue=y\n)\n\nplt.title('Original dataset')\nplt.show()",
"_____no_output_____"
],
[
"# plot undersampled data\n\nsns.scatterplot(\n data=X_resampled, x=\"varA\", y=\"varB\", hue=y_resampled\n)\n\nplt.title('Undersampled dataset')\nplt.show()",
"_____no_output_____"
]
],
[
[
"**HOMEWORK**\n\n- Remove both observations from the Tomek Link and compare the sizes of the undersampled datasets and the distribution of the observations in the plots.\n\n===\n\n\n## Tomek Links\n\n### Real data - Performance comparison\n\nDoes it work well with real datasets? \n\nWell, it will depend on the dataset, so we need to try and compare the models built on the whole dataset, and that built on the undersampled dataset.",
"_____no_output_____"
]
],
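[
[
"# (added) hedged sketch for the homework above: dropping BOTH members of every Tomek link.\n# sampling_strategy='all' asks the cleaner to remove links from all classes; treat this as an\n# illustration to adapt, not as part of the original analysis.\ntl_all = TomekLinks(sampling_strategy='all', n_jobs=4)\nX_resampled_all, y_resampled_all = tl_all.fit_resample(X, y)\nprint(X_resampled_all.shape, y_resampled_all.shape)",
"_____no_output_____"
]
],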
[
[
"# load data\n# only a few observations to speed the computaton\n\ndata = pd.read_csv('../kdd2004.csv').sample(10000)\n\ndata.head()",
"_____no_output_____"
],
[
"# imbalanced target\ndata.target.value_counts() / len(data)",
"_____no_output_____"
],
[
"# separate dataset into train and test\n\nX_train, X_test, y_train, y_test = train_test_split(\n data.drop(labels=['target'], axis=1), # drop the target\n data['target'], # just the target\n test_size=0.3,\n random_state=0)\n\nX_train.shape, X_test.shape",
"_____no_output_____"
],
[
"# set up Tomek Links\n\ntl = TomekLinks(\n sampling_strategy='auto', # undersamples only the majority class\n n_jobs=4) # I have 4 cores in my laptop\n\nX_resampled, y_resampled = tl.fit_resample(X_train, y_train)",
"_____no_output_____"
],
[
"# size of undersampled data\n\nX_resampled.shape, y_resampled.shape",
"_____no_output_____"
]
],
[
[
"The under-sampled data set is very similar to the original dataset, only 5 observations were removed. So there is no real point in testing the performance. The difference in performance will most likely be driven by the randomness of Random Forests than by the difference in the datasets.",
"_____no_output_____"
]
],
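[
[
"# (added) quick check of the '5 observations removed' statement above\nprint('observations removed:', X_train.shape[0] - X_resampled.shape[0])",
"_____no_output_____"
]
],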
[
[
"# number of positive class in original dataset\ny_train.value_counts()",
"_____no_output_____"
]
],
[
[
"## Plot data\n\nLet's compare how the data looks before and after the undersampling.",
"_____no_output_____"
]
],
[
[
"# original data\n\nsns.scatterplot(data=X_train,\n x=\"0\",\n y=\"1\",\n hue=y_train)\n\nplt.title('Original data')",
"_____no_output_____"
],
[
"# undersampled data\n\nsns.scatterplot(data=X_resampled,\n x=\"0\",\n y=\"1\",\n hue=y_resampled)\n\nplt.title('Undersampled data')",
"_____no_output_____"
],
[
"# original data\n\nsns.scatterplot(data=X_train,\n x=\"4\",\n y=\"5\",\n hue=y_train)\n\nplt.title('Original data')",
"_____no_output_____"
],
[
"sns.scatterplot(data=X_resampled,\n x=\"4\",\n y=\"5\",\n hue=y_resampled)\n\nplt.title('Undersampled data')",
"_____no_output_____"
]
],
[
[
"## Machine learning performance comparison\n\nLet's compare model performance with and without undersampling.",
"_____no_output_____"
]
],
[
[
"# function to train random forests and evaluate the performance\n\ndef run_randomForests(X_train, X_test, y_train, y_test):\n \n rf = RandomForestClassifier(n_estimators=200, random_state=39, max_depth=4)\n rf.fit(X_train, y_train)\n\n print('Train set')\n pred = rf.predict_proba(X_train)\n print('Random Forests roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))\n \n print('Test set')\n pred = rf.predict_proba(X_test)\n print('Random Forests roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))",
"_____no_output_____"
],
[
"# evaluate performance of algorithm built\n# using imbalanced dataset\n\nrun_randomForests(X_train,\n X_test,\n y_train,\n y_test)",
"Train set\nRandom Forests roc-auc: 0.9969960286480432\nTest set\nRandom Forests roc-auc: 0.9688281099788503\n"
],
[
"# evaluate performance of algorithm built\n# using undersampled dataset\n\nrun_randomForests(X_resampled,\n X_test,\n y_resampled,\n y_test)",
"Train set\nRandom Forests roc-auc: 0.9982419187715514\nTest set\nRandom Forests roc-auc: 0.9560060565275909\n"
]
],
[
[
"Removing Tomek Links did not seem to improve performance.\n\n**HOMEWORK**\n\n- Try removing both members of the Tomek Link. Compare final dataset size, model performance and the distributions of the observations before and after the undersampling.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecbead025a3de3ce0ab5110d169e5bc3e879c790 | 490,991 | ipynb | Jupyter Notebook | lec_03_statistical_graphs_mpl_default.ipynb | exploreshaifali/content | 5670dd6c0107d2ec7364bb7dbe4fccea58589dc8 | [
"MIT"
] | 2 | 2015-04-27T13:59:17.000Z | 2016-10-20T10:17:38.000Z | lec_03_statistical_graphs_mpl_default.ipynb | exploreshaifali/content | 5670dd6c0107d2ec7364bb7dbe4fccea58589dc8 | [
"MIT"
] | null | null | null | lec_03_statistical_graphs_mpl_default.ipynb | exploreshaifali/content | 5670dd6c0107d2ec7364bb7dbe4fccea58589dc8 | [
"MIT"
] | null | null | null | 739.444277 | 119,181 | 0.938255 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecbeb108f7996514df8565ed2c0e2cda457c01c1 | 306,783 | ipynb | Jupyter Notebook | MNIST-2/V2.ipynb | 100rab-S/Fun-with-MNIST | 5a8da1805cf3feae990188fa7741fe12bcc3a50e | [
"MIT"
] | null | null | null | MNIST-2/V2.ipynb | 100rab-S/Fun-with-MNIST | 5a8da1805cf3feae990188fa7741fe12bcc3a50e | [
"MIT"
] | null | null | null | MNIST-2/V2.ipynb | 100rab-S/Fun-with-MNIST | 5a8da1805cf3feae990188fa7741fe12bcc3a50e | [
"MIT"
] | 1 | 2021-11-05T12:01:37.000Z | 2021-11-05T12:01:37.000Z | 306,783 | 306,783 | 0.922633 | [
[
[
"cd /content/drive/MyDrive/Fun With MNIST/MNIST-2 (Binary Label Classification)",
"/content/drive/MyDrive/Fun With MNIST/MNIST-2 (Binary Label Classification)\n"
],
[
"import tensorflow as tf\nimport numpy as np\nimport pandas as pd\nimport cv2\nfrom google.colab.patches import cv2_imshow\nimport matplotlib.pyplot as plt\nfrom tensorflow.keras.utils import to_categorical\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import MultiLabelBinarizer\nfrom tensorflow.keras.utils import plot_model\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\nfrom tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Input, Flatten, Dropout, Add, Activation\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.optimizers import Adam, SGD",
"_____no_output_____"
],
[
"def create_dataset(dataset_size):\n '''\n Function to create dataset for multi label classification by horizontally stacking two images.\n Parameters:\n dataset_size = size of the dataset to be created\n Returns : New dataset with two classes in one image and size of the returned dataset will not match with dataset_size variable, since we drop few images.\n '''\n mnist = tf.keras.datasets.mnist\n (x_train, y_train), (x_test, y_test) = mnist.load_data()\n\n x = np.concatenate((x_train, x_test), axis = 0) #contatenating both train and test dataset to create one large dataset.\n y = np.concatenate((y_train, y_test))\n print('Shape of the dataset after concatinating:')\n print(x.shape, y.shape)\n\n x_new = []\n y_new = []\n no_of_removes = 0\n for _ in range(dataset_size):\n indices = np.random.randint(0, 70000, size = 2) #randomly selecting two indices for stacking.\n\n ans1, ans2 = y[indices[0]], y[indices[1]]\n\n if ans1 == ans2: # check if both the images have same target, if yes then skip that example and donot add it to the dataset. Although this\n # should not effect the model's performance but still for sanity check.\n no_of_removes +=1\n pass\n else:\n new_image = np.concatenate((x[indices[0]], x[indices[1]]), axis = 1)\n x_new.append(new_image)\n\n\n # new_y = [1 if z == ans1 or z == ans2 else 0 for z in range(10)]\n y_new.append((ans1, ans2))\n\n print(f'No of examples removed from dataset: {no_of_removes}')\n return x_new, y_new",
"_____no_output_____"
],
[
"dataset_size = 20000\nx, y = create_dataset(dataset_size)",
"Shape of the dataset after concatinating:\n(70000, 28, 28) (70000,)\nNo of examples removed from dataset: 2087\n"
],
[
"# Randomly display an example from the new dataset formed.\nrandom = np.random.randint(dataset_size)\nplt.imshow(x[random], cmap = 'gray')\nprint(y[random])",
"(6, 7)\n"
],
[
"def ml_split(x, y):\n '''\n Multi hot encode the target variable and divide the data into train, validation and test data.\n '''\n # ml = MultiLabelBinarizer()\n # y = ml.fit_transform(y)\n\n X_train, X_valid, y_train, y_valid = train_test_split(x, y, test_size=0.20, random_state=42)\n X_train, X_test, y_train, y_test = train_test_split(X_train, y_train, test_size = 0.2, random_state = 42)\n\n ml = MultiLabelBinarizer()\n y_train = ml.fit_transform(y_train)\n y_valid = ml.transform(y_valid)\n y_test = ml.transform(y_test)\n\n return X_train, y_train, X_valid, y_valid, X_test, y_test",
"_____no_output_____"
],
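[
"# A small illustration (sketch) of what MultiLabelBinarizer does to the (digit1, digit2) label tuples:\n# each pair becomes a 10-dimensional multi-hot vector with 1s at the two digit positions.\ndemo_ml = MultiLabelBinarizer(classes=list(range(10)))\ndemo_ml.fit_transform([(6, 7), (0, 3)])",
"_____no_output_____"
],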
[
"X_train, y_train, X_valid, y_valid, X_test, y_test = ml_split(x, y)",
"_____no_output_____"
],
[
"def print_shapes():\n print('Shapes of dataset:')\n print('Training dataset:')\n print(X_train.shape, y_train.shape)\n print('\\nValidation dataset:')\n print(X_valid.shape, y_valid.shape)\n print('\\nTesting dataset:')\n print(X_test.shape, y_test.shape)",
"_____no_output_____"
],
[
"def format_input(features, labels):\n '''\n convert the numpy array (images), labels to tensor objects for training, add the channel dimension to the images.\n '''\n features = tf.convert_to_tensor(features)\n features = tf.expand_dims(features, axis = -1)\n\n labels = tf.convert_to_tensor(labels)\n\n return features, labels",
"_____no_output_____"
],
[
"X_train, y_train = format_input(X_train, y_train)\nX_valid, y_valid = format_input(X_valid, y_valid)\nX_test, y_test = format_input(X_test, y_test)\n\nprint_shapes()",
"Shapes of dataset:\nTraining dataset:\n(11464, 28, 56, 1) (11464, 10)\n\nValidation dataset:\n(3583, 28, 56, 1) (3583, 10)\n\nTesting dataset:\n(2866, 28, 56, 1) (2866, 10)\n"
],
[
"def create_generator(bath_size):\n '''\n Creating generators for augmenting (none is mentioned right now), reshaping the data and for easy flow of data to the model.\n '''\n train_datagen = ImageDataGenerator(rescale = 1.0/255.0, dtype = 'float')\n\n valid_datagen = ImageDataGenerator(rescale = 1.0/255., dtype = 'float')\n\n train_generator = train_datagen.flow(X_train, y_train, batch_size=batch_size, shuffle = True, seed = 42)\n valid_generator = valid_datagen.flow(X_valid, y_valid, batch_size=batch_size, seed = 42)\n test_generator = valid_datagen.flow(X_test, y_test, batch_size=batch_size, seed = 42)\n\n return train_generator, valid_generator, test_generator",
"_____no_output_____"
],
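[
"# Sketch: if augmentation is wanted later, ImageDataGenerator supports it directly.\n# The values below are only an illustration, not tuned for this dataset\n# (rotations are avoided on purpose, since they can turn digits like 6 and 9 into each other).\n\n# aug_datagen = ImageDataGenerator(\n#     rescale=1.0/255.0,\n#     width_shift_range=0.1,\n#     height_shift_range=0.1,\n#     zoom_range=0.1)\n# train_generator = aug_datagen.flow(X_train, y_train, batch_size=batch_size, shuffle=True, seed=42)",
"_____no_output_____"
],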
[
"batch_size = 128\ntrain_generator, valid_generator, test_generator = create_generator(batch_size)",
"_____no_output_____"
],
[
"# define cnn model\ndef define_model(shape=(28, 56, 1), num_classes=10):\n '''\n Function to create model.\n '''\n model = tf.keras.models.Sequential()\n model.add(Conv2D(32, (3, 3), padding='same', input_shape=shape))\n model.add(Activation('relu'))\n model.add(Conv2D(32, (3, 3), padding='same'))\n model.add(MaxPooling2D((2, 2)))\n\t# model.add(Dropout(0.2))\n model.add(Activation('relu'))\n model.add(Conv2D(64, (3, 3), padding='same'))\n model.add(Conv2D(64, (3, 3), padding='same'))\n model.add(MaxPooling2D((2, 2)))\n model.add(Activation('relu'))\n\t# model.add(Dropout(0.2))\n model.add(Conv2D(128, (3, 3), padding='same'))\n model.add(Conv2D(128, (3, 3), padding='same'))\n model.add(MaxPooling2D((2, 2)))\n model.add(Activation('relu'))\n\t# model.add(Dropout(0.2))\n model.add(Conv2D(256, (3, 3), padding='same'))\n model.add(Conv2D(256, (3, 3), padding='same'))\n model.add(MaxPooling2D((2, 2)))\n model.add(Activation('relu'))\n \n model.add(Flatten())\n # model.add(Dense(512, kernel_initializer='he_uniform'))\n # model.add(Activation('relu'))\n # model.add(Dense(256, kernel_initializer='he_uniform'))\n # model.add(Activation('relu'))\n model.add(Dense(128, activation = 'relu'))\n model.add(Dense(64, activation = 'relu'))\n\t# model.add(Dropout(0.5))\n model.add(Dense(num_classes, activation='sigmoid'))\n \n return model\n \nmodel = define_model()",
"_____no_output_____"
],
[
"model.summary() # display model summary",
"_____no_output_____"
],
[
"plot_model(model, show_shapes = True) # plot model",
"_____no_output_____"
],
[
"# Define callbacks for the model, so that we can stop the training when required conditions are met.\n\nclass myCallback(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs={}):\n if(logs.get('accuracy')>0.75):\n print(\"\\nReached 75% accuracy so cancelling training!\")\n self.model.stop_training = True\n\nes = tf.keras.callbacks.EarlyStopping(monitor = 'accuracy', patience = 25, verbose = 1, restore_best_weights=True,\n min_delta = 0.02)\ncallbacks = myCallback()\n\n# lr_schedule = tf.keras.callbacks.LearningRateScheduler(\n# lambda epoch: 1e-8 * 10**(epoch / 20))",
"_____no_output_____"
],
[
"opt = tf.keras.optimizers.RMSprop()\nmodel.compile(loss = 'binary_crossentropy', optimizer = opt, metrics = ['accuracy'])",
"_____no_output_____"
],
[
"epochs = 200\n\nhist = model.fit(train_generator, epochs = epochs, validation_data = valid_generator, callbacks=[callbacks])",
"Epoch 1/200\n 6/90 [=>............................] - ETA: 5s - loss: 0.0133 - accuracy: 0.5651WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0318s vs `on_train_batch_end` time: 0.0373s). Check your callbacks.\n90/90 [==============================] - 5s 60ms/step - loss: 0.0182 - accuracy: 0.5524 - val_loss: 0.0203 - val_accuracy: 0.5504\nEpoch 2/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0119 - accuracy: 0.5599 - val_loss: 0.0264 - val_accuracy: 0.6095\nEpoch 3/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0132 - accuracy: 0.5514 - val_loss: 0.0215 - val_accuracy: 0.4937\nEpoch 4/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0082 - accuracy: 0.5178 - val_loss: 0.0210 - val_accuracy: 0.4546\nEpoch 5/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0075 - accuracy: 0.5313 - val_loss: 0.0262 - val_accuracy: 0.4061\nEpoch 6/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0080 - accuracy: 0.5347 - val_loss: 0.0208 - val_accuracy: 0.4722\nEpoch 7/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0064 - accuracy: 0.5031 - val_loss: 0.0212 - val_accuracy: 0.3801\nEpoch 8/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0061 - accuracy: 0.4826 - val_loss: 0.0183 - val_accuracy: 0.4786\nEpoch 9/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0056 - accuracy: 0.5149 - val_loss: 0.0199 - val_accuracy: 0.5523\nEpoch 10/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0064 - accuracy: 0.5467 - val_loss: 0.0194 - val_accuracy: 0.5786\nEpoch 11/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0059 - accuracy: 0.5494 - val_loss: 0.0274 - val_accuracy: 0.6327\nEpoch 12/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0049 - accuracy: 0.5206 - val_loss: 0.0468 - val_accuracy: 0.4519\nEpoch 13/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0051 - accuracy: 0.5255 - val_loss: 0.0272 - val_accuracy: 0.5347\nEpoch 14/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0049 - accuracy: 0.5166 - val_loss: 0.0171 - val_accuracy: 0.4725\nEpoch 15/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0045 - accuracy: 0.5475 - val_loss: 0.0271 - val_accuracy: 0.4050\nEpoch 16/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0040 - accuracy: 0.5153 - val_loss: 0.0232 - val_accuracy: 0.5599\nEpoch 17/200\n90/90 [==============================] - 5s 58ms/step - loss: 0.0044 - accuracy: 0.5689 - val_loss: 0.0207 - val_accuracy: 0.5314\nEpoch 18/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0050 - accuracy: 0.5262 - val_loss: 0.0258 - val_accuracy: 0.5607\nEpoch 19/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0043 - accuracy: 0.5376 - val_loss: 0.0263 - val_accuracy: 0.5241\nEpoch 20/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0043 - accuracy: 0.5654 - val_loss: 0.0247 - val_accuracy: 0.5855\nEpoch 21/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0041 - accuracy: 0.5791 - val_loss: 0.0278 - val_accuracy: 0.5808\nEpoch 22/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0043 - accuracy: 0.5351 - val_loss: 0.0279 - val_accuracy: 0.4870\nEpoch 23/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0042 - accuracy: 0.5202 - val_loss: 
0.0553 - val_accuracy: 0.4845\nEpoch 24/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0041 - accuracy: 0.5406 - val_loss: 0.0269 - val_accuracy: 0.5928\nEpoch 25/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0039 - accuracy: 0.5495 - val_loss: 0.0209 - val_accuracy: 0.5289\nEpoch 26/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0034 - accuracy: 0.5586 - val_loss: 0.0439 - val_accuracy: 0.4538\nEpoch 27/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0034 - accuracy: 0.5222 - val_loss: 0.0191 - val_accuracy: 0.5306\nEpoch 28/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0039 - accuracy: 0.5147 - val_loss: 0.0365 - val_accuracy: 0.5336\nEpoch 29/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0034 - accuracy: 0.5554 - val_loss: 0.0250 - val_accuracy: 0.5244\nEpoch 30/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0033 - accuracy: 0.5959 - val_loss: 0.0277 - val_accuracy: 0.6026\nEpoch 31/200\n90/90 [==============================] - 5s 58ms/step - loss: 0.0049 - accuracy: 0.5651 - val_loss: 0.0619 - val_accuracy: 0.5096\nEpoch 32/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0032 - accuracy: 0.5174 - val_loss: 0.0416 - val_accuracy: 0.5057\nEpoch 33/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0041 - accuracy: 0.5489 - val_loss: 0.0328 - val_accuracy: 0.5975\nEpoch 34/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0029 - accuracy: 0.5679 - val_loss: 0.0489 - val_accuracy: 0.5501\nEpoch 35/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0030 - accuracy: 0.5664 - val_loss: 0.0403 - val_accuracy: 0.5412\nEpoch 36/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0041 - accuracy: 0.5370 - val_loss: 0.0231 - val_accuracy: 0.5018\nEpoch 37/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0025 - accuracy: 0.5317 - val_loss: 0.0704 - val_accuracy: 0.4463\nEpoch 38/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0037 - accuracy: 0.5693 - val_loss: 0.0386 - val_accuracy: 0.6316\nEpoch 39/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0041 - accuracy: 0.5544 - val_loss: 0.0285 - val_accuracy: 0.5454\nEpoch 40/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0036 - accuracy: 0.5648 - val_loss: 0.0370 - val_accuracy: 0.4865\nEpoch 41/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0035 - accuracy: 0.5688 - val_loss: 0.0369 - val_accuracy: 0.4279\nEpoch 42/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0040 - accuracy: 0.5690 - val_loss: 0.0294 - val_accuracy: 0.6084\nEpoch 43/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0027 - accuracy: 0.5961 - val_loss: 0.0447 - val_accuracy: 0.6006\nEpoch 44/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0035 - accuracy: 0.5707 - val_loss: 0.0405 - val_accuracy: 0.5755\nEpoch 45/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0038 - accuracy: 0.6272 - val_loss: 0.0321 - val_accuracy: 0.6227\nEpoch 46/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0032 - accuracy: 0.6035 - val_loss: 0.0324 - val_accuracy: 0.6221\nEpoch 47/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0041 - accuracy: 0.5742 - val_loss: 0.0335 - val_accuracy: 0.5961\nEpoch 48/200\n90/90 
[==============================] - 5s 60ms/step - loss: 0.0023 - accuracy: 0.5760 - val_loss: 0.0380 - val_accuracy: 0.6056\nEpoch 49/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0041 - accuracy: 0.5686 - val_loss: 0.0425 - val_accuracy: 0.5688\nEpoch 50/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0039 - accuracy: 0.5738 - val_loss: 0.0330 - val_accuracy: 0.5562\nEpoch 51/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0033 - accuracy: 0.5447 - val_loss: 0.0301 - val_accuracy: 0.6294\nEpoch 52/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0032 - accuracy: 0.5708 - val_loss: 0.0491 - val_accuracy: 0.5691\nEpoch 53/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0045 - accuracy: 0.5427 - val_loss: 0.0422 - val_accuracy: 0.5721\nEpoch 54/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0029 - accuracy: 0.5731 - val_loss: 0.0475 - val_accuracy: 0.5727\nEpoch 55/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0041 - accuracy: 0.5724 - val_loss: 0.0488 - val_accuracy: 0.4895\nEpoch 56/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0033 - accuracy: 0.5731 - val_loss: 0.0474 - val_accuracy: 0.5677\nEpoch 57/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0032 - accuracy: 0.5755 - val_loss: 0.0335 - val_accuracy: 0.6322\nEpoch 58/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0042 - accuracy: 0.5877 - val_loss: 0.0428 - val_accuracy: 0.5074\nEpoch 59/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0039 - accuracy: 0.5316 - val_loss: 0.0412 - val_accuracy: 0.4747\nEpoch 60/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0042 - accuracy: 0.5444 - val_loss: 0.0419 - val_accuracy: 0.5853\nEpoch 61/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0039 - accuracy: 0.5467 - val_loss: 0.0572 - val_accuracy: 0.5721\nEpoch 62/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0028 - accuracy: 0.5278 - val_loss: 0.0619 - val_accuracy: 0.5306\nEpoch 63/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0030 - accuracy: 0.5101 - val_loss: 0.0515 - val_accuracy: 0.4549\nEpoch 64/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0034 - accuracy: 0.5296 - val_loss: 0.0841 - val_accuracy: 0.6115\nEpoch 65/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0045 - accuracy: 0.6018 - val_loss: 0.0818 - val_accuracy: 0.6204\nEpoch 66/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0047 - accuracy: 0.5637 - val_loss: 0.0750 - val_accuracy: 0.5579\nEpoch 67/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0042 - accuracy: 0.5875 - val_loss: 0.0492 - val_accuracy: 0.5950\nEpoch 68/200\n90/90 [==============================] - 6s 62ms/step - loss: 0.0038 - accuracy: 0.5904 - val_loss: 0.0366 - val_accuracy: 0.5727\nEpoch 69/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0032 - accuracy: 0.6153 - val_loss: 0.0776 - val_accuracy: 0.5783\nEpoch 70/200\n90/90 [==============================] - 6s 62ms/step - loss: 0.0032 - accuracy: 0.5858 - val_loss: 0.0608 - val_accuracy: 0.5864\nEpoch 71/200\n90/90 [==============================] - 6s 61ms/step - loss: 0.0046 - accuracy: 0.5939 - val_loss: 0.0333 - val_accuracy: 0.6031\nEpoch 72/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0025 - accuracy: 
0.5762 - val_loss: 0.0692 - val_accuracy: 0.5571\nEpoch 73/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0044 - accuracy: 0.5291 - val_loss: 0.0514 - val_accuracy: 0.5211\nEpoch 74/200\n90/90 [==============================] - 6s 62ms/step - loss: 0.0039 - accuracy: 0.5509 - val_loss: 0.0555 - val_accuracy: 0.5317\nEpoch 75/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0040 - accuracy: 0.5564 - val_loss: 0.0545 - val_accuracy: 0.5085\nEpoch 76/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0031 - accuracy: 0.5760 - val_loss: 0.1342 - val_accuracy: 0.4533\nEpoch 77/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0046 - accuracy: 0.5793 - val_loss: 0.0677 - val_accuracy: 0.6433\nEpoch 78/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0029 - accuracy: 0.6379 - val_loss: 0.0553 - val_accuracy: 0.5998\nEpoch 79/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0035 - accuracy: 0.6338 - val_loss: 0.0734 - val_accuracy: 0.5975\nEpoch 80/200\n90/90 [==============================] - 6s 61ms/step - loss: 0.0049 - accuracy: 0.5875 - val_loss: 0.0672 - val_accuracy: 0.5777\nEpoch 81/200\n90/90 [==============================] - 6s 61ms/step - loss: 0.0040 - accuracy: 0.5923 - val_loss: 0.0761 - val_accuracy: 0.5557\nEpoch 82/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0042 - accuracy: 0.5567 - val_loss: 0.0484 - val_accuracy: 0.6202\nEpoch 83/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0026 - accuracy: 0.6001 - val_loss: 0.0739 - val_accuracy: 0.5858\nEpoch 84/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0068 - accuracy: 0.6233 - val_loss: 0.0424 - val_accuracy: 0.5627\nEpoch 85/200\n90/90 [==============================] - 6s 61ms/step - loss: 0.0019 - accuracy: 0.5830 - val_loss: 0.0578 - val_accuracy: 0.5587\nEpoch 86/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0048 - accuracy: 0.5668 - val_loss: 0.0755 - val_accuracy: 0.5641\nEpoch 87/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0037 - accuracy: 0.5539 - val_loss: 0.0923 - val_accuracy: 0.5719\nEpoch 88/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0043 - accuracy: 0.6239 - val_loss: 0.0626 - val_accuracy: 0.6843\nEpoch 89/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0032 - accuracy: 0.6260 - val_loss: 0.0592 - val_accuracy: 0.6266\nEpoch 90/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0041 - accuracy: 0.6131 - val_loss: 0.0935 - val_accuracy: 0.5654\nEpoch 91/200\n90/90 [==============================] - 6s 61ms/step - loss: 0.0054 - accuracy: 0.6130 - val_loss: 0.0663 - val_accuracy: 0.6994\nEpoch 92/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0039 - accuracy: 0.6333 - val_loss: 0.0487 - val_accuracy: 0.5766\nEpoch 93/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0039 - accuracy: 0.5957 - val_loss: 0.0741 - val_accuracy: 0.6361\nEpoch 94/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0043 - accuracy: 0.5914 - val_loss: 0.0891 - val_accuracy: 0.5303\nEpoch 95/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0027 - accuracy: 0.6315 - val_loss: 0.1222 - val_accuracy: 0.5850\nEpoch 96/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0037 - accuracy: 0.5655 - val_loss: 0.1063 - val_accuracy: 0.5454\nEpoch 97/200\n90/90 
[==============================] - 5s 60ms/step - loss: 0.0028 - accuracy: 0.6084 - val_loss: 0.0995 - val_accuracy: 0.6416\nEpoch 98/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0057 - accuracy: 0.6227 - val_loss: 0.0947 - val_accuracy: 0.5992\nEpoch 99/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0036 - accuracy: 0.6378 - val_loss: 0.1209 - val_accuracy: 0.6358\nEpoch 100/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0069 - accuracy: 0.5674 - val_loss: 0.0853 - val_accuracy: 0.5758\nEpoch 101/200\n90/90 [==============================] - 6s 62ms/step - loss: 0.0037 - accuracy: 0.6281 - val_loss: 0.0552 - val_accuracy: 0.6188\nEpoch 102/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0045 - accuracy: 0.6570 - val_loss: 0.0758 - val_accuracy: 0.6282\nEpoch 103/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0044 - accuracy: 0.6245 - val_loss: 0.0605 - val_accuracy: 0.6333\nEpoch 104/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0037 - accuracy: 0.6323 - val_loss: 0.0997 - val_accuracy: 0.6606\nEpoch 105/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0032 - accuracy: 0.6243 - val_loss: 0.0842 - val_accuracy: 0.6221\nEpoch 106/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0047 - accuracy: 0.6089 - val_loss: 0.0968 - val_accuracy: 0.6107\nEpoch 107/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0069 - accuracy: 0.6281 - val_loss: 0.1226 - val_accuracy: 0.6112\nEpoch 108/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0047 - accuracy: 0.6282 - val_loss: 0.1634 - val_accuracy: 0.5855\nEpoch 109/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0070 - accuracy: 0.6333 - val_loss: 0.0964 - val_accuracy: 0.6369\nEpoch 110/200\n90/90 [==============================] - 5s 59ms/step - loss: 0.0037 - accuracy: 0.6338 - val_loss: 0.1069 - val_accuracy: 0.6151\nEpoch 111/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0039 - accuracy: 0.6194 - val_loss: 0.1295 - val_accuracy: 0.6564\nEpoch 112/200\n90/90 [==============================] - 5s 60ms/step - loss: 0.0062 - accuracy: 0.5905 - val_loss: 0.1315 - val_accuracy: 0.6536\nEpoch 113/200\n90/90 [==============================] - 5s 61ms/step - loss: 0.0053 - accuracy: 0.6144 - val_loss: 0.1024 - val_accuracy: 0.5995\nEpoch 114/200\n12/90 [===>..........................] - ETA: 4s - loss: 0.0043 - accuracy: 0.5553"
],
[
"# outputLabels = np.unique(y_train)\n\n# from sklearn.utils import compute_class_weight\n# classWeight = compute_class_weight('balanced', outputLabels, y_train) \n# classWeight = dict(enumerate(classWeight))\n# model.fit(train_generator, epochs = epochs, validation_data = (valid_generator), class_weight=classWeight, callbacks=[callbacks, es])",
"_____no_output_____"
],
[
"train_acc = hist.history['accuracy']\ntrain_loss = hist.history['loss']\n\nvalid_acc = hist.history['val_accuracy']\nvalid_loss = hist.history['val_loss']\nepochs = range(len(train_acc))",
"_____no_output_____"
],
[
"plt.plot(epochs, train_acc, 'r', label = 'Train Accuracy')\nplt.plot(epochs, valid_acc, 'b', label = 'Validation Accuracy')\nplt.legend()\nplt.title('Accuracy')",
"_____no_output_____"
],
[
"plt.plot(epochs, train_loss, 'r', label = 'Train Loss')\nplt.plot(epochs, valid_loss, 'b', label = 'Validation Loss')\nplt.legend()\nplt.title('Loss')",
"_____no_output_____"
],
[
"loss, accuracy = model.evaluate(test_generator, batch_size = batch_size)\nprint(loss)\nprint(int(accuracy * 100), '%')",
"23/23 [==============================] - 1s 50ms/step - loss: 0.5003 - accuracy: 0.2024\n0.5003483295440674\n20 %\n"
],
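[
"# Sketch: interpreting the sigmoid outputs of this multi-label model.\n# Each prediction is a vector of 10 independent probabilities; the two largest entries\n# are the model's guess for the two digits present in the image.\n\n# sample_batch, sample_labels = test_generator[0]\n# probs = model.predict(sample_batch)\n# top2_pred = np.argsort(probs, axis=1)[:, -2:]   # indices of the two highest scores\n# top2_true = np.argsort(sample_labels, axis=1)[:, -2:]\n# print(top2_pred[:5])\n# print(top2_true[:5])",
"_____no_output_____"
],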
[
"# model.save('mnist-2-2.h5')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbec528867b34bd94b486117a76263818494e8d | 2,452 | ipynb | Jupyter Notebook | Day-6 Lets Upgrade.ipynb | Suraj-08/LetsUpgrade-Python-B7 | 2f86ba5cd64de127d680363187780cc631bf1b52 | [
"Apache-2.0"
] | null | null | null | Day-6 Lets Upgrade.ipynb | Suraj-08/LetsUpgrade-Python-B7 | 2f86ba5cd64de127d680363187780cc631bf1b52 | [
"Apache-2.0"
] | null | null | null | Day-6 Lets Upgrade.ipynb | Suraj-08/LetsUpgrade-Python-B7 | 2f86ba5cd64de127d680363187780cc631bf1b52 | [
"Apache-2.0"
] | null | null | null | 24.767677 | 68 | 0.501631 | [
[
[
"\n# OOPS",
"_____no_output_____"
]
],
[
[
"class Bank_account():\n def __init__(self,OwnerName,Amount):\n self.Balance = 0\n self.OwnerName = OwnerName\n self.Amount = Amount\n \n def deposit(self):\n Amount = int(input(\"Enter Deposit amount:-\"))\n Balance = int(input(\"Enter Amount in Bank:-\"))\n print(\"Name:\",self.OwnerName)\n print(\"Remaining Balance =\",Balance + Amount)\n \n def withdraw(self):\n Amount = int(input(\"Enter Withdrawl amount:-\"))\n Balance = int(input(\"Enter Amount in Bank:-\"))\n print(\"Name:\",self.OwnerName)\n print(\"Amount after withdrawl =\",Balance - Amount)\n\nOwner = Bank_account(\"Siva\",5000)\nprint(Owner.deposit())\nWithdraw = Bank_account(\"Malik\",9000)\nprint(Withdraw.withdraw())",
"_____no_output_____"
]
],
[
[
"# Cone",
"_____no_output_____"
]
],
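[
[
"For reference, the formulas computed in the next cell, with $r$ = radius and $h$ = height:\n\n$l = \\sqrt{r^2 + h^2}$ (slant height)\n\n$A = \\pi r (r + l)$ (total surface area)\n\n$V = \\frac{1}{3} \\pi r^2 h$ (volume)",
"_____no_output_____"
]
],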
[
[
"import math\nradius = float(input('Enter the Radius of Cone:- '))\nheight = float(input('Enter the Height of Cone:- '))\n\nlength = math.sqrt(radius * radius + height * height)\n\nSurface_area = math.pi * radius * (radius + length)\n\nVolume = (1.0/3) * math.pi * radius * radius * height\n\nprint(\"Length of a Side = %.2f\" %length)\nprint(\"Surface Area = %.2f \" %Surface_area)\nprint(\"Volume = %.2f\" %Volume);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbec6afc405c29cd0f60e237a8e286478aa1d33 | 49,821 | ipynb | Jupyter Notebook | black_swan_2_poc_lstm.ipynb | black-swan-2/PoC_LSTM_RNN | 8b45decfd177360b51384281a95c1da2e0221d3b | [
"MIT"
] | null | null | null | black_swan_2_poc_lstm.ipynb | black-swan-2/PoC_LSTM_RNN | 8b45decfd177360b51384281a95c1da2e0221d3b | [
"MIT"
] | null | null | null | black_swan_2_poc_lstm.ipynb | black-swan-2/PoC_LSTM_RNN | 8b45decfd177360b51384281a95c1da2e0221d3b | [
"MIT"
] | null | null | null | 212.910256 | 42,026 | 0.886935 | [
[
[
"from keras.layers.core import Dense, Activation, Dropout\nfrom keras.layers.recurrent import LSTM\nfrom keras.models import Sequential\nimport lstm, time #helper libraries",
"_____no_output_____"
],
[
"#Load and TTS data\nX_train, y_train, X_test, y_test = lstm.load_data('sp500.csv', 50, True)",
"_____no_output_____"
],
[
"#Building LSTM using linear activation\nmodel = Sequential()\n\nmodel.add(LSTM(\n input_dim=1,\n output_dim=50,\n return_sequences=True))\nmodel.add(Dropout(0.2))\n\nmodel.add(LSTM(\n 100,\n return_sequences=False))\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(\n output_dim=1))\nmodel.add(Activation('linear'))\n\nstart = time.time()\nmodel.compile(loss='mse', optimizer='rmsprop')\nprint 'compilation time : ', time.time() - start",
"WARNING: Logging before flag parsing goes to stderr.\nW0625 10:39:07.469749 140070833497984 deprecation_wrapper.py:119] From /usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nW0625 10:39:07.520582 140070833497984 deprecation_wrapper.py:119] From /usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nW0625 10:39:07.529637 140070833497984 deprecation_wrapper.py:119] From /usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nW0625 10:39:07.897356 140070833497984 deprecation_wrapper.py:119] From /usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py:133: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n\nW0625 10:39:07.908713 140070833497984 deprecation.py:506] From /usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\nW0625 10:39:08.132405 140070833497984 deprecation_wrapper.py:119] From /usr/local/lib/python2.7/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\n"
],
[
"#Training Model\nmodel.fit(\n X_train,\n y_train,\n batch_size=512,\n nb_epoch=1,\n validation_split=0.05)",
"W0625 10:39:16.167864 140070833497984 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_grad.py:1250: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nW0625 10:39:17.385895 140070833497984 deprecation_wrapper.py:119] From /usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n\n"
],
[
"# Output, not sure what I am looking at\npredictions = lstm.predict_sequences_multiple(model, X_test, 50, 50)\nlstm.plot_results_multiple(predictions, y_test, 50)",
"yo\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbed44d46399bf58a9bf01ccb67dd3750ec959b | 488,876 | ipynb | Jupyter Notebook | CameraCalibration.ipynb | anichrome/SelfDrivingCarND-AdvancedLaneFinding | fc394301a983c75b5fe838be8c79d568f928c606 | [
"MIT"
] | null | null | null | CameraCalibration.ipynb | anichrome/SelfDrivingCarND-AdvancedLaneFinding | fc394301a983c75b5fe838be8c79d568f928c606 | [
"MIT"
] | null | null | null | CameraCalibration.ipynb | anichrome/SelfDrivingCarND-AdvancedLaneFinding | fc394301a983c75b5fe838be8c79d568f928c606 | [
"MIT"
] | null | null | null | 3,621.303704 | 249,572 | 0.963089 | [
[
[
"# Camera Calibration for Advanced Lane Detection\nThe steps below calculates camera calibration parameters and uses them to correct undistort the distorted images.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport cv2\nimport glob\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\nhorizontalCorners = 9\nverticalCorners = 6\n\n\nfinalObjectPoints = [] # 3D points in real world space\nfinalImagePoints = [] # 2D points in image plane\n\n# Prepare object points \nobjectPoints = np.zeros((horizontalCorners * verticalCorners, 3), np.float32)\n\n# generate coordinates\nobjectPoints[:,:2] = np.mgrid[0:horizontalCorners, 0:verticalCorners].T.reshape(-1,2)\n\ncalibrationImagesFolder = 'camera_cal/calibration*.jpg'\ncalibrationImagesFileNames = glob.glob(calibrationImagesFolder)\n\nfor calibrationImageFileName in calibrationImagesFileNames:\n calibrationImage = mpimg.imread(calibrationImageFileName)\n \n grayCalibrationImage = cv2.cvtColor(calibrationImage, cv2.COLOR_RGB2GRAY)\n \n ret, corners = cv2.findChessboardCorners(\n grayCalibrationImage, (horizontalCorners, verticalCorners), None)\n \n if ret == True:\n finalImagePoints.append(corners)\n finalObjectPoints.append(objectPoints)\n \n calibrationImage = cv2.drawChessboardCorners(\n calibrationImage, (horizontalCorners, verticalCorners), corners, ret)\n ",
"_____no_output_____"
],
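[
"# Quick visual sanity check (sketch): show the last calibration image processed above;\n# if corners were found for it, they were drawn on by cv2.drawChessboardCorners.\nplt.figure(figsize=(10, 8))\nplt.title('Last calibration image (detected corners, if any)')\nplt.imshow(calibrationImage)",
"_____no_output_____"
],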
[
"import pickle\n\n# read a distorted image\ndistortedImage = mpimg.imread('camera_cal/calibration1.jpg')\nplt.figure(figsize=(10,8))\nplt.title('Distorted Image')\nplt.imshow(distortedImage)\n\n# calibrate camera\nret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(finalObjectPoints, finalImagePoints, distortedImage.shape[1::-1], None, None)\n\n# undistort\nundistortedImage = cv2.undistort(distortedImage, mtx, dist, None, mtx)\nplt.figure(figsize=(10,8))\nplt.title('Undistorted Image')\nplt.imshow(undistortedImage)\n\n# save distortion coefficients and the camera matrix\ncalibrationData = {\"mtx\": mtx, \"dist\" : dist }\npickle.dump(calibrationData, open(\"calibratedCameraData/cameraCalibrationData.p\", \"wb\"))",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
ecbedc5a5f1b8ce22302176222dae3452796494d | 75,398 | ipynb | Jupyter Notebook | data_model.ipynb | AlaaALatif/bjorn | 3bc4c2b5b5f6b18c93513721f5df96c47ba68ec8 | [
"MIT"
] | null | null | null | data_model.ipynb | AlaaALatif/bjorn | 3bc4c2b5b5f6b18c93513721f5df96c47ba68ec8 | [
"MIT"
] | null | null | null | data_model.ipynb | AlaaALatif/bjorn | 3bc4c2b5b5f6b18c93513721f5df96c47ba68ec8 | [
"MIT"
] | null | null | null | 33.795607 | 10,187 | 0.391602 | [
[
[
"import sys\nimport time\nimport json\nimport numpy as np\nimport pandas as pd\n\nimport bjorn_support as bs\nimport onion_trees as ot\nimport mutations as bm\nimport visualize as bv\nimport reports as br\nimport data as bd",
"_____no_output_____"
],
[
"import plotly\nimport plotly.express as px\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots",
"_____no_output_____"
],
[
"date = '2021-01-26'\ncountries_fp = '/home/al/data/geojsons/countries.geo.json'\nstates_fp = '/home/al/data/geojsons/us-states.json'\nsubs = pd.read_csv('/home/al/analysis/gisaid/subs_long_2021-01-25.csv.gz', \n compression='gzip')\ndels = pd.read_csv('/home/al/analysis/gisaid/dels_long_2021-01-25.csv.gz', \n compression='gzip')",
"_____no_output_____"
],
[
"(dels.groupby(['mutation', 'absolute_coords', 'del_len', 'del_seq'])\n .agg(num_samples=('idx', 'nunique'))\n .reset_index()\n .nlargest(50, 'num_samples'))",
"_____no_output_____"
],
[
"cols = ['mutation', 'strain', 'country', 'division', 'location', 'date', 'absolute_coords', 'del_len']",
"_____no_output_____"
],
[
"dels['pos'] = dels['absolute_coords'].apply(lambda x: int(x.split(':')[0]))\ndels['ref_codon'] = dels['del_seq'].copy()",
"_____no_output_____"
],
[
"print(subs.shape)\nprint(dels.shape)\nsubs['type'] = 'substitution'\nmuts = pd.concat([subs, dels])\nprint(muts.shape)",
"(6328749, 38)\n(117950, 44)\n(6446699, 47)\n"
],
[
"with open(countries_fp) as f:\n countries = json.load(f)\ncountry_map = {x['properties']['name']: x['id'] for x in countries['features']}\nmuts['country_id'] = muts['country'].apply(lambda x: country_map.get(x, 'NA'))\nwith open(states_fp) as f:\n states = json.load(f)\nstate_map = {x['properties']['name']: x['id'] for x in states['features']}\nmuts['division_id'] = muts['division'].apply(lambda x: state_map.get(x, 'NA'))",
"_____no_output_____"
],
[
"muts.rename(columns={\n 'date': 'date_collected',\n 'GISAID_clade': 'gisaid_clade',\n 'Nextstrain_clade': 'nextstrain_clade',\n 'del_len': 'change_length_nt'\n }, inplace=True)",
"_____no_output_____"
],
[
"muts.columns",
"_____no_output_____"
],
[
"def compute_acc_nt_pos(x, gene2pos):\n s = gene2pos.get(x['gene'], 0)\n return s + x['pos']\nmuts['nt_map_coords'] = muts[['gene', 'pos']].apply(compute_acc_nt_pos, \n args=(bd.GENE2NTCOORDS,), \n axis=1)",
"_____no_output_____"
],
[
"def compute_acc_aa_pos(x, gene2pos):\n s = gene2pos.get(x['gene'], 0)\n return s + x['codon_num']\nmuts['aa_map_coords'] = muts[['gene', 'codon_num']].apply(compute_acc_aa_pos, \n args=(bd.GENE2AACOORDS,), \n axis=1)",
"_____no_output_____"
],
[
"muts['date_modified'] = date",
"_____no_output_____"
],
[
"muts.columns",
"_____no_output_____"
],
[
"muts['is_synonymous'] = False\nmuts.loc[muts['ref_aa']==muts['alt_aa'], 'is_synonymous'] = True",
"_____no_output_____"
],
[
"meta_info = ['strain', 'date_modified',\n 'date_collected','date_submitted',\n 'country_id', 'country', \n 'division_id', 'division', 'location', \n 'submitting_lab', 'originating_lab',\n 'authors', 'pangolin_lineage', \n 'gisaid_clade', 'nextstrain_clade',\n 'gisaid_epi_isl', 'genbank_accession',\n 'purpose_of_sequencing']\n\nmuts_info = ['type', 'mutation', 'gene', \n 'ref_codon', 'pos', 'alt_codon', \n 'is_synonymous', \n 'ref_aa', 'codon_num', 'alt_aa', \n 'absolute_coords', \n 'change_length_nt', \n 'nt_map_coords', 'aa_map_coords']",
"_____no_output_____"
],
[
"muts.loc[muts['location']=='unk', 'location'] = 'NA'\nmuts.loc[muts['purpose_of_sequencing']=='?', 'purpose_of_sequencing'] = 'NA'\nmuts.loc[muts['genbank_accession']=='?', 'genbank_accession'] = 'NA'",
"_____no_output_____"
],
[
"muts.fillna('NA', inplace=True)",
"_____no_output_____"
],
[
"sample_ids = muts[['strain']].drop_duplicates().sample(10)['strain'].unique()\ntest = muts[muts['strain'].isin(sample_ids)]",
"_____no_output_____"
],
[
"# test['genbank_accession']",
"_____no_output_____"
],
[
"# test",
"_____no_output_____"
],
[
"start = time.time()\n(muts.groupby(meta_info, as_index=True)\n .apply(lambda x: x[muts_info].to_dict('records'))\n .reset_index()\n .rename(columns={0:'mutations'})\n .to_json('test_data/data_model_2021-01-26.json.gz', \n orient='records',\n compression='gzip'))\nend = time.time()\nprint(f'Execution time: {end - start} seconds')",
"Execution time: 674.5737869739532 seconds\n"
],
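[
"# Sketch: read a couple of records back from the exported file to sanity-check the structure\n# (path, orient and compression assumed to match the to_json call above).\n\n# check = pd.read_json('test_data/data_model_2021-01-26.json.gz',\n#                      orient='records', compression='gzip')\n# check.shape, check.columns",
"_____no_output_____"
],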
[
"cois = [141, 142, 143, 144, 145]\nmy_dels = dels[(dels['codon_num'].isin(cois)) & (dels['gene']=='S')][cols]",
"_____no_output_____"
],
[
"my_dels.shape",
"_____no_output_____"
],
[
"(my_dels.groupby(['mutation', 'country'])\n .agg(num_samples=('strain', 'nunique'),\n first_detected=('date', 'min'),\n last_detected=('date', 'max'))\n# .reset_index()\n .sort_values('num_samples', ascending=False))",
"_____no_output_____"
],
[
"my_dels['mutation'].value_counts()",
"_____no_output_____"
],
[
"my_dels['mutation'] = my_dels['mutation'].apply(lambda x: x.split('.')[0])",
"_____no_output_____"
],
[
"counts = (my_dels['mutation']\n .value_counts()\n .to_frame()\n .reset_index()\n .rename(columns={'index': 'deletion', 'mutation': 'num_samples'}))\ncounts['pct_samples'] = counts['num_samples'] / counts['num_samples'].sum()\nfig = go.Figure(go.Bar(\n y=counts['deletion'], x=counts['num_samples'], orientation='h',\n text=counts['pct_samples'],\n textposition='outside'\n ))\n# fig.for_each_xaxis(lambda axis: axis.title.update(font=dict(color = 'blue', size=8)))\nfig.update_traces(texttemplate='%{text:.2p}')\nfig.update_yaxes(title_text=\"Deletion\")\nfig.update_xaxes(title_text=\"Number of Sequences\")\nfig.update_layout(title=f\"[Undesignated]\", \n template='plotly_white', showlegend=False,\n margin={\"r\":0})\nfig.write_html('s14x_deletion_histogram.html')\nfig.show()",
"_____no_output_____"
],
[
"(my_dels.loc[(my_dels['mutation']=='S:DEL141/144') \n & (my_dels['country'].str.contains('America'))\n & (my_dels['division']=='California')]\n .groupby(['country', 'division', 'location'])\n .agg(num_samples=('strain', 'nunique'),\n first_detected=('date', 'min'),\n last_detected=('date', 'max'))\n# .reset_index()\n .sort_values('num_samples', ascending=False))",
"_____no_output_____"
],
[
"test['absolute_coords']",
"_____no_output_____"
],
[
"test.loc[test['location']=='NA', 'location'] = np.nan\ntest.loc[test['location'].isna(), 'location']",
"/home/al/anaconda3/envs/bjorn/lib/python3.8/site-packages/pandas/core/indexing.py:1719: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self._setitem_single_column(loc, value, pi)\n"
],
[
"test[meta_info]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbedcf925f49a5a7084d0bb514d260eea315767 | 40,680 | ipynb | Jupyter Notebook | funnynet-8char.ipynb | thedch/funnynet | 348d732c3f14255078cbbd0de5ee37bfa1bd16cf | [
"MIT"
] | null | null | null | funnynet-8char.ipynb | thedch/funnynet | 348d732c3f14255078cbbd0de5ee37bfa1bd16cf | [
"MIT"
] | null | null | null | funnynet-8char.ipynb | thedch/funnynet | 348d732c3f14255078cbbd0de5ee37bfa1bd16cf | [
"MIT"
] | 1 | 2018-04-16T20:34:02.000Z | 2018-04-16T20:34:02.000Z | 26.043534 | 133 | 0.488963 | [
[
[
"# Put these at the top of every notebook, to get automatic reloading and inline plotting\n%reload_ext autoreload\n%autoreload 2\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Funnynet - 8 Character Model\n\nSpecial thanks to taivop for providing the [dataset](https://github.com/taivop/joke-dataset).\n\nThis notebook is heavily inspired by [fastai NLP work](https://github.com/fastai/fastai/blob/master/courses/dl2/imdb.ipynb).\n\nIn this notebook we build a model which considers the previous eight characters at a time to predict the next.",
"_____no_output_____"
]
],
[
[
"import pdb\nimport json\nfrom pathlib import Path\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math, random\nimport preprocessing as pp\n\n# These libraries require some setup, try the pip install git+https.github.com/... trick\nfrom fastai.io import *\nfrom fastai.conv_learner import *\n\nfrom fastai.column_data import *",
"_____no_output_____"
],
[
"#idx, char_indices, indices_char, chars, vocab_size = pp.save_data_to_pickle(\"data/800000char.pickle\", 800000)\nidx, char_indices, indices_char, chars, vocab_size = pp.save_data_to_pickle(\"data/800000char.pickle\", 800000)\nembeddings_sz = 42\nn_hidden = 256\n\n\ncs = 3\nc1_data = c2_data = c3_data = c4_data = []\nfor i in range(0, len(idx)-cs, cs):\n c1_data.append(idx[i])\n c2_data.append(idx[i+1])\n c3_data.append(idx[i+2])\n c4_data.append(idx[i+3])\nx1 = np.stack(c1_data)\nx2 = np.stack(c2_data)\nx3 = np.stack(c3_data)\ny = np.stack(c4_data)\nmd = ColumnarModelData.from_arrays('.', [-1], np.stack([x1,x2,x3], axis=1), y, bs=512)",
"_____no_output_____"
]
],
[
[
"### Let's create a bigger RNN!",
"_____no_output_____"
]
],
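[
[
"Before building it on the joke text, a tiny illustration (a sketch on a toy string) of the sliding-window idea used in the next cells: every window of `rnn_len` characters becomes one input, and the character right after the window is its target.",
"_____no_output_____"
]
],
[
[
"# toy illustration of the windowing used below (characters instead of indices)\ntoy = 'hello jokes'\ntoy_len = 8\ntoy_inputs = [toy[i:i+toy_len] for i in range(len(toy)-toy_len)]\ntoy_targets = [toy[i+toy_len] for i in range(len(toy)-toy_len)]\nlist(zip(toy_inputs, toy_targets))",
"_____no_output_____"
]
],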
[
[
"rnn_len=8",
"_____no_output_____"
],
[
"# char_in_jokes = [[idx[i+j] for i in range(rnn_len)] for j in range(len(idx)-rnn_len)]\n\nchar_input = []\nfor j in range(len(idx)-rnn_len):\n tmp = []\n for i in range(rnn_len):\n tmp.append(idx[i+j])\n char_input.append(tmp)",
"_____no_output_____"
],
[
"char_output = []\nfor j in range(len(idx)-rnn_len):\n char_output.append(idx[j+rnn_len])",
"_____no_output_____"
],
[
"len(char_input)",
"_____no_output_____"
],
[
"xs = np.stack(char_input, axis=0)",
"_____no_output_____"
],
[
"xs.shape",
"_____no_output_____"
],
[
"y = np.stack(char_output)\ny.shape",
"_____no_output_____"
],
[
"xs[:rnn_len,:rnn_len]",
"_____no_output_____"
],
[
"len(y[:rnn_len])",
"_____no_output_____"
],
[
"val_idx = get_cv_idxs(len(idx)-rnn_len-1)\n# val_idx.shape\nmodel_data = ColumnarModelData.from_arrays('.', val_idx, xs, y, bs=512)",
"_____no_output_____"
],
[
"class CharLoopModel(nn.Module):\n def __init__(self, vocab_size, embeddings_sz):\n super().__init__()\n self.e = nn.Embedding(vocab_size, embeddings_sz)\n self.l_in = nn.Linear(embeddings_sz, n_hidden)\n self.l_hidden = nn.Linear(n_hidden, n_hidden)\n self.l_out = nn.Linear(n_hidden, vocab_size)\n \n def forward(self, *rnn_len):\n# pdb.set_trace()\n bs = rnn_len[0].size(0)\n h = V(torch.zeros(bs, n_hidden).cuda())\n for c in rnn_len:\n inp = F.relu(self.l_in(self.e(c)))\n h = F.tanh(self.l_hidden(h+inp))\n \n return F.log_softmax(self.l_out(h), dim=-1)",
"_____no_output_____"
],
[
"model = CharLoopModel(vocab_size, embeddings_sz).cuda()\nopt = optim.Adam(model.parameters(), 1e-2)",
"_____no_output_____"
],
[
"fit(model, model_data, n_epochs=1, opt=opt, crit=F.nll_loss)",
"_____no_output_____"
],
[
"set_lrs(opt, 0.001)",
"_____no_output_____"
],
[
"fit(model, model_data, n_epochs=1, opt=opt, crit=F.nll_loss)",
"_____no_output_____"
],
[
"def get_next(inp):\n idxs = T(np.array([char_indices[c] for c in inp]))\n p = model(*VV(idxs))\n i = np.argmax(to_np(p))\n return chars[i]",
"_____no_output_____"
],
[
"get_next('for thos')",
"_____no_output_____"
],
[
"get_next(' a blond')",
"_____no_output_____"
],
[
"get_next('into a b')",
"_____no_output_____"
],
[
"def get_next_n(inp, n):\n res = inp\n for i in range(n):\n c = get_next(inp)\n res += c\n inp = inp[1:]+c\n return res",
"_____no_output_____"
],
[
"get_next_n('into a b', 40)",
"_____no_output_____"
],
[
"class CharLoopConcatModel(nn.Module):\n def __init__(self, vocab_size, embeddings_sz):\n super().__init__()\n self.e = nn.Embedding(vocab_size, embeddings_sz)\n self.l_in = nn.Linear(embeddings_sz+n_hidden, n_hidden)\n self.l_hidden = nn.Linear(n_hidden, n_hidden)\n self.l_out = nn.Linear(n_hidden, vocab_size)\n \n def forward(self, *rnn_len):\n bs = rnn_len[0].size(0)\n h = V(torch.zeros(bs, n_hidden).cuda())\n for c in rnn_len:\n inp = torch.cat((h, self.e(c)), 1)\n inp = F.relu(self.l_in(inp))\n h = F.tanh(self.l_hidden(inp))\n \n return F.log_softmax(self.l_out(h), dim=-1)",
"_____no_output_____"
],
[
"m = CharLoopConcatModel(vocab_size, embeddings_sz).cuda()\nopt = optim.Adam(m.parameters(), 1e-3)",
"_____no_output_____"
],
[
"it = iter(md.trn_dl)\n*xs,yt = next(it)\nt = m(*V(xs))",
"_____no_output_____"
],
[
"fit(m, md, 1, opt, F.nll_loss)",
"_____no_output_____"
],
[
"set_lrs(opt, 1e-4)",
"_____no_output_____"
],
[
"fit(m, md, 1, opt, F.nll_loss)",
"_____no_output_____"
],
[
"def get_next(inp):\n idxs = T(np.array([char_indices[c] for c in inp]))\n p = m(*VV(idxs))\n i = np.argmax(to_np(p))\n return chars[i]",
"_____no_output_____"
],
[
"get_next('wome')",
"_____no_output_____"
],
[
"get_next('beca')",
"_____no_output_____"
],
[
"get_next('char')",
"_____no_output_____"
]
],
[
[
"### Model with Multiple Outputs",
"_____no_output_____"
]
],
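[
[
"In this section the model predicts an output at every timestep: the target sequence is the input sequence shifted one character to the right. A small sketch of that offset on a toy string:",
"_____no_output_____"
]
],
[
[
"# toy illustration of the one-character offset between input and target sequences\ntoy = 'why did the chicken'\nseq_len = 8\ntoy_in = [toy[j:j+seq_len] for j in range(0, len(toy)-seq_len-1, seq_len)]\ntoy_out = [toy[j+1:j+seq_len+1] for j in range(0, len(toy)-seq_len-1, seq_len)]\nlist(zip(toy_in, toy_out))",
"_____no_output_____"
]
],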
[
[
"class CharRnn(nn.Module):\n def __init__(self, vocab_size, embeddings_sz):\n super().__init__()\n self.e = nn.Embedding(vocab_size, embeddings_sz)\n self.rnn = nn.RNN(embeddings_sz, n_hidden)\n self.l_out = nn.Linear(n_hidden, vocab_size)\n \n def forward(self, *rnn_len):\n #print(\"rnn_len: \"+str(rnn_len))\n bs = rnn_len[0].size(0)\n h = V(torch.zeros(1, bs, n_hidden))\n inp = self.e(torch.stack(rnn_len))\n outp,h = self.rnn(inp, h)\n \n return F.log_softmax(self.l_out(outp[-1]), dim=-1)",
"_____no_output_____"
],
[
"m = CharRnn(vocab_size, embeddings_sz).cuda()\nopt = optim.Adam(m.parameters(), 1e-3)",
"_____no_output_____"
],
[
"it = iter(md.trn_dl)\n*xs,yt = next(it)",
"_____no_output_____"
],
[
"t = m.e(V(torch.stack(xs)))\nt.size()",
"_____no_output_____"
],
[
"ht = V(torch.zeros(1, 512,n_hidden))\noutp, hn = m.rnn(t, ht)\noutp.size(), hn.size()",
"_____no_output_____"
],
[
"t = m(*V(xs)); t.size()",
"_____no_output_____"
],
[
"fit(m, md, 4, opt, F.nll_loss)",
"_____no_output_____"
],
[
"set_lrs(opt, 1e-4)",
"_____no_output_____"
],
[
"fit(m, md, 2, opt, F.nll_loss)",
"_____no_output_____"
],
[
"def get_next(inp):\n idxs = T(np.array([char_indices[c] for c in inp]))\n p = m(*VV(idxs))\n i = np.argmax(to_np(p))\n return chars[i]",
"_____no_output_____"
],
[
"get_next('for thos')",
"_____no_output_____"
],
[
"def get_next_n(inp, n):\n res = inp\n for i in range(n):\n c = get_next(inp)\n res += c\n inp = inp[1:]+c\n return res",
"_____no_output_____"
],
[
"get_next_n('for thos', 40)",
"_____no_output_____"
],
[
"c_in_dat = [[idx[i+j] for i in range(rnn_len)] for j in range(0, len(idx)-rnn_len-1, rnn_len)]",
"_____no_output_____"
],
[
"c_out_dat = [[idx[i+j] for i in range(rnn_len)] for j in range(1, len(idx)-rnn_len, rnn_len)]",
"_____no_output_____"
],
[
"xs = np.stack(c_in_dat)\nxs.shape",
"_____no_output_____"
],
[
"xs[:rnn_len,:rnn_len]",
"_____no_output_____"
],
[
"ys = np.stack(c_out_dat)\nys.shape",
"_____no_output_____"
],
[
"ys[:rnn_len,:rnn_len]",
"_____no_output_____"
],
[
"val_idx = get_cv_idxs(len(xs)-rnn_len-1)",
"_____no_output_____"
],
[
"md = ColumnarModelData.from_arrays('.', val_idx, xs, ys, bs=512)",
"_____no_output_____"
],
[
"class CharSeqRnn(nn.Module):\n def __init__(self, vocab_size, embeddings_sz):\n super().__init__()\n self.e = nn.Embedding(vocab_size, embeddings_sz)\n self.rnn = nn.RNN(embeddings_sz, n_hidden)\n self.l_out = nn.Linear(n_hidden, vocab_size)\n \n def forward(self, *rnn_len):\n bs = rnn_len[0].size(0)\n h = V(torch.zeros(1, bs, n_hidden))\n inp = self.e(torch.stack(rnn_len))\n outp,h = self.rnn(inp, h)\n return F.log_softmax(self.l_out(outp), dim=-1)",
"_____no_output_____"
],
[
"m = CharSeqRnn(vocab_size, embeddings_sz).cuda()\nopt = optim.Adam(m.parameters(), 1e-3)",
"_____no_output_____"
],
[
"it = iter(md.trn_dl)\n*xst,yt = next(it)\n#t = m(*V(xs))",
"_____no_output_____"
],
[
"def nll_loss_seq(inp, targ):\n sl,bs,nh = inp.size()\n targ = targ.transpose(0,1).contiguous().view(-1)\n return F.nll_loss(inp.view(-1,nh), targ)",
"_____no_output_____"
],
[
"fit(m, md, 4, opt, nll_loss_seq)",
"_____no_output_____"
],
[
"set_lrs(opt, 1e-4)",
"_____no_output_____"
],
[
"fit(m, md, 1, opt, nll_loss_seq)",
"_____no_output_____"
],
[
"m = CharSeqRnn(vocab_size, embeddings_sz).cuda()\nopt = optim.Adam(m.parameters(), 1e-2)",
"_____no_output_____"
],
[
"m.rnn.weight_hh_l0.data.copy_(torch.eye(n_hidden))",
"_____no_output_____"
],
[
"fit(m, md, 4, opt, nll_loss_seq)",
"_____no_output_____"
],
[
"set_lrs(opt, 1e-3)",
"_____no_output_____"
],
[
"fit(m, md, 4, opt, nll_loss_seq)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbeed3a585e5d17581765bd6b6add6dfcde484d | 142,698 | ipynb | Jupyter Notebook | Notebooks/4_Multiclass_classification.ipynb | Isabelle-Dr/MidTerm-Project | 93f654aca8c6ddf22c84ee143a4fe64951434b0e | [
"MIT"
] | null | null | null | Notebooks/4_Multiclass_classification.ipynb | Isabelle-Dr/MidTerm-Project | 93f654aca8c6ddf22c84ee143a4fe64951434b0e | [
"MIT"
] | null | null | null | Notebooks/4_Multiclass_classification.ipynb | Isabelle-Dr/MidTerm-Project | 93f654aca8c6ddf22c84ee143a4fe64951434b0e | [
"MIT"
] | null | null | null | 135.258768 | 38,644 | 0.862002 | [
[
[
"**INPUTS** \ndb_multiclass_sample.csv \\\nflights_scaled.csv \\\n**OUTPUTS**\\\ndb_multiclass_data.csv\\\nMulticlass.sav",
"_____no_output_____"
],
[
"# 2. Multiclass classification\n**Goal:** predict the type of delay for delayed flights \n\n**Description**: This notebooks was used to create a ML algorythm that predicts the type of delay on delayed flights (arrival delays >= 15min). The input data comes from the file *flights_scaled.csv* that had the following transformations\n- drop rows where `DELAY_TYPE` (value is 0 for flights <15min)\n- drop column `ARR_DELAY` \n- drop column ` CANCELLED` \n\n**Target variable:**\\\n`DELAY_TYPE`\n\nThis feature contaings values from 1 to 5 corresponding to the following delay types and representtions in the dataset:\n- 1: `CARRIER_DELAY` 26.7%\n- 2: `WEATHER_DELAY` 3.3%\n- 3: `NAS_DELAY` 30%\n- 4: `SECURITY_DELAY` 0.17%\n- 5: `LATE_AIRCRAFT_DELAY` 39%\n\n**Notes**: \nWe can use a custom threshold on predicting probabilities. \nWe could look at an oversampler and Undersampler method: SMOTE combined with RandomUnderSampler on the training datasets in this step [Article on how to perform SMOTE](https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/) \nWe could a [calibration algo on the predicted prbobabilities](https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/) \n\n**Steps**:\n1. Pick evaluation metrics and models to try\n2. Spot check different algos based on **a sample data** using cross validate\n3. Select a model\n4. Split training/testing data on **the whole dataset**\n5. Tune hyperparameters on selected algo on **training data** using grid search\n6. Train selected algo & parameters on the **training data**\n7. Test performance on **testing data**\n8. Save with pickle",
"_____no_output_____"
],
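[
"*Editor's sketch (not part of the original workflow):* the Notes above mention SMOTE combined with RandomUnderSampler. A minimal, untuned sketch of that idea is shown below; it assumes the imbalanced-learn package is installed and uses the training split `X_train`/`y_train` created in step 4. The default sampling strategies are placeholders and would need tuning for the five delay classes.\n\n```python\nfrom imblearn.over_sampling import SMOTE\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom imblearn.pipeline import Pipeline\n\n# oversample the minority delay types, then undersample the majority class\nresampler = Pipeline(steps=[('over', SMOTE(random_state=42)),\n                            ('under', RandomUnderSampler(random_state=42))])\nX_res, y_res = resampler.fit_resample(X_train, y_train)\nprint(pd.Series(y_res).value_counts())\n```",
"_____no_output_____"
],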
[
"## 1. Pick evaluation metrics and models to try\n**Evaluation metrics for multi-class classification** \n- F1 score (weighted) \n- Balanced accuracy \n- AUC score\n\n**Models to try**\n- Naive algorithm (used as a baseline model): \nPredict the majority class for all\n\n- Linear algorithms: \nLogistic Regression \nLDA \nPolynomial regression \n\n- Support Vector Machines algorithms: \nSupport Vector Classification\n\n- Ensemble algorithms: \nRandom forest \nStochastic Gradient Boosting \nXGBoost",
"_____no_output_____"
],
[
"## 2. Spot check different algos based on **sample data** using cross validate",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"# get data from the samples made with flights_scaled (10 000 data points)\ndata_sample = pd.read_csv('db_multiclass_sample.csv', index_col = 0)\nprint(data_sample.shape)\nprint(data_sample.columns)",
"(1723, 18)\nIndex(['branded_code_share', 'origin', 'dest', 'crs_dep_time', 'crs_arr_time',\n 'crs_elapsed_time', 'air_time', 'distance', 'fl_month',\n 'fl_day_of_week', 'fl_week_of_month', 'mkt_op_combo', 'fl_type',\n 'm_hist_dep_delay', 'med_hist_dep_delay', 'm_hist_arr_delay',\n 'med_hist_arr_delay', 'delay_type'],\n dtype='object')\n"
],
[
"# get class representations from sample (the ones listed in the intro are from the whole dataframe)\nclass_names = {0: 'unknown or unreported cause', 1: 'carrier_delay', 2 :'weather_delay', 3:'nas_delay', 4:'security_delay', 5: 'late_aircraft_delay'}\nfor c in data_sample.delay_type.unique():\n print(f'{c} ({class_names[c]}): {len(data_sample[data_sample.delay_type == c])} values, {round(len(data_sample[data_sample.delay_type == c])/len(data_sample)*100,2)} %')",
"3 (nas_delay): 522 values, 30.3 %\n1 (carrier_delay): 441 values, 25.59 %\n5 (late_aircraft_delay): 696 values, 40.39 %\n2 (weather_delay): 60 values, 3.48 %\n4 (security_delay): 4 values, 0.23 %\n"
],
[
"y_sample = data_sample.delay_type\nX_sample = data_sample.drop('delay_type', axis = 1)\nprint(y_sample.shape)\nprint(X_sample.shape)",
"(1723,)\n(1723, 17)\n"
],
[
"# this function creates two dataframe outputs: AVG scoring per model and STD scoring per model\n# make sure you import the models outside of this function and that the scorings are supported by them\ndef perform_cross_validate(models, X, y, scoring, n_folds, plot = False):\n \n # import cross validate\n from sklearn.model_selection import cross_validate\n import pandas as pd\n import numpy as np\n \n # create dataframe\n results_mean = pd.DataFrame(columns = list(models.keys()), index=list(scoring))\n results_std = pd.DataFrame(columns = list(models.keys()), index=list(scoring))\n \n\n # perform cross validate and store desults in the dataframe\n for key in models:\n model_name = key\n model = models[key] # this is the model's placeholder\n cv = cross_validate(estimator=model, X=X, y=y, cv=n_folds, scoring=scoring)\n \n # adds values for each scoring in the dataframes\n results_mean[model_name] = [cv['test_'+scoring[i]].mean() for i in range(len(scoring))]\n results_std[model_name] = [cv['test_'+scoring[i]].std() for i in range(len(scoring))]\n \n \n if plot:\n #setup\n to_plot = {0:311, 1:312, 2:313, 3:314, 4:315, 5:316}\n import matplotlib.pyplot as plt\n %matplotlib inline\n from matplotlib.pylab import rcParams\n rcParams['figure.figsize'] = 20, 15\n \n # plot\n for i, score in enumerate(scoring):\n plt.subplot(to_plot[i])\n plt.bar(list(results_mean.columns), results_mean.iloc[i].values)\n plt.title(f'Plot for: {score}')\n \n return results_mean, results_std",
"_____no_output_____"
],
[
"# import models to try\nfrom sklearn.dummy import DummyClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.svm import SVC\nimport xgboost as xgb",
"_____no_output_____"
]
],
[
[
"### Cross validate using db_multiclass_sample, 3 folds",
"_____no_output_____"
]
],
[
[
"# params\nmodels = {\n 'Dummy Classifier': DummyClassifier(strategy = \"most_frequent\"), \\\n 'Logistic Regression': LogisticRegression(max_iter = 10000), \\\n 'LDA': LinearDiscriminantAnalysis(), \\\n 'Polynomial Regression': make_pipeline(PolynomialFeatures(2),LogisticRegression(max_iter = 10000)), \\\n 'SVC': SVC(), \\\n 'Random Forest': RandomForestClassifier(), \\\n 'Gradient Boosting': GradientBoostingClassifier(), \\\n 'XGBoost': xgb.XGBClassifier(scale_pos_weight=100) \\\n }\n\nscoring = ('balanced_accuracy', 'f1_weighted')\n\nresults_mean, results_std = perform_cross_validate(models=models, X=X_sample, y=y_sample, scoring=scoring, n_folds=3, plot=True)",
"_____no_output_____"
],
[
"results_mean",
"_____no_output_____"
]
],
[
[
"## 3. Select a model\n**LDA and Logistic Regression** are performing well here; we will tune the hyperparameters for both and compare them.",
"_____no_output_____"
],
[
"## 4. Split training and testing data on **the whole dataset**",
"_____no_output_____"
]
],
[
[
"# get data, the whole dataset that has been transformed and scaled\n\ndata = pd.read_csv('flights_scaled.csv', low_memory = False) # change this when I get the data\nprint(data.shape)",
"(15768083, 20)\n"
],
[
"# drop target variables used in other models and keep only flights with a reported delay type (arrival delay >= 15min)\n\ndata = data[data['delay_type'] != 0].drop(['arr_delay', 'cancelled'], axis=1)",
"_____no_output_____"
],
[
"# get class representations for the whole dataset (the intro lists similar percentages)\n\nclass_names = {0: 'unknown or unreported cause', 1: 'carrier_delay', 2 :'weather_delay', 3:'nas_delay', 4:'security_delay', 5: 'late_aircraft_delay'}\nfor c in data.delay_type.unique():\n    print(f'{c} ({class_names[c]}): {len(data[data.delay_type == c])} values, {round(len(data[data.delay_type == c])/len(data)*100,2)} %')",
"5 (late_aircraft_delay): 1135502 values, 39.19 %\n1 (carrier_delay): 763726 values, 26.36 %\n3 (nas_delay): 902057 values, 31.13 %\n2 (weather_delay): 91120 values, 3.14 %\n4 (security_delay): 5039 values, 0.17 %\n"
],
[
"# export the file used for the model \ndata.to_csv('db_multiclass_data.csv')",
"_____no_output_____"
],
[
"# Split between the target variable and the explanatory variables\n\ny = data.delay_type\nX = data.drop('delay_type', axis = 1)\nprint(y.shape)\nprint(X.shape)",
"(2897444,)\n(2897444, 17)\n"
],
[
"# split training and testing data set\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split (X, y, test_size=0.20, random_state=42)",
"_____no_output_____"
],
[
"print(y_train.shape)\nprint(X_train.shape)\nprint(y_test.shape)\nprint(X_test.shape)",
"(2317955,)\n(2317955, 17)\n(579489,)\n(579489, 17)\n"
]
],
[
[
"<h1><center>LDA</center></h1>",
"_____no_output_____"
],
[
"## 5. Tune hyperparameters on selected algo on **training data** using grid search",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import GridSearchCV",
"_____no_output_____"
],
[
"# samples used to try code\nX_train_sample, X_test_sample, y_train_sample, y_test_sample = train_test_split (X_sample, y_sample, test_size=0.20, random_state=42)",
"_____no_output_____"
],
[
"# try running model - on sample\n\nfrom sklearn.metrics import f1_score # evaluation metric\nfrom sklearn.metrics import balanced_accuracy_score # evaluation metric\nfrom sklearn.metrics import roc_auc_score # evaluation metric\n\nmodel = LinearDiscriminantAnalysis() # placeholder\nmodel.fit(X_train_sample, y_train_sample) # fit\ny_pred = model.predict(X_test_sample) # predict\ny_pred_prob = model.predict_proba(X_test_sample) # predict probabilities\n\nf1 = f1_score(y_test_sample, y_pred, average='weighted')\naccuracy_balanced = balanced_accuracy_score(y_test_sample, y_pred)\nauc = roc_auc_score(y_test_sample, y_pred_prob, multi_class = 'ovr')\n\nprint(f'f1: {f1}\\n balanced accuracy: {accuracy_balanced} \\n auc: {auc}')",
"f1: 0.43241964680435846\n balanced accuracy: 0.25920617386234623 \n auc: 0.7040026827609015\n"
],
[
"# try running model - on whole dataset - 2min\n\nfrom sklearn.metrics import f1_score # evaluation metric\nfrom sklearn.metrics import balanced_accuracy_score # evaluation metric\nfrom sklearn.metrics import roc_auc_score # evaluation metric\n\nmodel = LinearDiscriminantAnalysis() # placeholder\nmodel.fit(X_train, y_train) # fit\ny_pred = model.predict(X_test) # predict\ny_pred_prob = model.predict_proba(X_test) # predict probabilities\n\nf1 = f1_score(y_test, y_pred, average='weighted')\naccuracy_balanced = balanced_accuracy_score(y_test, y_pred)\nauc = roc_auc_score(y_test, y_pred_prob, multi_class = 'ovr')\n\nprint(f'f1: {f1}\\n balanced accuracy: {accuracy_balanced} \\n auc: {auc}')",
"f1: 0.45908662278743256\n balanced accuracy: 0.2845439066062655 \n auc: 0.6509309140548126\n"
],
[
"# perform grid search on whole dataset\n# this takes 10 min\n\nfrom sklearn.metrics import classification_report\n\n# params for grid search\nparams = [\n {\"solver\": ['svd', 'lsqr', 'eigen'],\n \"n_components\" : [ 3, 4, 5, 6] }]\n\nmodel = LinearDiscriminantAnalysis()\ngsc = GridSearchCV(estimator=model, param_grid=params, n_jobs=-1)\n\ngsc.fit(X_train, y_train)\n\nprint(gsc.best_params_)\nbest_model = gsc.best_estimator_\nprint(best_model)",
"{'n_components': 3, 'solver': 'svd'}\nLinearDiscriminantAnalysis(n_components=3)\n"
]
],
[
[
"## 6. Train selected algo & parameters on the **training data**",
"_____no_output_____"
]
],
[
[
"# train the model with the best parameters from the grid search (the grid search above was run on the training data)\n\nmodel_LDA = best_model\nmodel_LDA.fit(X_train, y_train) # fit",
"_____no_output_____"
]
],
[
[
"## 7. Test performance on **testing data**",
"_____no_output_____"
]
],
[
[
"y_pred = model_LDA.predict(X_test) # predict\ny_pred_prob = model_LDA.predict_proba(X_test) # predict probabilities\n\nf1 = f1_score(y_test, y_pred, average='weighted')\naccuracy_balanced = balanced_accuracy_score(y_test, y_pred)\nauc = roc_auc_score(y_test, y_pred_prob, multi_class = 'ovr')\n\nprint(f'f1: {f1}\\n balanced accuracy: {accuracy_balanced} \\n auc: {auc}')",
"f1: 0.45908662278743256\n balanced accuracy: 0.2845439066062655 \n auc: 0.6509309140548126\n"
],
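[
"# Editor's sketch (not part of the original notebook): the intro Notes mention using a\n# custom threshold on the predicted probabilities. The 0.5 cut-off below is an arbitrary\n# illustrative value, not a tuned parameter.\nthreshold = 0.5\nclasses = model_LDA.classes_\nmax_prob = y_pred_prob.max(axis=1)\ny_pred_conf = classes[y_pred_prob.argmax(axis=1)]\n# fall back to the majority class (5 = late_aircraft_delay) when the model is not confident\ny_pred_conf = np.where(max_prob >= threshold, y_pred_conf, 5)\nprint(f'share of test predictions above the {threshold} threshold: {np.mean(max_prob >= threshold):.2%}')\n# (the calibration idea from the Notes could be explored with sklearn.calibration.CalibratedClassifierCV)",
"_____no_output_____"
],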
[
"# get class predicted by the model\npd.DataFrame(y_pred)[0].unique()",
"_____no_output_____"
]
],
[
[
"### Notes\nThe model only predicts classes 3, 5, 1 (representing 95% of the delays) \n- nas_delay\n- late_aircraft_delay\n- carrier_delay",
"_____no_output_____"
]
],
[
[
"# print confusion matrix\n\n# imports\nfrom sklearn.metrics import confusion_matrix\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nfrom matplotlib.pylab import rcParams\nrcParams['figure.figsize'] = 10, 5\n\n# confusion matrix\ncnf_matrix_LDA = confusion_matrix(y_test, y_pred)\n\n# create heatmap\nfig, ax = plt.subplots()\nsns.heatmap(pd.DataFrame(cnf_matrix_LDA), annot=True, cmap=\"YlGnBu\" ,fmt='g')\n\n# custom tick labels, ordered to match the sorted class labels 1-5\nclass_names=['carrier_delay','weather_delay','nas_delay','security_delay','late_aircraft_delay']\ntick_marks = np.arange(len(class_names))\nplt.xticks(tick_marks, class_names)\nplt.yticks(tick_marks, class_names)\nplt.yticks(rotation=0)\nplt.title('Confusion matrix with LDA', y=1.1)\nplt.ylabel('Actual label')\nplt.xlabel('Predicted label')\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"### Notes: \nIt looks like the confusion matrix doesn't take into account the whole dataset. Let's confirm by looking at the values in y_pred, y_test and the ones given by the matrix.",
"_____no_output_____"
]
],
[
[
"# length of y_pred and y_test\nprint(len(y_pred))\nprint(len(y_test))",
"579489\n579489\n"
],
[
"cnf_matrix_LDA[:,0].sum()",
"_____no_output_____"
]
],
[
[
"<h1><center>Logistic Regression</center></h1>",
"_____no_output_____"
],
[
"## 5. Tune hyperparameters on selected algo on **training data** using grid search",
"_____no_output_____"
]
],
[
[
"params = [\n {\"penalty\": ['l2', 'none'],\n \"C\": [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]}]\n\nmodel = LogisticRegression(max_iter = 10000)\ngsc = GridSearchCV(estimator=LogisticRegression(max_iter = 10000), param_grid=params, n_jobs=-1)\n\ngsc.fit(X_train_sample, y_train_sample)\n\nprint(gsc.best_params_)\nbest_model = gsc.best_estimator_\nprint(best_model)",
"C:\\Users\\derob\\anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_split.py:672: UserWarning: The least populated class in y has only 3 members, which is less than n_splits=5.\n % (min_groups, self.n_splits)), UserWarning)\n"
]
],
[
[
"## 6. Train selected algo & parameters on the **training data**",
"_____no_output_____"
]
],
[
[
"model_lr = best_model\nmodel_lr.fit(X_train, y_train) # fit",
"_____no_output_____"
]
],
[
[
"## 7. Test performance on **testing data**",
"_____no_output_____"
]
],
[
[
"y_pred = model_lr.predict(X_test) # predict\ny_pred_prob = model_lr.predict_proba(X_test) # predict probabilities\n\nf1 = f1_score(y_test, y_pred, average='weighted')\naccuracy_balanced = balanced_accuracy_score(y_test, y_pred)\nauc = roc_auc_score(y_test, y_pred_prob, multi_class = 'ovr')\n\nprint(f'f1: {f1}\\n balanced accuracy: {accuracy_balanced} \\n auc: {auc}')",
"f1: 0.4582718517542493\n balanced accuracy: 0.2843076957680032 \n auc: 0.6517652714817764\n"
],
[
"# print confusion matrix\n\ncnf_matrix_lr = confusion_matrix(y_test, y_pred)\n\n# create heatmap\nfig, ax = plt.subplots()\nsns.heatmap(pd.DataFrame(cnf_matrix_lr), annot=True, cmap=\"YlGnBu\" ,fmt='g')\n\n# custom\nclass_names=['carrier_delay','weather_delay','nas_delay','security_delay','late_aircraft_delay']\ntick_marks = np.arange(len(class_names))\nplt.xticks(tick_marks, class_names)\nplt.yticks(tick_marks, class_names)\nplt.yticks(rotation=0)\nplt.title('Confusion matrix with Logistic Regression', y=1.1)\nplt.ylabel('Actual label')\nplt.xlabel('Predicted label')\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## 8. Save with pickle\nSaving LDA model",
"_____no_output_____"
]
],
[
[
"import pickle",
"_____no_output_____"
],
[
"filename = 'Multiclass.sav'\npickle.dump(model_LDA, open(filename, 'wb'))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecbef479975611f34adf41567a4b868a0a92def7 | 42,861 | ipynb | Jupyter Notebook | Diabetes/Algorithms after feature selection.ipynb | NortheasternUniversityADS/Final-Project | f88e4f01fe6e6abe8290f3ac311b221e100530bd | [
"Apache-2.0"
] | null | null | null | Diabetes/Algorithms after feature selection.ipynb | NortheasternUniversityADS/Final-Project | f88e4f01fe6e6abe8290f3ac311b221e100530bd | [
"Apache-2.0"
] | null | null | null | Diabetes/Algorithms after feature selection.ipynb | NortheasternUniversityADS/Final-Project | f88e4f01fe6e6abe8290f3ac311b221e100530bd | [
"Apache-2.0"
] | 1 | 2018-04-08T22:49:50.000Z | 2018-04-08T22:49:50.000Z | 28.593062 | 438 | 0.470941 | [
[
[
"# Algorithms after feature selection",
"_____no_output_____"
]
],
[
[
"## Import the required libraries\nimport pandas as pd\nimport numpy as np\nimport imblearn\nfrom imblearn.pipeline import make_pipeline as make_pipeline_imbfinal \nfrom imblearn.over_sampling import SMOTE\nfrom imblearn.metrics import classification_report_imbalanced\nfrom sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, accuracy_score, classification_report\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import BernoulliNB \n#from sklearn import svm\nfrom sklearn.metrics import *\nimport pickle\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n"
],
[
"df = pd.read_csv('diabetes1.csv')\ndf.head()",
"_____no_output_____"
],
[
"col_list=['Pregnancies','Glucose','BloodPressure','SkinThickness','BMI','DiabetesPedigreeFunction','Age']\ndf_train,df_test = train_test_split(df,train_size=0.7,random_state=42)\nx_train=df_train[col_list]\ny_train=df_train['Outcome']\nscaler.fit(x_train)\nx_train_sc=scaler.transform(x_train)\nx_test=df_test[col_list]\ny_test=df_test['Outcome']\nscaler.fit(x_test)\nx_test_sc=scaler.transform(x_test)\nprint(x_train.shape,y_train.shape)",
"(417, 7) (417,)\n"
],
[
"accuracy =[]\nmodel_name =[]\ndataset=[]\nf1score = []\nprecision = []\nrecall = []\ntrue_positive =[]\nfalse_positive =[]\ntrue_negative =[]\nfalse_negative =[]",
"_____no_output_____"
]
],
[
[
"# Logistic Regression",
"_____no_output_____"
]
],
[
[
"logreg = LogisticRegression()\n## fitting the model\nlogreg.fit(x_train_sc, y_train)\nfilename = 'logReg_model.sav'\npickle.dump(logreg,open(filename,'wb'))\nlogreg",
"_____no_output_____"
]
],
[
[
"Training",
"_____no_output_____"
]
],
[
[
"prediction = logreg.predict(x_train_sc)\nf1 = f1_score(y_train, prediction)\np = precision_score(y_train, prediction)\nr = recall_score(y_train, prediction)\na = accuracy_score(y_train, prediction)\ncm = confusion_matrix(y_train, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\ndataset.append('Training')\nmodel_name.append('Logistic Regression')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"Testing",
"_____no_output_____"
]
],
[
[
"prediction = logreg.predict(x_test_sc)\nf1 = f1_score(y_test, prediction)\np = precision_score(y_test, prediction)\nr = recall_score(y_test, prediction)\na = accuracy_score(y_test, prediction)\ncm = confusion_matrix(y_test, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Logistic Regression')\ndataset.append('Testing')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"# Bernoulli Naive Bayes",
"_____no_output_____"
]
],
[
[
"NB = BernoulliNB()\n## fitting the model\nNB.fit(x_train_sc, y_train)\nfilename = 'BernoulliNB_model.sav'\npickle.dump(NB,open(filename,'wb'))\nNB",
"_____no_output_____"
]
],
[
[
"Training",
"_____no_output_____"
]
],
[
[
"prediction = NB.predict(x_train_sc)\nf1 = f1_score(y_train, prediction)\np = precision_score(y_train, prediction)\nr = recall_score(y_train, prediction)\na = accuracy_score(y_train, prediction)\ncm = confusion_matrix(y_train, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Bernoulli Naive Bayes')\ndataset.append('Training')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"Testing",
"_____no_output_____"
]
],
[
[
"prediction = NB.predict(x_test_sc)\nf1 = f1_score(y_test, prediction)\np = precision_score(y_test, prediction)\nr = recall_score(y_test, prediction)\na = accuracy_score(y_test, prediction)\ncm = confusion_matrix(y_test, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Bernoulli Naive Bayes')\ndataset.append('Testing')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"# Random Forest Classifier",
"_____no_output_____"
]
],
[
[
"rfc = RandomForestClassifier(n_estimators=50,random_state=0)\n## fitting the model\nrfc.fit(x_train_sc, y_train)\nfilename = 'RFC_model.sav'\npickle.dump(rfc,open(filename,'wb'))\nrfc",
"_____no_output_____"
]
],
[
[
"Training",
"_____no_output_____"
]
],
[
[
"prediction = rfc.predict(x_train_sc)\nf1 = f1_score(y_train, prediction)\np = precision_score(y_train, prediction)\nr = recall_score(y_train, prediction)\na = accuracy_score(y_train, prediction)\ncm = confusion_matrix(y_train, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Random Forest Classifier')\ndataset.append('Training')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"Testing",
"_____no_output_____"
]
],
[
[
"prediction = rfc.predict(x_test_sc)\nf1 = f1_score(y_test, prediction)\np = precision_score(y_test, prediction)\nr = recall_score(y_test, prediction)\na = accuracy_score(y_test, prediction)\ncm = confusion_matrix(y_test, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Random Forest Classifier')\ndataset.append('Testing')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"# k Nearest Neighbour",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import KNeighborsClassifier\nknn = KNeighborsClassifier(n_neighbors = 2)\nknn.fit(x_train_sc, y_train)\nfilename = 'KNN_model.sav'\npickle.dump(knn,open(filename,'wb'))\nknn",
"_____no_output_____"
]
],
[
[
"Training",
"_____no_output_____"
]
],
[
[
"prediction = knn.predict(x_train_sc)\nf1 = f1_score(y_train, prediction)\np = precision_score(y_train, prediction)\nr = recall_score(y_train, prediction)\na = accuracy_score(y_train, prediction)\ncm = confusion_matrix(y_train, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('k nearest Neighbour')\ndataset.append('Training')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"Testing",
"_____no_output_____"
]
],
[
[
"prediction = knn.predict(x_test_sc)\nf1 = f1_score(y_test, prediction)\np = precision_score(y_test, prediction)\nr = recall_score(y_test, prediction)\na = accuracy_score(y_test, prediction)\ncm = confusion_matrix(y_test, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('k nearest Neighbour')\ndataset.append('Testing')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"# Support Vector Classifier",
"_____no_output_____"
]
],
[
[
"from sklearn.svm import SVC\nsvc = SVC (kernel = 'linear' , C = 0.025 , random_state = 42)\nsvc.fit(x_train_sc, y_train)\nfilename = 'SVC_model.sav'\npickle.dump(svc,open(filename,'wb'))\nsvc",
"_____no_output_____"
]
],
[
[
"Training",
"_____no_output_____"
]
],
[
[
"prediction = svc.predict(x_train_sc)\nf1 = f1_score(y_train, prediction)\np = precision_score(y_train, prediction)\nr = recall_score(y_train, prediction)\na = accuracy_score(y_train, prediction)\ncm = confusion_matrix(y_train, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Support Vector Classifier')\ndataset.append('Training')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"Testing",
"_____no_output_____"
]
],
[
[
"prediction = svc.predict(x_test_sc)\nf1 = f1_score(y_test, prediction)\np = precision_score(y_test, prediction)\nr = recall_score(y_test, prediction)\na = accuracy_score(y_test, prediction)\ncm = confusion_matrix(y_test, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Support Vector Classifier')\ndataset.append('Testing')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"# Stochastic Gradient Descent",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import SGDClassifier\nsgd = SGDClassifier (loss = 'modified_huber' , shuffle = True , random_state = 42)\nsgd.fit(x_train_sc, y_train)\nfilename = 'SGD_model.sav'\npickle.dump(sgd,open(filename,'wb'))\nsgd",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.stochastic_gradient.SGDClassifier'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\n"
]
],
[
[
"Training",
"_____no_output_____"
]
],
[
[
"prediction = sgd.predict(x_train_sc)\nf1 = f1_score(y_train, prediction)\np = precision_score(y_train, prediction)\nr = recall_score(y_train, prediction)\na = accuracy_score(y_train, prediction)\ncm = confusion_matrix(y_train, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Stochastic Gradient Descent')\ndataset.append('Training')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"Testing",
"_____no_output_____"
]
],
[
[
"prediction = sgd.predict(x_test_sc)\nf1 = f1_score(y_test, prediction)\np = precision_score(y_test, prediction)\nr = recall_score(y_test, prediction)\na = accuracy_score(y_test, prediction)\ncm = confusion_matrix(y_test, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Stochastic Gradient Descent')\ndataset.append('Testing')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"# Gaussian Naive Bayes",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import GaussianNB\ngnb = GaussianNB()\ngnb.fit(x_train_sc, y_train)\nfilename = 'GNB_model.sav'\npickle.dump(gnb,open(filename,'wb'))\ngnb",
"_____no_output_____"
]
],
[
[
"Training",
"_____no_output_____"
]
],
[
[
"prediction = gnb.predict(x_train_sc)\nf1 = f1_score(y_train, prediction)\np = precision_score(y_train, prediction)\nr = recall_score(y_train, prediction)\na = accuracy_score(y_train, prediction)\ncm = confusion_matrix(y_train, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Gaussian Naive Bayes')\ndataset.append('Training')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"Testing",
"_____no_output_____"
]
],
[
[
"prediction = gnb.predict(x_test_sc)\nf1 = f1_score(y_test, prediction)\np = precision_score(y_test, prediction)\nr = recall_score(y_test, prediction)\na = accuracy_score(y_test, prediction)\ncm = confusion_matrix(y_test, prediction)\ntp = cm[0][0]\nfp = cm[0][1]\nfn = cm[1][0]\ntn = cm[1][1]\nmodel_name.append('Gaussian Naive Bayes')\ndataset.append('Testing')\nf1score.append(f1)\nprecision.append(p)\nrecall.append(r)\naccuracy.append(a)\ntrue_positive.append(tp) \nfalse_positive.append(fp)\ntrue_negative.append(tn) \nfalse_negative.append(fn)\ncm",
"_____no_output_____"
]
],
[
[
"# Writing summary metrics",
"_____no_output_____"
]
],
[
[
"Summary = model_name,dataset,f1score,precision,recall,accuracy,true_positive,false_positive,true_negative,false_negative\n#Summary",
"_____no_output_____"
],
[
"## Making a dataframe of the accuracy and error metrics\ndescribe1 = pd.DataFrame(Summary[0],columns = {\"Model_Name \"})\ndescribe2 = pd.DataFrame(Summary[1],columns = {\"Dataset\"})\ndescribe3 = pd.DataFrame(Summary[2],columns = {\"F1_score\"})\ndescribe4 = pd.DataFrame(Summary[3],columns = {\"Precision_score\"})\ndescribe5 = pd.DataFrame(Summary[4],columns = {\"Recall_score\"})\ndescribe6 = pd.DataFrame(Summary[5], columns ={\"Accuracy_score\"})\ndescribe7 = pd.DataFrame(Summary[6], columns ={\"True_Positive\"})\ndescribe8 = pd.DataFrame(Summary[7], columns ={\"False_Positive\"})\ndescribe9 = pd.DataFrame(Summary[8], columns ={\"True_Negative\"})\ndescribe10 = pd.DataFrame(Summary[9], columns ={\"False_Negative\"})\ndes = describe1.merge(describe2, left_index=True, right_index=True, how='inner')\ndes = des.merge(describe3,left_index=True, right_index=True, how='inner')\ndes = des.merge(describe4,left_index=True, right_index=True, how='inner')\ndes = des.merge(describe5,left_index=True, right_index=True, how='inner')\ndes = des.merge(describe6,left_index=True, right_index=True, how='inner')\ndes = des.merge(describe7,left_index=True, right_index=True, how='inner')\ndes = des.merge(describe8,left_index=True, right_index=True, how='inner')\ndes = des.merge(describe9,left_index=True, right_index=True, how='inner')\ndes = des.merge(describe10,left_index=True, right_index=True, how='inner')\n#des = des.merge(describe9,left_index=True, right_index=True, how='inner')\nSummary_csv = des.sort_values(ascending=True,by=\"False_Negative\").reset_index(drop = True)\nSummary_csv",
"_____no_output_____"
],
[
"Summary_csv.to_csv('Summary.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecbef8ad3b7c08c7ca52c95e00ab0379da605368 | 81,413 | ipynb | Jupyter Notebook | Python-Data-Science-Handbook/notebooks/04.05-Histograms-and-Binnings.ipynb | Little-Potato-1990/learn_python | 9e54d150ef73e4bf53f8cd9b28a2a8bc65593fe1 | [
"Apache-2.0"
] | null | null | null | Python-Data-Science-Handbook/notebooks/04.05-Histograms-and-Binnings.ipynb | Little-Potato-1990/learn_python | 9e54d150ef73e4bf53f8cd9b28a2a8bc65593fe1 | [
"Apache-2.0"
] | null | null | null | Python-Data-Science-Handbook/notebooks/04.05-Histograms-and-Binnings.ipynb | Little-Potato-1990/learn_python | 9e54d150ef73e4bf53f8cd9b28a2a8bc65593fe1 | [
"Apache-2.0"
] | 1 | 2022-01-14T13:18:51.000Z | 2022-01-14T13:18:51.000Z | 196.175904 | 29,848 | 0.910813 | [
[
[
"<!--NAVIGATION-->\n< [密度和轮廓图](04.04-Density-and-Contour-Plots.ipynb) | [目录](Index.ipynb) | [自定义图表图例](04.06-Customizing-Legends.ipynb) >\n\n<a href=\"https://colab.research.google.com/github/wangyingsm/Python-Data-Science-Handbook/blob/master/notebooks/04.05-Histograms-and-Binnings.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\"></a>\n",
"_____no_output_____"
],
[
"# Histograms, Binnings, and Density\n\n# 直方图,分桶和密度",
"_____no_output_____"
],
[
"> A simple histogram can be a great first step in understanding a dataset.\nEarlier, we saw a preview of Matplotlib's histogram function (see [Comparisons, Masks, and Boolean Logic](02.06-Boolean-Arrays-and-Masks.ipynb)), which creates a basic histogram in one line, once the normal boiler-plate imports are done:\n\n一个简单的直方图可以是我们开始理解数据集的第一步。前面我们看到了Matplotlib的直方图函数(参见[比较,遮盖和布尔逻辑](02.06-Boolean-Arrays-and-Masks.ipynb)),我们可以用一行代码绘制基础的直方图,当然首先需要将需要用的包导入notebook:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-white')\n\ndata = np.random.randn(1000)",
"_____no_output_____"
],
[
"plt.hist(data);",
"_____no_output_____"
]
],
[
[
"> The ``hist()`` function has many options to tune both the calculation and the display; \nhere's an example of a more customized histogram:\n\n`hist()`函数有很多的参数可以用来调整运算和展示;下面又一个更加个性化的直方图展示:\n\n译者注:normed参数已经过时,此处对代码进行了相应修改,使用了替代的density参数。",
"_____no_output_____"
]
],
[
[
"plt.hist(data, bins=30, density=True, alpha=0.5,\n histtype='stepfilled', color='steelblue',\n edgecolor='none');",
"_____no_output_____"
]
],
[
[
"> The ``plt.hist`` docstring has more information on other customization options available.\nI find this combination of ``histtype='stepfilled'`` along with some transparency ``alpha`` to be very useful when comparing histograms of several distributions:\n\n`plt.hist`文档中有更多关于个性化参数的信息。作者发现联合使用`histtype='stepfilled'`和`alpha`参数设置透明度在对不同分布的数据集进行比较展示时很有用:",
"_____no_output_____"
]
],
[
[
"x1 = np.random.normal(0, 0.8, 1000)\nx2 = np.random.normal(-2, 1, 1000)\nx3 = np.random.normal(3, 2, 1000)\n\nkwargs = dict(histtype='stepfilled', alpha=0.3, density=True, bins=40)\n\nplt.hist(x1, **kwargs)\nplt.hist(x2, **kwargs)\nplt.hist(x3, **kwargs);",
"_____no_output_____"
]
],
[
[
"> If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the ``np.histogram()`` function is available:\n\n如果你只是需要计算直方图的数值(即每个桶的数据点数量)而不是展示图像,`np.histogram()`函数可以完成这个目标:",
"_____no_output_____"
]
],
[
[
"counts, bin_edges = np.histogram(data, bins=5)\nprint(counts)",
"[ 23 192 446 295 44]\n"
]
],
[
[
"## Two-Dimensional Histograms and Binnings\n\n## 二维直方图和分桶\n\n> Just as we create histograms in one dimension by dividing the number-line into bins, we can also create histograms in two-dimensions by dividing points among two-dimensional bins.\nWe'll take a brief look at several ways to do this here.\nWe'll start by defining some data—an ``x`` and ``y`` array drawn from a multivariate Gaussian distribution:\n\n正如前面我们可以在一维上使用数值对应的直线划分桶一样,我们也可以在二维上使用数据对应的点来划分桶。本节我们介绍几种实现的方法。首先定义数据集,从多元高斯分布中获得`x`和`y`数组:",
"_____no_output_____"
]
],
[
[
"mean = [0, 0]\ncov = [[1, 1], [1, 2]]\nx, y = np.random.multivariate_normal(mean, cov, 10000).T",
"_____no_output_____"
]
],
[
[
"### ``plt.hist2d``: Two-dimensional histogram\n\n### `plt.hist2d`:二维直方图\n\n> One straightforward way to plot a two-dimensional histogram is to use Matplotlib's ``plt.hist2d`` function:\n\n绘制二维直方图最直接的方法是使用Matplotlib的`plt.hist2d`函数:",
"_____no_output_____"
]
],
[
[
"plt.hist2d(x, y, bins=30, cmap='Blues')\ncb = plt.colorbar()\ncb.set_label('counts in bin')",
"_____no_output_____"
]
],
[
[
"> Just as with ``plt.hist``, ``plt.hist2d`` has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring.\nFurther, just as ``plt.hist`` has a counterpart in ``np.histogram``, ``plt.hist2d`` has a counterpart in ``np.histogram2d``, which can be used as follows:\n\n类似`plt.hist`,`plt.hist2d`有许多额外的参数来调整分桶计算和图表展示,可以通过文档了解更多信息。而且,`plt.hist`有`np.histogram`,`plt.hist2d`也有其对应的函数`np.histogram2d`。如下例:",
"_____no_output_____"
]
],
[
[
"counts, xedges, yedges = np.histogram2d(x, y, bins=30)",
"_____no_output_____"
]
],
[
[
"> For the generalization of this histogram binning in dimensions higher than two, see the ``np.histogramdd`` function.\n\n如果要获得更高维度的分桶结果,参见`np.histogramdd`函数文档。",
"_____no_output_____"
],
[
"### ``plt.hexbin``: Hexagonal binnings\n\n### ``plt.hexbin`:六角形分桶\n\n> The two-dimensional histogram creates a tesselation of squares across the axes.\nAnother natural shape for such a tesselation is the regular hexagon.\nFor this purpose, Matplotlib provides the ``plt.hexbin`` routine, which will represents a two-dimensional dataset binned within a grid of hexagons:\n\n刚才的二维分桶是沿着坐标轴将每个桶分为正方形。另一个很自然的分桶形状就是正六边形。对于这个需求,Matplotlib提供了`plt.hexbin`函数,它也是在二维平面上分桶展示,不过每个桶(即图表上的每个数据格)将会是六边形:",
"_____no_output_____"
]
],
[
[
"plt.hexbin(x, y, gridsize=30, cmap='Blues')\ncb = plt.colorbar(label='count in bin')",
"_____no_output_____"
]
],
[
[
"> ``plt.hexbin`` has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.).\n\n`plt.hexbin`有许多有趣的参数,包括能对每个点设置权重和将每个桶的输出数据结果改为任意的NumPy聚合结果(带权重的平均值,带权重的标准差等)。",
"_____no_output_____"
],
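[
"*Editor's note (not part of the original book text):* a minimal illustration of the per-point weights and per-bin aggregates mentioned above, assuming the ``x`` and ``y`` arrays defined earlier; the weights are random values used purely for demonstration:\n\n```python\nweights = np.random.rand(len(x))\nplt.hexbin(x, y, C=weights, reduce_C_function=np.mean, gridsize=30, cmap='Blues')\ncb = plt.colorbar(label='mean weight in bin')\n```",
"_____no_output_____"
],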
[
"### Kernel density estimation\n\n### 核密度估计\n\n> Another common method of evaluating densities in multiple dimensions is *kernel density estimation* (KDE).\nThis will be discussed more fully in [In-Depth: Kernel Density Estimation](05.13-Kernel-Density-Estimation.ipynb), but for now we'll simply mention that KDE can be thought of as a way to \"smear out\" the points in space and add up the result to obtain a smooth function.\nOne extremely quick and simple KDE implementation exists in the ``scipy.stats`` package.\nHere is a quick example of using the KDE on this data:\n\n另外一个常用来统计多维数据密度的工具是*核密度估计*(KDE)。这部分内容将在[深入:核密度估计](05.13-Kernel-Density-Estimation.ipynb)一节中详细介绍。目前我们只需要知道KDE被认为是一种可以用来填补数据的空隙并补充上平滑变化数据的方法就足够了。快速和简单的KDE算法已经在`scipy.stats`模块中有了成熟的实现。下面我们就一个简单的例子来说明如何使用KDE和绘制相应的二维直方图:",
"_____no_output_____"
]
],
[
[
"from scipy.stats import gaussian_kde\n\n# 产生和处理数据,初始化KDE\ndata = np.vstack([x, y])\nkde = gaussian_kde(data)\n\n# 在通用的网格中计算得到Z的值\nxgrid = np.linspace(-3.5, 3.5, 40)\nygrid = np.linspace(-6, 6, 40)\nXgrid, Ygrid = np.meshgrid(xgrid, ygrid)\nZ = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel()]))\n\n# 将图表绘制成一张图像\nplt.imshow(Z.reshape(Xgrid.shape),\n origin='lower', aspect='auto',\n extent=[-3.5, 3.5, -6, 6],\n cmap='Blues')\ncb = plt.colorbar()\ncb.set_label(\"density\")",
"_____no_output_____"
]
],
[
[
"> KDE has a smoothing length that effectively slides the knob between detail and smoothness (one example of the ubiquitous bias–variance trade-off).\nThe literature on choosing an appropriate smoothing length is vast: ``gaussian_kde`` uses a rule-of-thumb to attempt to find a nearly optimal smoothing length for the input data.\n\nKDE有着光滑的长度,可以在细节和光滑度中有效的进行调节(一个例子是方差偏差权衡)。这方面有大量的文献介绍:高斯核密度估计`gaussian_kde`使用了经验法则来寻找输入数据附近的优化光滑长度值。\n\n> Other KDE implementations are available within the SciPy ecosystem, each with its own strengths and weaknesses; see, for example, ``sklearn.neighbors.KernelDensity`` and ``statsmodels.nonparametric.kernel_density.KDEMultivariate``.\nFor visualizations based on KDE, using Matplotlib tends to be overly verbose.\nThe Seaborn library, discussed in [Visualization With Seaborn](04.14-Visualization-With-Seaborn.ipynb), provides a much more terse API for creating KDE-based visualizations.\n\n其他的KDE实现也可以在SciPy中找到,每一种都有它的优点和缺点;参见``sklearn.neighbors.KernelDensity``和``statsmodels.nonparametric.kernel_density.KDEMultivariate``。要绘制基于KDE进行可视化的图表,Matplotlib写出的代码会比较冗长。我们将在[使用Seaborn进行可视化](04.14-Visualization-With-Seaborn.ipynb)一节中介绍Seaborn库,它提供了更加简洁的方式用来绘制KDE图表。",
"_____no_output_____"
],
[
"<!--NAVIGATION-->\n< [密度和轮廓图](04.04-Density-and-Contour-Plots.ipynb) | [目录](Index.ipynb) | [自定义图表图例](04.06-Customizing-Legends.ipynb) >\n\n<a href=\"https://colab.research.google.com/github/wangyingsm/Python-Data-Science-Handbook/blob/master/notebooks/04.05-Histograms-and-Binnings.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\"></a>\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecbf0c71bde7d066d7e14996ed20a905d5937100 | 6,988 | ipynb | Jupyter Notebook | ipynb/example.ipynb | Equaraphanus/py2nb | dad98c5d1ad1ba3d9ea9b6f45d67e7bdb909a30c | [
"MIT"
] | null | null | null | ipynb/example.ipynb | Equaraphanus/py2nb | dad98c5d1ad1ba3d9ea9b6f45d67e7bdb909a30c | [
"MIT"
] | null | null | null | ipynb/example.ipynb | Equaraphanus/py2nb | dad98c5d1ad1ba3d9ea9b6f45d67e7bdb909a30c | [
"MIT"
] | null | null | null | 22.986842 | 149 | 0.520464 | [
[
[
"# Example\nThis is an example file to show the capabilities of the format.\n",
"_____no_output_____"
],
[
"## The basics\nThis section shows the basics of the format.\n",
"_____no_output_____"
],
[
"### Code cells\nYou can write regular python source in here:\n",
"_____no_output_____"
]
],
[
[
"print('Hello, World!')\n",
"Hello, World!\n"
]
],
[
[
"As you can see, the code gets converted into notebook code cells.\nThe output of the cell is also included if you run the converter with the `-e` flag.\n",
"_____no_output_____"
],
[
"And you have probably already noticed that you can insert Markdown cells.\n",
"_____no_output_____"
],
[
"### Markdown cells\nTo create a Markdown cell, you need to write a special comment starting with `#: `.\n\n #: ### This will be a Markdown header\n print('Hello from a code cell!')\n",
"_____no_output_____"
],
[
"### Regular comments\nYou can still write regular comments as well, just don't put the `:` right after the `#` sign:\n",
"_____no_output_____"
]
],
[
[
"# This will be a regular comment inside a code cell!\n",
"_____no_output_____"
]
],
[
[
"### Big code cells\nMultiple lines of code in a row are merged into one code cell, even including empty lines!\n",
"_____no_output_____"
]
],
[
[
"print('This is the first line')\nprint('Hello from the second line!')\n\nprint('This is the line #4 from the same code cell')\n",
"This is the first line\nHello from the second line!\nThis is the line #4 from the same code cell\n"
]
],
[
[
"### Several code cells in a row\nIf you need to break a big chunk of code up into multiple cells,\nyou can do so by putting a special \"separator\" comment `#=`:\n",
"_____no_output_____"
]
],
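[
[
"For instance, a source-file sketch of the separator (reconstructed from the three cells below) might look like this:\n\n    # This is a cell\n    #=\n    # And this is another one\n    #=\n    # And this is the third one\n",
"_____no_output_____"
]
],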
[
[
"# This is a cell\n",
"_____no_output_____"
],
[
"# And this is another one\n",
"_____no_output_____"
],
[
"# And this is the third one\n",
"_____no_output_____"
]
],
[
[
"## The Markdown\nYou can do many things with markdown, but this section does not aim to show every existing or even supported feature.\nInstead, this section describes some important things to note when writing markdown comments in this format.\n",
"_____no_output_____"
],
[
"### Headers\nThis one may seem obvious, but you should put a space after the sequence of `#`s, otherwise the header may be interpreted as something else.\n\n #: #### Right\n #: ####Wrong\n",
"_____no_output_____"
],
[
"### Tables\nTables require an empty line before the first row if there is some text above in the same cell,\notherwise they are not interpreted correctly and things get messy:\n\n #: Right:\n #:\n #: | A | B | C |\n #: |:--|:-:|--:|\n #: | 1 | 2 | 3 |\n\n #: Wrong:\n #: | A | B | C |\n #: |:--|:-:|--:|\n #: | 1 | 2 | 3 |\n",
"_____no_output_____"
],
[
"### Monospace blocks\nAs with tables, the multiline code blocks require nothing or an empty line right above,\notherwise they will be treated as inline ones.\n\n #: Right:\n #:\n #: Line one\n #: Line two\n #: Line three\n\n #: Wrong:\n #: Line one\n #: Line two\n #: Line three\n",
"_____no_output_____"
],
[
"### TODO: This section should be expanded.\n",
"_____no_output_____"
],
[
"## Evaluation\n### The basics\nCode cells are evaluated one-by-one in a sequential order.\nOutput of every executed code cell gets printed right after the cell itself.\n",
"_____no_output_____"
],
[
"### Exception handling\nIf an exception occurs during the execution of a cell, the traceback is printed to its `stderr` output, and the execution is aborted.\nNo subsequent cells get evaluated after that.\n",
"_____no_output_____"
]
],
[
[
"raise Exception('This is the end')\n",
"Traceback (most recent call last):\n File \"<cell>\", line 1, in <module>\nException: This is the end\n"
],
[
"print('But maybe not? Please...')\n",
"_____no_output_____"
]
],
[
[
"And currently this behaviour cannot be altered.\n",
"_____no_output_____"
],
[
"The `exit()` calls are treated almost the same way,\nbut instead of traceback the `[Interpreter finished execution with code {code}]` gets printed.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ecbf2db7fd4a081c146764a37eb32cd42678bcf6 | 1,904 | ipynb | Jupyter Notebook | tanmay/leetcode/remove-element.ipynb | Anjani100/competitive-coding | 229e4475487412c702e99a45d8ec4f46e6aea241 | [
"MIT"
] | null | null | null | tanmay/leetcode/remove-element.ipynb | Anjani100/competitive-coding | 229e4475487412c702e99a45d8ec4f46e6aea241 | [
"MIT"
] | null | null | null | tanmay/leetcode/remove-element.ipynb | Anjani100/competitive-coding | 229e4475487412c702e99a45d8ec4f46e6aea241 | [
"MIT"
] | 2 | 2020-10-07T13:48:02.000Z | 2022-03-31T16:10:36.000Z | 20.473118 | 117 | 0.495273 | [
[
[
"# https://leetcode.com/problems/remove-element/\n# Given nums = [0,1,2,2,3,0,4,2], val = 2,\n\n# Your function should return length = 5, with the first five elements of nums containing 0, 1, 3, 0, and 4.\n\n# Note that the order of those five elements can be arbitrary.\n\n# It doesn't matter what values are set beyond the returned length.",
"_____no_output_____"
],
[
"class Solution:\n def removeElement(self, nums, val: int) -> int:\n l = len(nums)\n c = nums.count(val)\n for i in range(c):\n nums.remove(val)\n return l-c",
"_____no_output_____"
],
[
"s = Solution()",
"_____no_output_____"
],
[
"s.removeElement([0,1,2,2,3,0,4,2],2)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ecbf35b4d3d67add9856de7da8e96f7a57ebfc15 | 189,943 | ipynb | Jupyter Notebook | notebooks/DenseFusion_Bingham_Distributions.ipynb | Mostafa-Mansour/se3_distributions | 3c1c2c754e9102a031ae6ff14b703cee0163c413 | [
"MIT"
] | 1 | 2021-09-19T18:35:42.000Z | 2021-09-19T18:35:42.000Z | notebooks/DenseFusion_Bingham_Distributions.ipynb | Mostafa-Mansour/se3_distributions | 3c1c2c754e9102a031ae6ff14b703cee0163c413 | [
"MIT"
] | null | null | null | notebooks/DenseFusion_Bingham_Distributions.ipynb | Mostafa-Mansour/se3_distributions | 3c1c2c754e9102a031ae6ff14b703cee0163c413 | [
"MIT"
] | 3 | 2021-11-07T12:51:20.000Z | 2022-01-07T10:37:07.000Z | 485.787724 | 110,292 | 0.943551 | [
[
[
"%matplotlib inline\n%env CUDA_DEVICE_ORDER=PCI_BUS_ID\n%env CUDA_VISIBLE_DEVICES=1\n\nimport os\nimport numpy as np\nimport torch\nfrom object_pose_utils.utils import to_np, to_var\n\nimport matplotlib.pyplot as plt\nimport pylab\npylab.rcParams['figure.figsize'] = 20, 12\nfrom mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import\n\nimport warnings\nwarnings.filterwarnings('ignore')",
"env: CUDA_DEVICE_ORDER=PCI_BUS_ID\nenv: CUDA_VISIBLE_DEVICES=1\n"
]
],
[
[
"## Set location and object set for YCB Dataset\n### YCB Object Indices\n\n| Object Indices |[]()|[]()|\n|---|---|---|\n| __1.__ 002_master_chef_can | __8.__ 009_gelatin_box | __15.__ 035_power_drill |\n| __2.__ 003_cracker_box | __9.__ 010_potted_meat_can | __16.__ 036_wood_block |\n| __3.__ 004_sugar_box | __10.__ 011_banana | __17.__ 037_scissors |\n| __4.__ 005_tomato_soup_can | __11.__ 019_pitcher_base | __18.__ 040_large_marker |\n| __5.__ 006_mustard_bottle | __12.__ 021_bleach_cleanser | __19.__ 051_large_clamp |\n| __6.__ 007_tuna_fish_can | __13.__ 024_bowl | __20.__ 052_extra_large_clamp |\n| __7.__ 008_pudding_box | __14.__ 025_mug | __21.__ 061_foam_brick |",
"_____no_output_____"
]
],
[
[
"### Set this to the root of your YCB Dataset\ndataset_root = '/media/DataDrive/ycb/YCB_Video_Dataset'\n\n### If you want individual objects, change this to\n### a list of the indices you want (see above).\nobject_list = list(range(1,22))\n\n### Set this to the dataset subset you want\nmode = 'test'",
"_____no_output_____"
]
],
[
[
"## Initialize YCB Dataset",
"_____no_output_____"
]
],
[
[
"from object_pose_utils.datasets.ycb_dataset import YcbDataset as YCBDataset\nfrom object_pose_utils.datasets.image_processing import ImageNormalizer\n\nfrom object_pose_utils.datasets.pose_dataset import OutputTypes as otypes\n\n\noutput_format = [otypes.OBJECT_LABEL,\n otypes.QUATERNION, \n otypes.TRANSLATION, \n otypes.IMAGE_CROPPED,\n otypes.DEPTH_POINTS_MASKED_AND_INDEXES,\n ]\n\ndataset = YCBDataset(dataset_root, mode=mode,\n object_list = object_list,\n output_data = output_format,\n resample_on_error = False,\n add_syn_background = False,\n add_syn_noise = False,\n use_posecnn_data = True,\n postprocessors = [ImageNormalizer()],\n image_size = [640, 480], num_points=1000)\n",
"_____no_output_____"
]
],
[
[
"## Initialize Dense Fusion Pose Estimator",
"_____no_output_____"
]
],
[
[
"from dense_fusion.network import PoseNet\n\ndf_weights = '/home/bokorn/src/DenseFusion/trained_checkpoints/ycb/pose_model_26_0.012863246640872631.pth'\ndf_estimator = PoseNet(num_points = 1000, num_obj = 21)\ndf_estimator.load_state_dict(torch.load(df_weights, map_location=torch.device('cpu')))\ndf_estimator.cuda();\ndf_estimator.eval();",
"_____no_output_____"
]
],
[
[
"## Set Iso Bingham Regression File Paths",
"_____no_output_____"
]
],
[
[
"### Set this to your iso bingham network checkpoint file path\niso_model_checkpoint = '../weights/bingham_iso_df.pth'",
"_____no_output_____"
]
],
[
[
"## Initialize the Iso Bingham Regression Network",
"_____no_output_____"
]
],
[
[
"from se3_distributions.models.bingham_networks import IsoBingham\nfrom se3_distributions.losses.bingham_loss import isoLikelihood\n\nfeature_size = 1408\n\niso_estimator = IsoBingham(feature_size, len(object_list))\niso_estimator.load_state_dict(torch.load(iso_model_checkpoint))\niso_estimator.eval()\niso_estimator.cuda()\n\ndef bingham_iso(img, points, choose, obj):\n q_est, t_est, feat = evaluateDenseFusion(df_estimator, img, points, choose, obj, use_global_feat=False)\n\n #feat = torch.Tensor(feat).unsqueeze(0).cuda()\n mean_est = torch.Tensor(q_est).unsqueeze(0).cuda()\n df_obj = torch.LongTensor(obj-1).unsqueeze(0).cuda()\n sig_est = iso_estimator(feat.unsqueeze(0).cuda(), df_obj)\n lik_est = isoLikelihood(mean_q=mean_est[0], \n sig=sig_est[0,0])\n\n return lik_est, q_est, t_est",
"_____no_output_____"
]
],
[
[
"## Full Bingham Regression File Paths",
"_____no_output_____"
]
],
[
[
"### Set this to your full bingham network checkpoint file path\nfull_model_checkpoint = '../weights/bingham_full_df.pth'",
"_____no_output_____"
]
],
[
[
"## Initialize the Full Bingham Regression Network",
"_____no_output_____"
]
],
[
[
"from se3_distributions.models.bingham_networks import DuelBingham\nfrom se3_distributions.losses.bingham_loss import duelLikelihood\n\nfull_estimator = DuelBingham(feature_size, len(object_list))\nfull_estimator.load_state_dict(torch.load(full_model_checkpoint))\nfull_estimator.eval()\nfull_estimator.cuda()\n\ndef bingham_full(img, points, choose, obj):\n q_est, t_est, feat = evaluateDenseFusion(df_estimator, img, points, choose, obj, use_global_feat=False)\n\n #feat = torch.Tensor(feat).unsqueeze(0).cuda()\n mean_est = torch.Tensor(q_est).unsqueeze(0).cuda()\n df_obj = torch.LongTensor(obj-1).unsqueeze(0).cuda()\n\n duel_est, z_est = full_estimator(feat, df_obj)\n\n lik_est = duelLikelihood(mean_q=mean_est[0], \n duel_q = duel_est[0,0],\n z=z_est[0])\n\n return lik_est, q_est, t_est",
"_____no_output_____"
]
],
[
[
"## Sample Dataset and Estimate Likelihood Distribution",
"_____no_output_____"
]
],
[
[
"%matplotlib inline \nfrom object_pose_utils.utils.display import torch2Img\nfrom se3_distributions.utils.evaluation_utils import evaluateDenseFusion\n\nindex = np.random.randint(len(dataset))\n\nobj, quat, trans, img, points, choose = dataset[index]\nlik_est, q_est, t_est = bingham_iso(img, points, choose, obj)\n\nlik_gt = lik_est(quat.unsqueeze(0).cuda()).item()\nlik_out = lik_est(torch.from_numpy(q_est).unsqueeze(0).cuda()).item()\n\nprint('Likelihood of Ground Truth: {:0.5f}'.format(lik_gt))\nprint('Likelihood of Estimate: {:0.5f}'.format(lik_out))\nplt.imshow(torch2Img(img, normalized=True))\nplt.axis('off')\nplt.show()",
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n"
]
],
[
[
"## Build Orientation Gridding for Visualization",
"_____no_output_____"
]
],
[
[
"from object_pose_utils.bbTrans.discretized4dSphere import S3Grid\ngrid = S3Grid(2)\ngrid.Simplify()\ngrid_vertices = grid.vertices",
"_____no_output_____"
]
],
[
[
"## Visualize Distribution",
"_____no_output_____"
]
],
[
[
"#%matplotlib notebook \nfrom object_pose_utils.utils.display import scatterSO3, quats2Point\n\nfig = plt.figure()\nax = fig.add_subplot(1,1,1, projection='3d')\n\ngrid_lik = lik_est(torch.from_numpy(grid_vertices).float().cuda()).flatten()\nscatterSO3(grid_vertices, to_np(grid_lik), ax=ax, alims = [0,.25], s=10)\ngt_pt = quats2Point([to_np(quat)])\nest_pt = quats2Point([q_est])\nax.scatter(gt_pt[:,0], gt_pt[:,1], gt_pt[:,2], c='g',s=100, marker='x')\nax.scatter(est_pt[:,0], est_pt[:,1], est_pt[:,2], c='b',s=100, marker='+')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbf36f03cf0b4e1722afc5249e561c5c0afaac1 | 7,060 | ipynb | Jupyter Notebook | test/python_basic.ipynb | Alro10/roadmap-data-scientist | 6188334b37ec8759099c34262ba5538bbf80eac5 | [
"Apache-2.0"
] | 11 | 2020-06-28T21:06:46.000Z | 2022-01-29T06:46:48.000Z | python_basic.ipynb | Kushal334/roadmap-data-scientist-master | 3b26c32c6bcf280151ee60f560e583336115064f | [
"Apache-2.0"
] | null | null | null | python_basic.ipynb | Kushal334/roadmap-data-scientist-master | 3b26c32c6bcf280151ee60f560e583336115064f | [
"Apache-2.0"
] | 2 | 2021-11-12T17:38:39.000Z | 2022-02-08T14:31:51.000Z | 18.627968 | 114 | 0.483569 | [
[
[
"# This is a basic python test for DS mentoring",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"# Structures",
"_____no_output_____"
],
[
"## List",
"_____no_output_____"
],
[
"### Create list\nThere are two very well-known ways of creating a list",
"_____no_output_____"
]
],
[
[
"a = []\nfor i in range(10):\n a.append(i)\nprint(a)",
"_____no_output_____"
]
],
[
[
"### List Comprehension",
"_____no_output_____"
]
],
[
[
"a = [i for i in range(10)]\nprint(a)",
"_____no_output_____"
],
[
"m_list = ['r', 'i', 'c', 'a', 'r', 'd', 'o']\nnew_list = []\nfor k,i in enumerate(m_list):\n new_list.append(str(k)+i)\nprint(new_list)",
"_____no_output_____"
]
],
[
[
"\n- Explain what enumerate method does in code above\n- Try to do the same but using list comprehension:",
"_____no_output_____"
]
],
[
[
"# implement list comprehension\nn_list = []",
"_____no_output_____"
]
],
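[
[
"One possible solution sketch for the exercise above (not the only valid answer); it assumes `m_list` from the earlier cell is still in scope:",
"_____no_output_____"
]
],
[
[
"# Same numbering as the enumerate loop above, written as a list comprehension\nn_list = [str(k) + i for k, i in enumerate(m_list)]\nprint(n_list) # ['0r', '1i', '2c', '3a', '4r', '5d', '6o']",
"_____no_output_____"
]
],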
[
[
"## Tuples",
"_____no_output_____"
]
],
[
[
"b = tuple(a)\nprint(b)",
"_____no_output_____"
]
],
[
[
"### What is the principal difference between lists and tuples? ",
"_____no_output_____"
],
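[
"A common answer: a `list` is mutable while a `tuple` is immutable, so `b[0] = 99` on the tuple above raises `TypeError: 'tuple' object does not support item assignment`; because tuples are immutable (and hashable when their elements are), they can also be used as dictionary keys.",
"_____no_output_____"
],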
[
"## Dict",
"_____no_output_____"
]
],
[
[
"car = {\n \"brand\": \"Ford\",\n \"model\": \"Mustang\",\n \"year\": 1964\n}\n\nx = car.get(\"model\")\n\nprint(x)",
"_____no_output_____"
]
],
[
[
"### If the key does not exit, how can get a default value for instance: False ?",
"_____no_output_____"
],
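[
"One common answer: `dict.get` accepts a default as its second argument, e.g. `car.get(\"owner\", False)` returns `False` because the key is missing; `dict.setdefault` or `collections.defaultdict` are alternatives when the default should also be stored.",
"_____no_output_____"
],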
[
"# Conditionals",
"_____no_output_____"
]
],
[
[
"x = 10\nif x % 2 == 0:\n print(\"par\")\nelse:\n print(\"impar\")",
"_____no_output_____"
]
],
[
[
"### How can we change the conditionals above to conditional expression? (ternário)",
"_____no_output_____"
],
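[
"A possible one-line rewrite: `print('par' if x % 2 == 0 else 'impar')`.",
"_____no_output_____"
],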
[
"# Functions",
"_____no_output_____"
],
[
"Create a basis functions for calculate the average from array",
"_____no_output_____"
]
],
[
[
"data = np.array([13, 63, 5, 378, 58, 40])",
"_____no_output_____"
]
],
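[
[
"One possible answer to the function exercise above; `np.mean(data)` gives the same result and can be used as a check.",
"_____no_output_____"
]
],
[
[
"# Basic average of a 1-D array (sketch; uses data from the cell above)\ndef my_avg(arr):\n    return arr.sum() / len(arr)\n\nprint(my_avg(data)) # should equal np.mean(data)",
"_____no_output_____"
]
],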
[
[
"# Classes",
"_____no_output_____"
],
[
"Define class MyAvg, create two variables: id (shared), d is init variable and avg method",
"_____no_output_____"
]
],
[
[
"class MyAvg:\n \n def __init__(self,data):\n pass # init variable asocciated two all instances \n \n def avg(self): # method for estimating the average\n pass",
"_____no_output_____"
]
],
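[
[
"One possible completion of the `MyAvg` skeleton above (not the only valid answer), following the comments in the exercise: `id` is shared across instances, `d` is set per instance.",
"_____no_output_____"
]
],
[
[
"# One possible completion of the skeleton (redefines MyAvg so the later cells can run)\nclass MyAvg:\n    id = 'my-avg' # class variable, shared by every instance\n    def __init__(self, data):\n        self.d = data # instance variable set at initialization\n    def avg(self): # average of the stored array\n        return self.d.sum() / len(self.d)\n\nprint(MyAvg(data).avg())",
"_____no_output_____"
]
],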
[
[
"a and b are instances of MyAvg class. To instantiate a class is to initialize it by calling the init method:",
"_____no_output_____"
]
],
[
[
"# Object b should have 2*data as init variable\na =\nb =",
"_____no_output_____"
]
],
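[
[
"One way the instantiation could look once `MyAvg` is completed (here using the sketch above):",
"_____no_output_____"
]
],
[
[
"# a stores data, b stores 2*data, as requested in the comment above\na = MyAvg(data)\nb = MyAvg(2*data)\nprint(a.avg(), b.avg())",
"_____no_output_____"
]
],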
[
[
"# Class Inheritance",
"_____no_output_____"
]
],
[
[
"\nclass MyAvgStd(MyAvg):\n def var(self): # method for estimating variance\n u = self.avg()\n return np.sqrt(np.sum((self.d - u)**2)/len(self.d))",
"_____no_output_____"
]
],
[
[
"Instantiate object c with class MyAvgStd, get and print avg and var output methods",
"_____no_output_____"
]
],
[
[
"c =",
"_____no_output_____"
]
],
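[
[
"A possible answer for the inheritance exercise, assuming `MyAvg` was completed as sketched earlier:",
"_____no_output_____"
]
],
[
[
"c = MyAvgStd(data)\nprint('avg:', c.avg())\nprint('var:', c.var()) # note: as written, var() actually returns the standard deviation (it takes a square root)",
"_____no_output_____"
]
],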
[
[
"# Callable Object",
"_____no_output_____"
],
[
"Example",
"_____no_output_____"
]
],
[
[
"class Scale():\n def __init__(self, w):\n self._w = w\n def __call__(self, x):\n return x * self._w",
"_____no_output_____"
],
[
"s = Scale(100.)\nprint(s(5))",
"_____no_output_____"
]
],
[
[
"Exercise: Define an inheritance class of Scale class that allows to modify the variable self._w",
"_____no_output_____"
]
],
[
[
"class TuneWeights(Scale):\n def wset():\n pass",
"_____no_output_____"
],
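[
"# One possible answer sketch: a setter that rebinds the inherited weight self._w\nclass TuneWeights(Scale):\n    def wset(self, w):\n        self._w = w",
"_____no_output_____"
],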
[
"ap = TuneWeights(100.)\nprint(ap(5))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecbf3ecf500cdb63cce1d49564473d3b7c509e9f | 28,524 | ipynb | Jupyter Notebook | Playing backward video/playing a video backwards.ipynb | akkinasrikar/Opencv | c095275fc2f146278b86f53ee9bf7c701f804f59 | [
"MIT"
] | 1 | 2020-09-08T03:21:58.000Z | 2020-09-08T03:21:58.000Z | Playing backward video/playing a video backwards.ipynb | akkinasrikar/Opencv | c095275fc2f146278b86f53ee9bf7c701f804f59 | [
"MIT"
] | null | null | null | Playing backward video/playing a video backwards.ipynb | akkinasrikar/Opencv | c095275fc2f146278b86f53ee9bf7c701f804f59 | [
"MIT"
] | null | null | null | 32.937644 | 65 | 0.561667 | [
[
[
"# Tutorial 5 (:-})",
"_____no_output_____"
],
[
"## playing a video backwards ",
"_____no_output_____"
]
],
[
[
"import cv2\nimport numpy as np\n\ncapture=cv2.VideoCapture('cars.avi')\n\nif capture.isOpened() is False:\n print(\"error while opening the video\")\n \nframe_index=capture.get(cv2.CAP_PROP_FRAME_COUNT)-1\n\nprint(f'starting frame is {frame_index}')\n\nwhile capture.isOpened() and frame_index>=0:\n \n capture.set(cv2.CAP_PROP_POS_FRAMES,frame_index)\n \n ret,frame=capture.read()\n \n if ret is True:\n \n cv2.imshow(\"inputframe\",frame)\n grayframe=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)\n cv2.imshow(\"gray frame\",grayframe)\n frame_index -=1\n print(f\"upcoming frame is {frame_index}\")\n \n else:\n break\n \n if cv2.waitKey(10) & 0xFF==ord('m'):\n break\n \ncapture.release()\ncv2.destroyAllWindows()",
"starting frame is 749.0\nupcoming frame is 748.0\nupcoming frame is 747.0\nupcoming frame is 746.0\nupcoming frame is 745.0\nupcoming frame is 744.0\nupcoming frame is 743.0\nupcoming frame is 742.0\nupcoming frame is 741.0\nupcoming frame is 740.0\nupcoming frame is 739.0\nupcoming frame is 738.0\nupcoming frame is 737.0\nupcoming frame is 736.0\nupcoming frame is 735.0\nupcoming frame is 734.0\nupcoming frame is 733.0\nupcoming frame is 732.0\nupcoming frame is 731.0\nupcoming frame is 730.0\nupcoming frame is 729.0\nupcoming frame is 728.0\nupcoming frame is 727.0\nupcoming frame is 726.0\nupcoming frame is 725.0\nupcoming frame is 724.0\nupcoming frame is 723.0\nupcoming frame is 722.0\nupcoming frame is 721.0\nupcoming frame is 720.0\nupcoming frame is 719.0\nupcoming frame is 718.0\nupcoming frame is 717.0\nupcoming frame is 716.0\nupcoming frame is 715.0\nupcoming frame is 714.0\nupcoming frame is 713.0\nupcoming frame is 712.0\nupcoming frame is 711.0\nupcoming frame is 710.0\nupcoming frame is 709.0\nupcoming frame is 708.0\nupcoming frame is 707.0\nupcoming frame is 706.0\nupcoming frame is 705.0\nupcoming frame is 704.0\nupcoming frame is 703.0\nupcoming frame is 702.0\nupcoming frame is 701.0\nupcoming frame is 700.0\nupcoming frame is 699.0\nupcoming frame is 698.0\nupcoming frame is 697.0\nupcoming frame is 696.0\nupcoming frame is 695.0\nupcoming frame is 694.0\nupcoming frame is 693.0\nupcoming frame is 692.0\nupcoming frame is 691.0\nupcoming frame is 690.0\nupcoming frame is 689.0\nupcoming frame is 688.0\nupcoming frame is 687.0\nupcoming frame is 686.0\nupcoming frame is 685.0\nupcoming frame is 684.0\nupcoming frame is 683.0\nupcoming frame is 682.0\nupcoming frame is 681.0\nupcoming frame is 680.0\nupcoming frame is 679.0\nupcoming frame is 678.0\nupcoming frame is 677.0\nupcoming frame is 676.0\nupcoming frame is 675.0\nupcoming frame is 674.0\nupcoming frame is 673.0\nupcoming frame is 672.0\nupcoming frame is 671.0\nupcoming frame is 670.0\nupcoming frame is 669.0\nupcoming frame is 668.0\nupcoming frame is 667.0\nupcoming frame is 666.0\nupcoming frame is 665.0\nupcoming frame is 664.0\nupcoming frame is 663.0\nupcoming frame is 662.0\nupcoming frame is 661.0\nupcoming frame is 660.0\nupcoming frame is 659.0\nupcoming frame is 658.0\nupcoming frame is 657.0\nupcoming frame is 656.0\nupcoming frame is 655.0\nupcoming frame is 654.0\nupcoming frame is 653.0\nupcoming frame is 652.0\nupcoming frame is 651.0\nupcoming frame is 650.0\nupcoming frame is 649.0\nupcoming frame is 648.0\nupcoming frame is 647.0\nupcoming frame is 646.0\nupcoming frame is 645.0\nupcoming frame is 644.0\nupcoming frame is 643.0\nupcoming frame is 642.0\nupcoming frame is 641.0\nupcoming frame is 640.0\nupcoming frame is 639.0\nupcoming frame is 638.0\nupcoming frame is 637.0\nupcoming frame is 636.0\nupcoming frame is 635.0\nupcoming frame is 634.0\nupcoming frame is 633.0\nupcoming frame is 632.0\nupcoming frame is 631.0\nupcoming frame is 630.0\nupcoming frame is 629.0\nupcoming frame is 628.0\nupcoming frame is 627.0\nupcoming frame is 626.0\nupcoming frame is 625.0\nupcoming frame is 624.0\nupcoming frame is 623.0\nupcoming frame is 622.0\nupcoming frame is 621.0\nupcoming frame is 620.0\nupcoming frame is 619.0\nupcoming frame is 618.0\nupcoming frame is 617.0\nupcoming frame is 616.0\nupcoming frame is 615.0\nupcoming frame is 614.0\nupcoming frame is 613.0\nupcoming frame is 612.0\nupcoming frame is 611.0\nupcoming frame is 610.0\nupcoming frame is 609.0\nupcoming frame is 
608.0\nupcoming frame is 607.0\nupcoming frame is 606.0\nupcoming frame is 605.0\nupcoming frame is 604.0\nupcoming frame is 603.0\nupcoming frame is 602.0\nupcoming frame is 601.0\nupcoming frame is 600.0\nupcoming frame is 599.0\nupcoming frame is 598.0\nupcoming frame is 597.0\nupcoming frame is 596.0\nupcoming frame is 595.0\nupcoming frame is 594.0\nupcoming frame is 593.0\nupcoming frame is 592.0\nupcoming frame is 591.0\nupcoming frame is 590.0\nupcoming frame is 589.0\nupcoming frame is 588.0\nupcoming frame is 587.0\nupcoming frame is 586.0\nupcoming frame is 585.0\nupcoming frame is 584.0\nupcoming frame is 583.0\nupcoming frame is 582.0\nupcoming frame is 581.0\nupcoming frame is 580.0\nupcoming frame is 579.0\nupcoming frame is 578.0\nupcoming frame is 577.0\nupcoming frame is 576.0\nupcoming frame is 575.0\nupcoming frame is 574.0\nupcoming frame is 573.0\nupcoming frame is 572.0\nupcoming frame is 571.0\nupcoming frame is 570.0\nupcoming frame is 569.0\nupcoming frame is 568.0\nupcoming frame is 567.0\nupcoming frame is 566.0\nupcoming frame is 565.0\nupcoming frame is 564.0\nupcoming frame is 563.0\nupcoming frame is 562.0\nupcoming frame is 561.0\nupcoming frame is 560.0\nupcoming frame is 559.0\nupcoming frame is 558.0\nupcoming frame is 557.0\nupcoming frame is 556.0\nupcoming frame is 555.0\nupcoming frame is 554.0\nupcoming frame is 553.0\nupcoming frame is 552.0\nupcoming frame is 551.0\nupcoming frame is 550.0\nupcoming frame is 549.0\nupcoming frame is 548.0\nupcoming frame is 547.0\nupcoming frame is 546.0\nupcoming frame is 545.0\nupcoming frame is 544.0\nupcoming frame is 543.0\nupcoming frame is 542.0\nupcoming frame is 541.0\nupcoming frame is 540.0\nupcoming frame is 539.0\nupcoming frame is 538.0\nupcoming frame is 537.0\nupcoming frame is 536.0\nupcoming frame is 535.0\nupcoming frame is 534.0\nupcoming frame is 533.0\nupcoming frame is 532.0\nupcoming frame is 531.0\nupcoming frame is 530.0\nupcoming frame is 529.0\nupcoming frame is 528.0\nupcoming frame is 527.0\nupcoming frame is 526.0\nupcoming frame is 525.0\nupcoming frame is 524.0\nupcoming frame is 523.0\nupcoming frame is 522.0\nupcoming frame is 521.0\nupcoming frame is 520.0\nupcoming frame is 519.0\nupcoming frame is 518.0\nupcoming frame is 517.0\nupcoming frame is 516.0\nupcoming frame is 515.0\nupcoming frame is 514.0\nupcoming frame is 513.0\nupcoming frame is 512.0\nupcoming frame is 511.0\nupcoming frame is 510.0\nupcoming frame is 509.0\nupcoming frame is 508.0\nupcoming frame is 507.0\nupcoming frame is 506.0\nupcoming frame is 505.0\nupcoming frame is 504.0\nupcoming frame is 503.0\nupcoming frame is 502.0\nupcoming frame is 501.0\nupcoming frame is 500.0\nupcoming frame is 499.0\nupcoming frame is 498.0\nupcoming frame is 497.0\nupcoming frame is 496.0\nupcoming frame is 495.0\nupcoming frame is 494.0\nupcoming frame is 493.0\nupcoming frame is 492.0\nupcoming frame is 491.0\nupcoming frame is 490.0\nupcoming frame is 489.0\nupcoming frame is 488.0\nupcoming frame is 487.0\nupcoming frame is 486.0\nupcoming frame is 485.0\nupcoming frame is 484.0\nupcoming frame is 483.0\nupcoming frame is 482.0\nupcoming frame is 481.0\nupcoming frame is 480.0\nupcoming frame is 479.0\nupcoming frame is 478.0\nupcoming frame is 477.0\nupcoming frame is 476.0\nupcoming frame is 475.0\nupcoming frame is 474.0\nupcoming frame is 473.0\nupcoming frame is 472.0\nupcoming frame is 471.0\nupcoming frame is 470.0\nupcoming frame is 469.0\nupcoming frame is 468.0\nupcoming frame is 467.0\nupcoming frame is 
466.0\nupcoming frame is 465.0\nupcoming frame is 464.0\nupcoming frame is 463.0\nupcoming frame is 462.0\nupcoming frame is 461.0\nupcoming frame is 460.0\nupcoming frame is 459.0\nupcoming frame is 458.0\nupcoming frame is 457.0\nupcoming frame is 456.0\nupcoming frame is 455.0\nupcoming frame is 454.0\nupcoming frame is 453.0\nupcoming frame is 452.0\nupcoming frame is 451.0\nupcoming frame is 450.0\nupcoming frame is 449.0\nupcoming frame is 448.0\nupcoming frame is 447.0\nupcoming frame is 446.0\nupcoming frame is 445.0\nupcoming frame is 444.0\nupcoming frame is 443.0\nupcoming frame is 442.0\nupcoming frame is 441.0\nupcoming frame is 440.0\nupcoming frame is 439.0\nupcoming frame is 438.0\nupcoming frame is 437.0\nupcoming frame is 436.0\nupcoming frame is 435.0\nupcoming frame is 434.0\nupcoming frame is 433.0\nupcoming frame is 432.0\nupcoming frame is 431.0\nupcoming frame is 430.0\nupcoming frame is 429.0\nupcoming frame is 428.0\nupcoming frame is 427.0\nupcoming frame is 426.0\nupcoming frame is 425.0\nupcoming frame is 424.0\nupcoming frame is 423.0\nupcoming frame is 422.0\nupcoming frame is 421.0\nupcoming frame is 420.0\nupcoming frame is 419.0\nupcoming frame is 418.0\nupcoming frame is 417.0\nupcoming frame is 416.0\nupcoming frame is 415.0\nupcoming frame is 414.0\nupcoming frame is 413.0\nupcoming frame is 412.0\nupcoming frame is 411.0\nupcoming frame is 410.0\nupcoming frame is 409.0\nupcoming frame is 408.0\nupcoming frame is 407.0\nupcoming frame is 406.0\nupcoming frame is 405.0\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
]
] |
ecbf41f8eb23c30ea5f190fbf9aae7c0822fdac8 | 402,908 | ipynb | Jupyter Notebook | assignment_4/home_assignment_4.ipynb | vuthanhdatt/atom-assignments | 7d34c2ffa50024028861ba6a35e52242400dae80 | [
"MIT"
] | null | null | null | assignment_4/home_assignment_4.ipynb | vuthanhdatt/atom-assignments | 7d34c2ffa50024028861ba6a35e52242400dae80 | [
"MIT"
] | null | null | null | assignment_4/home_assignment_4.ipynb | vuthanhdatt/atom-assignments | 7d34c2ffa50024028861ba6a35e52242400dae80 | [
"MIT"
] | null | null | null | 201.857715 | 122,742 | 0.843585 | [
[
[
"# HOME ASSIGNMENT #4: DATABASE DESIGN & SQL\n\n**Mục đích của bài Assignment**\n> * Hiểu các bước design Database cho Case study cụ thể: DataCracy\n* Bài tập SQL (Dựa trên Cheatsheet)\n* `[Optional]` Bài tập Python Pandas (Dựa trên Cheatsheet)\n\n**Các kiến thức áp dụng**\n* Slack API, JSON to DataFrame\n* Phân tích Data (Assignment#1, Lab#1)\n* Database (DB) Design\n* SQL\n* Python Pandas\n\n**Lời Khuyên**\n* Đây là bài tập dài, nhưng thiên về hiểu case studies và thiết kế => Nên bạn **hãy bắt đầu sớm**\n* **Chắc chắn là bạn đọc thực kỹ các bước hướng dẫn và tài liệu, tận dụng Slack để trao đổi**\n* **Đừng sa đà vào Code của STEP 0** (bước này tương tự Assignmetn 3 về Slack API, nhưng biến đổi thêm cho phù hợp)\n* **Đừng sa đà vào các tiểu tiết** => Rất quan trọng là bạn cố gắng đi đến hết `TODO#6` để hiểu bức tranh toàn cảnh và kết nối được kiến thức. Sau khi đi hết qua, các bạn có thể trở lại và nhìn cận vào các tiểu tiết\n* Làm nhiều nhất có thể (Đừng quá lo lắng nếu bạn không thể hoàn thành hết)",
"_____no_output_____"
],
[
"# CONCEPT: DB Design & SQL\n## **Database (DB) - Cơ sở dữ liệu**\n> Là cấu trúc các nhóm data, lưu trữ trên bộ nhớ hoặc trên cloud, cho phép truy nhập để trích xuất dữ liệu bằng nhiều cách thức khác nhau\n\n* Cấu trúc một DB phải giúp cho việc lưu trữ an toàn, tiết kiệm, linh động và bền vững. Đồng thời, việc trích xuất dữ liệu dễ dàng, nhanh chóng, và hiệu quả.\n* Dạng DB đề cập trong Atom chủ yếu là **Relational Database** => Đây là dạng cấu trúc: \n * Dữ liệu được tổ chức và lưu trữ dưới dáng bảng (tables)\n * Đặc trưng có các keys (Primary Keys - PK, Foreign Keys - FK) để biểu diễn/quy định mối quan hệ giữa các bảng. Chính nhờ các keys này, ta có thể kết nối các bảng khác nhau.\n * Việc tách dữ liệu thành các bảng giúp việc tổ chức dữ liệu được linh động, lưu trữ hiệu quả hơn. Nhưng vẫn đảm bảo việc dễ dàng kết nối các bảng bằng keys\n* Ngoài ra, còn có dạng **Non-relational Database** các bạn có thể tìm hiểu thêm ([Databases 101](https://towardsdatascience.com/databases-101-introduction-to-databases-for-data-scientists-ee18c9f0785d))\n\n## **DB Design**\n> Là quá trình thiết kế và tổ chức dữ liệu theo mô hình Database. Thiết kế quy định những dữ liệu gì được lưu trữ, tổ chức các bảng được lưu trữ như thế nào, và các bảng data liên quan đến nhau ra sao. \n\n* **Thiết kế DB cần thoả mãn**: \n 1. Hạn chế trùng lặp trong lưu trữ thông tin\n 2. Keys chỉ mối quan hệ của các bảng (PK, FK) hợp lý\n 3. Kiểm tra tính đúng đắn (liên hệ keys, chất lượng data)\n 4. Hỗ trợ hiệu quả nhất cho quá trình xử lý, phân tích và báo cáo\n\n## **SQL**\n* Là ngôn ngữ dùng để trích xuất, xử lý và phân tích data trên Rational Database\n",
"_____no_output_____"
],
[
"# CASE STUDY: DATACracy \n* **Context**: Không có doanh nghiệp hay tổ chức nào là quá nhỏ để dùng data, và bất kỳ tổ chức nào có vận hành (operation) thì nhất định sẽ sản sinh ra data, và có thể dùng data để theo dõi và cải thiện vận hành đó (Đọc: [DataCracy - Data Strategy](https://app.gitbook.com/@anhdang/s/datacracy/atom/1-data-strategy-and-metrics))\n* **Hoạt động/Vận hành**: \n * Datacracy đang vận hành lớp học mở ATOM (với 40 learners, 6 mentors, và Ban Tổ Chức)\n * Lớp học diễn ra mỗi tuần, với bài tập được gửi vào sáng Chủ Nhật và hoàn thành trước buổi học sáng T7\n * Các bạn Learners upload link (github) vào các Slack channels theo tuần (ví dụ: `#atom-assignment-1`)\n * Các hoạt động trao đổi, hướng dẫn giữa learners, mentors và BTC chủ yếu diễn ra trong tuần qua Slack\n\n\n> Bản thân Datacracy cũng có vận hành và các hoạt động. Vậy hãy dùng chính mình làm case study cho việc, ta có thể tạo ra data solution siêu nhỏ, siêu rẻ cho một tổ chức quy mô siêu nhỏ như DataCracy không?\n\n\n\n",
"_____no_output_____"
],
[
"Trong **Assignment** này, chúng ta sẽ đi qua **6 bước chính** của quá trình design DB (cho case study cụ thể của DataCracy)\n",
"_____no_output_____"
]
],
[
[
"#!pip install duckdb",
"_____no_output_____"
],
[
"import requests # -> Để call API\nimport json # -> Xử lý file JSON\nimport pandas as pd # -> Thư viện xử lý dữ liệu dạng bảng\nimport numpy as np\nimport re # -> Thư viện xử lý text: regular expressions\nfrom datetime import datetime as dt # -> Thư viện xử lý dữ liệu thời gian\nimport duckdb # -> Thư viện \"giả lập\" xử lý dữ liệu bằng SQL ",
"_____no_output_____"
]
],
[
[
"## STEP 0: XEM LẠI DATA ĐÃ CÓ (SLACK API)\n\nỞ bước đầu tiên, ta xem lại tất cả các data của DATACracy Atom mà ra đã biết: \n* **Data từ Slack API**:\n * Danh sách thành viên\n * Danh sách các channels\n * Lịch sử tin nhắn trên các channels\n* **Data do dự án tự collect - File CSV (trích xuất từ Google Spreadsheet)**:\n * Danh sách thành viên được phân theo vị trí (mentors, learners, BTC)\n\n\n===> Các dữ liệu này lần lượt được lấy bằng code bên dưới.\n",
"_____no_output_____"
]
],
[
[
"## Load Token file \n## WARNING!! --- Put it in gitignore and DO NOT print out to notebook\nwith open('..\\\\assignment_3\\env_variable.json', 'r') as j:\n json_data = json.load(j)",
"_____no_output_____"
]
],
[
[
"### 0.1. Pull List of Members",
"_____no_output_____"
]
],
[
[
"# 1. LIST OF SLACK MEMBERS \n\n## Pull list of member as JSON\n## Gọi API từ Endpoints (Input - Token được đưa vào Headers)\n## Challenge: Thử gọi API này bằng Postman\nendpoint = \"https://slack.com/api/users.list\"\nheaders = {\"Authorization\": \"Bearer {}\".format(json_data['SLACK_BEARER_TOKEN'])}\nresponse_json = requests.post(endpoint, headers=headers).json() \nuser_dat = response_json['members']\n\n## Convert to CSV\nuser_dict = {'user_id':[],'name':[],'display_name':[],'real_name':[],'title':[],'is_bot':[]}\nfor i in range(len(user_dat)):\n user_dict['user_id'].append(user_dat[i]['id'])\n user_dict['name'].append(user_dat[i]['name'])\n user_dict['display_name'].append(user_dat[i]['profile']['display_name'])\n user_dict['real_name'].append(user_dat[i]['profile']['real_name_normalized'])\n user_dict['title'].append(user_dat[i]['profile']['title'])\n user_dict['is_bot'].append(user_dat[i]['is_bot'])\n\nuser_df = pd.DataFrame(user_dict) \nuser_df = user_df.replace('', np.nan) # -> replace khoảng trắng bằng giá trị NULL (nan)\nuser_df.head()",
"_____no_output_____"
],
[
"user_df[user_df['display_name'] == 'Đạt Vũ Thành']",
"_____no_output_____"
],
[
"user_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 76 entries, 0 to 75\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 user_id 76 non-null object\n 1 name 76 non-null object\n 2 display_name 51 non-null object\n 3 real_name 76 non-null object\n 4 title 8 non-null object\n 5 is_bot 76 non-null bool \ndtypes: bool(1), object(5)\nmemory usage: 3.2+ KB\n"
]
],
[
[
"### 0.2. List of Channels",
"_____no_output_____"
]
],
[
[
"# 2. LIST OF SLACK CHANNELS\n\nendpoint2 = \"https://slack.com/api/conversations.list\"\ndata = {'types': 'public_channel,private_channel'} # -> CHECK: API Docs https://api.slack.com/methods/conversations.list/test\nresponse_json = requests.post(endpoint2, headers=headers, data=data).json() \nchannel_dat = response_json['channels']\n\nchannel_dict = {'channel_id':[], 'channel_name':[], 'is_channel':[],'creator':[],'created_at':[],'topics':[],'purpose':[],'num_members':[]}\nfor i in range(len(channel_dat)):\n channel_dict['channel_id'].append(channel_dat[i]['id'])\n channel_dict['channel_name'].append(channel_dat[i]['name'])\n channel_dict['is_channel'].append(channel_dat[i]['is_channel'])\n channel_dict['creator'].append(channel_dat[i]['creator'])\n channel_dict['created_at'].append(dt.fromtimestamp(float(channel_dat[i]['created'])))\n channel_dict['topics'].append(channel_dat[i]['topic']['value'])\n channel_dict['purpose'].append(channel_dat[i]['purpose']['value'])\n channel_dict['num_members'].append(channel_dat[i]['num_members'])\n\nchannel_df = pd.DataFrame(channel_dict) \nchannel_df = channel_df.replace('', np.nan) # -> replace khoảng trắng bằng giá trị NULL (nan)\nchannel_df.head()",
"_____no_output_____"
],
[
"channel_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 21 entries, 0 to 20\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 channel_id 21 non-null object \n 1 channel_name 21 non-null object \n 2 is_channel 21 non-null bool \n 3 creator 21 non-null object \n 4 created_at 21 non-null datetime64[ns]\n 5 topics 6 non-null object \n 6 purpose 13 non-null object \n 7 num_members 21 non-null int64 \ndtypes: bool(1), datetime64[ns](1), int64(1), object(5)\nmemory usage: 1.3+ KB\n"
]
],
[
[
"### 0.3. Message Data",
"_____no_output_____"
]
],
[
[
"endpoint3 = \"https://slack.com/api/conversations.history\"",
"_____no_output_____"
],
[
"msg_dict = {'channel_id':[],'msg_id':[], 'msg_ts':[], 'user_id':[], 'latest_reply':[],'reply_user_count':[],'reply_users':[],'github_link':[]}\nfor channel_id, channel_name in zip(channel_df['channel_id'], channel_df['channel_name']):\n print('Channel ID: {} - Channel Name: {}'.format(channel_id, channel_name))\n try:\n data = {\"channel\": channel_id} \n response_json = requests.post(endpoint3, data=data, headers=headers).json()\n msg_ls = response_json['messages']\n for i in range(len(msg_ls)):\n if 'client_msg_id' in msg_ls[i].keys():\n msg_dict['channel_id'].append(channel_id)\n msg_dict['msg_id'].append(msg_ls[i]['client_msg_id'])\n msg_dict['msg_ts'].append(dt.fromtimestamp(float(msg_ls[i]['ts'])))\n msg_dict['latest_reply'].append(dt.fromtimestamp(float(msg_ls[i]['latest_reply'] if 'latest_reply' in msg_ls[i].keys() else 0))) ## -> No reply: 1970-01-01\n msg_dict['user_id'].append(msg_ls[i]['user'])\n msg_dict['reply_user_count'].append(msg_ls[i]['reply_users_count'] if 'reply_users_count' in msg_ls[i].keys() else 0)\n msg_dict['reply_users'].append(msg_ls[i]['reply_users'] if 'reply_users' in msg_ls[i].keys() else 0) \n ## -> Censor message contains tokens\n text = msg_ls[i]['text']\n github_link = re.findall('(?:https?://)?(?:www[.])?github[.]com/[\\w-]+/?', text)\n msg_dict['github_link'].append(github_link[0] if len(github_link) > 0 else np.nan)\n except:\n print('====> '+ str(response_json))",
"Channel ID: C01B4PVGLVB - Channel Name: general\n====> {'ok': False, 'error': 'not_in_channel'}\nChannel ID: C01BYH7JHB5 - Channel Name: contents\n====> {'ok': False, 'error': 'not_in_channel'}\nChannel ID: C01CAMNCJJV - Channel Name: branding-design\n====> {'ok': False, 'error': 'not_in_channel'}\nChannel ID: C01U6P7LZ8F - Channel Name: atom-assignment1\nChannel ID: C01UL6K1C7L - Channel Name: atom-week1\nChannel ID: C01ULCHGN75 - Channel Name: atom-general\n====> {'ok': False, 'error': 'not_in_channel'}\nChannel ID: C020VMT58JK - Channel Name: topics-data-analytics\n====> {'ok': False, 'error': 'not_in_channel'}\nChannel ID: C0213MNH9L6 - Channel Name: topics-python\n====> {'ok': False, 'error': 'not_in_channel'}\nChannel ID: C0213N56M2A - Channel Name: topics-materials\n====> {'ok': False, 'error': 'not_in_channel'}\nChannel ID: C021FSDN7LJ - Channel Name: atom-assignment2\nChannel ID: C021KLB0DSB - Channel Name: discuss-group3\nChannel ID: C021KLB90GP - Channel Name: discuss-group4\nChannel ID: C02204B2CD6 - Channel Name: atom-week2\nChannel ID: C0220KU9399 - Channel Name: discuss-group1\nChannel ID: C0226D3LEQ4 - Channel Name: atom-week3\nChannel ID: C0227A51SAY - Channel Name: atom-assignment3\nChannel ID: C022Q7TGRLG - Channel Name: discuss-group2\nChannel ID: C022RRWQ6US - Channel Name: atom-assignment4\nChannel ID: C022Y1FUETE - Channel Name: atom-week4\nChannel ID: C023UJGMDND - Channel Name: atom-assignment5\nChannel ID: C0245PZUFSL - Channel Name: atom-week5\n"
],
[
"msg_df = pd.DataFrame(msg_dict)\nmsg_df = msg_df.replace('', np.nan) # -> replace khoảng trắng bằng giá trị NULL (nan)\nmsg_df.tail()",
"_____no_output_____"
],
[
"msg_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 275 entries, 0 to 274\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 channel_id 275 non-null object \n 1 msg_id 275 non-null object \n 2 msg_ts 275 non-null datetime64[ns]\n 3 user_id 275 non-null object \n 4 latest_reply 275 non-null datetime64[ns]\n 5 reply_user_count 275 non-null int64 \n 6 reply_users 275 non-null object \n 7 github_link 132 non-null object \ndtypes: datetime64[ns](2), int64(1), object(5)\nmemory usage: 17.3+ KB\n"
]
],
[
[
"### 0.4. DataCracy Info\n* **Data do dự án tự collect - File CSV (trích xuất từ Google Spreadsheet)**: Danh sách thành viên được phân theo vị trí (mentors, learners, BTC)\n* Trong cùng folder Github `assignment_4`",
"_____no_output_____"
]
],
[
[
"dtc_groups = pd.read_csv('datacracy_groups.csv')\ndtc_groups.head()",
"_____no_output_____"
]
],
[
[
"## STEP 1: NHU CẦU & MỤC ĐÍCH\n> Đặt mình vào vị trí người chủ, bạn quan tâm đến điều gì?\n\n* Quan trọng nhất của mọi Data Solution, là bắt đầu từ nhu cầu, mục đích và câu hỏi lớn của Clients (người chủ). \n* Chính từ những câu hỏi lớn này, ta có thể khoanh vùng thông tin nào quan trọng, ta muốn đạt được điều gì?",
"_____no_output_____"
],
[
"### TODO#1: Requirements\nTự trả lời các câu hỏi sau, từ góc nhìn của bạn (đặt mình vào vị trí bạn là co-founder của dự án DataCracy): \n1. Mục đích của lớp học Atom là gì?\n2. BTC sẽ quan tâm đến những chủ đề/quy trình gì để đạt được Mục Đích trong (1)?\n3. Làm sao để đo lường các điểm trong (2)? => Metrics?\n4. Dựa vào các data đã có như liệt kê trong `STEP 0`:\n * Chỉ dùng những data sẵn có, ta có thể đo lường và thiết kế những metrics nào bạn đã liệt kê trong (3)?\n * Tham khảo Slack API và hình dung về các thông tin DataCracy có khả năng thu thập, bạn sẽ đề xuất DataCracy thu thập thêm những thông tin gì?\n",
"_____no_output_____"
]
],
[
[
"## Điền vào bên dưới câu trả lời của bạn\n''' \nREQUIREMENTS\n---\n1. \n2. \n3. \n4. \n'''",
"_____no_output_____"
]
],
[
[
"## STEP 2: TỔ CHỨC THÔNG TIN\n> Thu thập và hệ thống lại các thông tin",
"_____no_output_____"
]
],
[
[
"## Hints: info() để check các thông tin (Column), số dòng (Count), và Data Type của mỗi cột\nuser_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 76 entries, 0 to 75\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 user_id 76 non-null object\n 1 name 76 non-null object\n 2 display_name 51 non-null object\n 3 real_name 76 non-null object\n 4 title 8 non-null object\n 5 is_bot 76 non-null bool \ndtypes: bool(1), object(5)\nmemory usage: 3.2+ KB\n"
]
],
[
[
"### TODO#2: List Down\nTrả lời các câu hỏi sau: \n1. Có những thông tin gì trong các bảng data ở `STEP 0`? Ý nghĩa của mỗi trường (Column). `Hints: Đọc thêm Slack API để hiểu ý nghĩa data trả về`\n2. Data Type của mỗi trường\n3. Có NULL không? (Non-Null Count <> entries)\n\n* Ta sẽ dùng công cụ: [QuickDBD](https://www.quickdatabasediagrams.com/) cho Assignment này => Tham khảo Sample lúc mở tool\n* Copy điền đoạn text vào tools",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"## Điền vào bên dưới câu trả lời của bạn\n## Bên dưới là ví dụ của bảng user_df (Điền tiếp các bản còn lại)\n''' \n# Dtype obj: object -> string, bool, int, float\n# NULL - Nếu có null count < số entries\n\n# Ví dụ bảng user_df \nuser_df\n-\nuser_id string\nname string\ndisplay_name NULL string\nreal_name string\ntitle NULL string\nis_bot bool\n\n# Điền tiếp các bản còn lại\nchannel_df\n-\nchannel_id string\nchannel_name string\nis_channel bool\ncreator string\ncreate_at datetime\ntopics NULL string\npurpose NULL string\nnum_members int\n\nmsg_df\n-\nchannel_id string \nmsg_id string \nmsg_ts datetime\nuser_id string \nlatest_reply datetime\nreply_user_count int\nreply_users string\ngithub_link NULL string\n\n\n'''",
"_____no_output_____"
]
],
[
[
"## STEP 3: NGUYÊN TẮC CẦN ÁP DỤNG",
"_____no_output_____"
],
[
"### TODO#3: Rules & Logics\nDựa vào các kiến nghị `TODO#1` và Quan sát ở `TODO#2`, bạn có những đề xuất gì về rules trong vận hành để cải thiện thông tin và quan sát? \n\n#### Về mặt vận hành\n> Rules gì cần áp dụng cho cách thức nhập data?\n\n**Một số gợi ý**\n1. Rules để cải thiện tỷ lệ NULL trong các bạn? (Ví dụ: Bắt buộc nhập các thông tin trên Slack? Có cần thiết không?)\n2. Làm sao để xác định message nào là bài submit assigment? Message nào là review? Message vào là các nội dung không liên quan? (Ví dụ: Users để tag #submit, #review?)\n3. ...\n\n#### Về mặt data\n> Logics gì cần áp dụng để kiểm tra sự hợp lý của Data?\n\n**Một số gợi ý**\n1. Hai users trùng tên?\n2. Ngày latest reply > ngày post? \n3. Ngày post trong năm 2021 (sau khi dự án DataCracy thành lập)\n4. Mentor Group nào sẽ chỉ post trong channel discussion của group đó?\n5. ...\n\nBạn có thể đưa các đề xuất để đưa vào vận hành nhằm cải thiện data và giúp bạn đo lường theo dõi các metrics đã được liệt kể trong `TODO#1`",
"_____no_output_____"
]
],
[
[
"## Điền vào bên dưới câu trả lời của bạn\n''' \nRULES\n---\n1. \n2. \n3. \n4. \n'''",
"_____no_output_____"
],
[
"## Điền vào bên dưới câu trả lời của bạn\n''' \nLOGICS\n---\n1. \n2. \n3. \n4. \n'''",
"_____no_output_____"
]
],
[
[
"## STEP 4: TỔ CHỨC BẢNG - PRIMARY KEYS",
"_____no_output_____"
]
],
[
[
"## Hints: nunique() để check số giá trị unique của từng trường\nuser_df.nunique()",
"_____no_output_____"
]
],
[
[
"### TODO#4: Tables & PK\n1. Nhìn lại diagram của `TODO#2` trong Quick DBD Diagram: Có bảng nào bạn nghĩ nên gộp lại, hay tách ra không? Vì sao?\n2. Tìm Primary Key (PK): Unique cho từng dòng và Không NULL\n> Primary Key (PK) là trường giá trị/ID unique cho mỗi dòng của bảng. Hay nói cách khác, không có hai dòng trùng lặp (duplicate ID). \n* Fun fact để nhớ: Thử tưởng tượng nếu 2 người không quen biết, có cùng Số TK Ngân hàng :((((",
"_____no_output_____"
]
],
[
[
"## Copy lại phần text của TODO#2\n## Đặt PK bên cạnh col bạn chọn làm PK\n''' \n# Dtype obj: object -> string, bool, int, float\n# NULL - Nếu có null count < số entries\n\n# Ví dụ bảng user_df \nuser_df\n-\nuser_id string PK\nname string\ndisplay_name NULL string\nreal_name string\ntitle NULL string\nis_bot bool\n\n# Điền tiếp các bản còn lại\nchannel_df\n-\nchannel_id string PK\nchannel_name string\nis_channel bool\ncreator string F\ncreate_at datetime\ntopics NULL string\npurpose NULL string\nnum_members int\n\n\n\nmsg_df\n-\nchannel_id string \nmsg_id string PK\nmsg_ts datetime\nuser_id string \nlatest_reply datetime\nreply_user_count int\nreply_users\ngithub_link NULL string\n\n\ndtc_groups\n-\nname PK \nDataCracy_Role\n'''",
"_____no_output_____"
]
],
[
[
"## STEP 5: MỐI QUAN HỆ GIỮA CÁC BẢNG",
"_____no_output_____"
],
[
"### TODO#5: FK & Mapping\nNhư đã giới thiệu ở phần Concept, keys quan trọng trong Relational DB vì nó thể hiện mối quan hệ giữa các bạn, thông qua key, cho phép ta nối các bảng với nhau. \nBây giờ ta sẽ đi tìm FK (Foreign Key):\n> Foreign Key: Là các keys nằm trong một bảng để liên kết với PK trong bảng khác\n\n1. Đâu là các ID trong bảng, nhưng không phải là PK (do thoả điều kiện unique)?\n2. Các ID này dẫn đến PK nào trong các bảng còn lại?\n3. Trong các key được nối với nhau, xác định kiểu quan hệ:\n * n:1 - PK ở bảng gốc lặp lại nhiều lần (nhiều dòng) ở bản chứa FK\n * 1:1 - PK ở bảng gốc chỉ nối với 1 dòng\n\n**Ví dụ:** `channel_id string FK >- channel_df.channel_id`",
"_____no_output_____"
]
],
[
[
"## Copy lại phần text của TODO#2\n## Đặt FK bên cạnh col bạn chọn làm FK\n## Và thể hiện mối quan hệ bằng: id >- bảng_gốc.id (FK là PK trong bảng gốc)\n''' \n# Ví dụ bảng user_df \nuser_df\n-\nuser_id string \nname string\ndisplay_name NULL string\nreal_name string\ntitle NULL string\nis_bot bool\n\nmsg_df\n-\nchannel_id string FK >- channel_df.channel_id #Rel: n:1\n\n# Điền tiếp các bản còn lại\nchannel_df\n-\nchannel_id string PK\nchannel_name string\nis_channel bool\ncreator string FK >- user_df.user_id #Rel: n:1\ncreate_at datetime\ntopics NULL string\npurpose NULL string\nnum_members int\n\n\n\nmsg_df\n-\nchannel_id string FK >- channel_df.channel_id #Rel: n:1\nmsg_id string PK\nmsg_ts datetime\nuser_id string FK >- user_df.user_id #Rel: n:1\nlatest_reply datetime\nreply_user_count int\nreply_users string\ngithub_link NULL string\n\ndtc_groups\n-\nname PK FK >- user_df.name\nDataCracy_Role\n'''",
"_____no_output_____"
]
],
[
[
"## STEP 6: DIAGRAM & TEST\nSau khi làm hết `TODO#5` bạn sẽ được 1 DB Diagram như trong sample mẫu bên dưới (Sample này không phải của DataCracy)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"### Test Database Design by SQL\n* Để giúp các bạn làm quen với việc thao tác trên SQL, ta sử dụng thư việc duckdb, cho phép ta thao tác với Data trên Python bằng SQL: https://duckdb.org/2021/05/14/sql-on-pandas.html\n* Để hiểu các code SQL bên dưới, tham khảo [SQL Basic Cheatsheet](https://learnsql.com/blog/sql-basics-cheat-sheet/), xem các phần:\n * Querying Single Table\n * Aliases \n * Filtering\n * Querying Multiple Tables: Các dạng Join\n * Aggregation Functions\n\n* Dựa vào DB Diagram bạn đã vẽ, sẽ giúp bạn dễ dàng theo dõi hơn trong việc join và xử lý data",
"_____no_output_____"
]
],
[
[
"\n## Đoạn code dưới join 2 bảng user_df vằ dtc_groups bằng key name => Tạo thành bảng members_df\nSQL_dim_members = '''\n-- Để comment trong SQL dùng -- tương đường với ## trong Python\nCREATE TABLE members_df AS\nSELECT \n t1.*,\n t2.DataCracy_role\nFROM user_df AS t1\nJOIN dtc_groups AS t2\nON t1.name = t2.name\n'''",
"_____no_output_____"
],
[
"con = duckdb.connect(database=':memory:', read_only=False) # -> Tạo DB connection\n# create a table\ncon.execute(SQL_dim_members) # -> Chạy đoạn lệch SQL\ncon.execute(\"SELECT * FROM members_df LIMIT 5\").fetch_df() # -> In 10 dòng đầu tiên của bảng members_df ra Dataframe",
"_____no_output_____"
],
[
"con.execute(\"SELECT * FROM members_df LIMIT 5\").fetch_df()",
"_____no_output_____"
],
[
"con.execute(\"SELECT * FROM channel_df LIMIT 5\").fetch_df() # -> In 5 dòng đầu tiên của bảng channel_df ra Dataframe",
"_____no_output_____"
],
[
"con.execute(\"SELECT * FROM msg_df LIMIT 5\").fetch_df() # -> In 5 dòng đầu tiên của bảng msg_df ra Dataframe",
"_____no_output_____"
],
[
"## Đoạn code dưới: Theo từng channel, count số message (phải join với bảng channel_df để lấy tên channel)\nSQL = '''\nSELECT \n t2.channel_name,\n COUNT(DISTINCT t1.msg_id) AS msg_cnt\nFROM msg_df AS t1\nJOIN channel_df AS t2\nON t1.channel_id = t2.channel_id\nGROUP BY t2.channel_name \nORDER BY COUNT(DISTINCT t1.msg_id) DESC\n'''\ncon.execute(SQL).fetch_df()",
"_____no_output_____"
],
[
"## Đoạn code dưới: Lấy top 3 mentors post nhiều message nhất trong discuss-group của các nhóm\nSQL = '''\nWITH msg_cnt AS ( ------------- > (1) Chain SQL: tạo bảng tạm thời msg_cnt: count số msg theo user, theo channel\n SELECT \n user_id,\n channel_id,\n COUNT(msg_id) AS msg_cnt\n FROM msg_df \n GROUP BY 1, 2\n)\nSELECT\n t2.real_name,\n t3.channel_name,\n t1.msg_cnt,\n t2.DataCracy_role\nFROM msg_cnt AS t1\nJOIN members_df AS t2 ------------ > (2) Join msg_count với members_df để lấy tên và role\nON t1.user_id = t2.user_id\nJOIN channel_df AS t3 --> (3) Join với channel_df để lấy tên channel\nON t1.channel_id = t3.channel_id\nWHERE t2.DataCracy_role LIKE 'Mentor%' ------------- > (4) Filter Mentors\nAND t3.channel_name LIKE 'discuss-group%' ---------- > (5) Filter channel discuss theo các group\nORDER BY t1.msg_cnt DESC ---------- > (6) Sẵp xếp theo số msg từ cao xuống thấp\nLIMIT 3 ------------> (7) Lấy top 3\n'''\ncon.execute(SQL).fetch_df()",
"_____no_output_____"
]
],
[
[
"### TODO#6: SQL\n* Thay đổi các phần trong những đoạn code SQL trên, print kết quả để hiểu về code\n* Tham khảo thêm [SQL Basic Cheatsheet](https://learnsql.com/blog/sql-basics-cheat-sheet/)\n* Và, GOOGLE! + Cùng trao đổi trên Slack\n* Và viết SQL để trả lời các câu hỏi sau:\n```\n 1. Learners groups nào hoạt động tích cực trên Slack nhất? (tính theo message count)?\n Learners nào nộp bài sớm nhất trong Assignment 1, 2, 3?\n 2. Learner nào nộp bài trễ nhất trong Assignment 1, 2, 3?\n 3. Learner nào chưa nộp bài Assignment 3?\n 4. Learner nào chưa nộp bất kỳ 1 assignment nào?\n 5. Tỷ lệ % Learner đã nộp assignment 1, 2, 3? (*giả sử có message trong channel atom-assignment, được tính là đã submit*)\n 6. Tỷ lệ % Learner đã submit bài và dc review trong assignment 1, 2, 3?\n 7. Learners theo Group nào có tỷ lệ % hoàn thành bài tập cao nhất?\n```\n* `[Optional]` Bạn có thể tự đặt thêm bất kỳ câu hỏi nào bạn quan tâm\n\n",
"_____no_output_____"
]
],
[
[
"\nCREATE = '''\nCREATE TABLE assign_df AS\nSELECT \n t1.*,\n t2.name, t2.real_name,t2.DataCracy_role\nFROM msg_df AS t1\nJOIN members_df AS t2\nON t1.user_id = t2.user_id\n'''\ncon.execute(CREATE).fetch_df()",
"_____no_output_____"
],
[
"#1. Learners groups nào hoạt động tích cực trên Slack nhất? (tính theo message count)?\n# Learners nào nộp bài sớm nhất trong Assignment 1, 2, 3?\n\nSQL_1 = '''\nSELECT DataCracy_role, COUNT(msg_id) AS 'msg_cnt'\nFROM members_df AS mb\nJOIN msg_df AS msg\nON mb.user_id = msg.user_id\nWHERE DataCracy_role LIKE 'Learner%' \nGROUP BY DataCracy_role \nORDER BY msg_cnt DESC \nLIMIT 1 \n'''\ncon.execute(SQL_1).fetch_df()",
"_____no_output_____"
],
[
"EARLY_ASSIGN = '''\nWITH a AS (\n SELECT channel_name, msg_ts, real_name, DataCracy_role, github_link\n FROM channel_df b\n JOIN assign_df c\n ON b.channel_id = c.channel_id\n) \nSELECT a.channel_name, a.real_name, a.msg_ts\nFROM a\nINNER JOIN(\n SELECT channel_name, MIN(msg_ts) AS 'Time_assign'\n FROM a\n WHERE DataCracy_role LIKE 'Learner%'\n AND channel_name IN ('atom-assignment1','atom-assignment2','atom-assignment3')\n AND github_link != 'NaN'\n GROUP BY channel_name\n) d\nON a.channel_name = d.channel_name AND a.msg_ts = d.Time_assign\n'''\ncon.execute(EARLY_ASSIGN).fetch_df()",
"_____no_output_____"
],
[
"#2. Learner nào nộp bài trễ nhất trong Assignment 1, 2, 3?\nLATE_ASSIGN = '''\nWITH a AS (\n SELECT channel_name, msg_ts, real_name, DataCracy_role, github_link\n FROM channel_df b\n JOIN assign_df c\n ON b.channel_id = c.channel_id\n) \nSELECT a.channel_name, a.real_name, a.msg_ts\nFROM a\nINNER JOIN(\n SELECT channel_name, MAX(msg_ts) AS 'Time_assign'\n FROM a\n WHERE DataCracy_role LIKE 'Learner%'\n AND channel_name IN ('atom-assignment1','atom-assignment2','atom-assignment3')\n AND github_link != 'NaN'\n GROUP BY channel_name\n) d\nON a.channel_name = d.channel_name AND a.msg_ts = d.Time_assign\n'''\ncon.execute(LATE_ASSIGN).fetch_df()",
"_____no_output_____"
],
[
"#3. Learner nào chưa nộp bài Assignment 3?\nNOT_ASSIGN = '''\nSELECT user_id, real_name\nFROM members_df a\nWHERE a.user_id NOT IN (\n SELECT user_id\n FROM assign_df b\n JOIN channel_df c\n ON b.channel_id = c.channel_id\n WHERE DataCracy_role LIKE 'Learner%'\n AND channel_name = 'atom-assignment3'\n AND github_link != 'NaN'\n)\nAND DataCracy_role LIKE 'Learner%'\n\n'''\ncon.execute(NOT_ASSIGN).fetch_df()",
"_____no_output_____"
],
[
"#4.Learner nào chưa nộp bất kỳ 1 assignment nào?\n# Ở đây có một vấn đề là em đăng nhập vào Slack khác email đăng kí với chương trình, nên email ctr set cho em nó không có hoạt động gì cả\nNOT_ASSIGN_ALL = '''\nSELECT user_id, real_name\nFROM members_df a\nWHERE a.user_id NOT IN (\n SELECT user_id\n FROM assign_df b\n JOIN channel_df c\n ON b.channel_id = c.channel_id\n WHERE DataCracy_role LIKE 'Learner%'\n AND channel_name IN ('atom-assignment1','atom-assignment2','atom-assignment3')\n AND github_link != 'NaN'\n)\nAND DataCracy_role LIKE 'Learner%'\n\n'''\ncon.execute(NOT_ASSIGN_ALL).fetch_df()",
"_____no_output_____"
],
[
"# 5. Tỷ lệ % Learner đã nộp assignment 1, 2, 3?\nASSIGN_PERCENT = '''\nSELECT channel_name, ROUND(num_submit * 100/ num_learner,0) AS '%_submit'\nFROM (SELECT COUNT(user_id) AS num_learner\nFROM members_df\nWHERE DataCracy_Role LIKE 'Learner%') a,\n (SELECT COUNT(DISTINCT user_id) AS num_submit, channel_name\n FROM assign_df b JOIN channel_df c\n ON b.channel_id = c.channel_id\n WHERE DataCracy_role LIKE 'Learner%'\n AND channel_name IN ('atom-assignment1','atom-assignment2','atom-assignment3')\n AND github_link != 'NaN'\n GROUP BY channel_name) b\n \n'''\ncon.execute(ASSIGN_PERCENT).fetch_df()",
"_____no_output_____"
],
[
"# 6.Tỷ lệ % Learner đã submit bài và dc review trong assignment 1, 2, 3?\nREVIEW_PERCENT = '''\nSELECT a.channel_name, ROUND(num_review*100/num_submit) AS '%_review'\nFROM ( --Number of reviews\n SELECT COUNT(reply_user_count) AS num_review, channel_name\n FROM assign_df b JOIN channel_df c\n ON b.channel_id = c.channel_id\n WHERE reply_user_count !=0\n AND DataCracy_role LIKE 'Learner%'\n AND channel_name IN ('atom-assignment1','atom-assignment2','atom-assignment3')\n AND github_link != 'NaN'\n GROUP BY channel_name\n) a\nJOIN ( --Number of submit\n SELECT COUNT(DISTINCT user_id) AS num_submit, channel_name\n FROM assign_df b JOIN channel_df c\n ON b.channel_id = c.channel_id\n WHERE DataCracy_role LIKE 'Learner%'\n AND channel_name IN ('atom-assignment1','atom-assignment2','atom-assignment3')\n AND github_link != 'NaN'\n GROUP BY channel_name\n) b\nON a.channel_name = b.channel_name\n'''\ncon.execute(REVIEW_PERCENT).fetch_df()",
"_____no_output_____"
],
[
"\n# 7. Learners theo Group nào có tỷ lệ % hoàn thành bài tập cao nhất?\nSQL = '''\nWITH msg_cnt AS ( ------------- > (1) Chain SQL: tạo bảng tạm thời msg_cnt: count số msg theo user, theo channel\n SELECT \n user_id,\n channel_id,\n COUNT(msg_id) AS msg_cnt\n FROM msg_df \n GROUP BY 1, 2\n)\nSELECT\n t2.real_name,\n t3.channel_name,\n t1.msg_cnt,\n t2.DataCracy_role\nFROM msg_cnt AS t1\nJOIN members_df AS t2 ------------ > (2) Join msg_count với members_df để lấy tên và role\nON t1.user_id = t2.user_id\nJOIN channel_df AS t3 --> (3) Join với channel_df để lấy tên channel\nON t1.channel_id = t3.channel_id\nWHERE t2.DataCracy_role LIKE 'Mentor%' ------------- > (4) Filter Mentors\nAND t3.channel_name LIKE 'discuss-group%' ---------- > (5) Filter channel discuss theo các group\nORDER BY t1.msg_cnt DESC ---------- > (6) Sẵp xếp theo số msg từ cao xuống thấp\nLIMIT 3 ------------> (7) Lấy top 3\n'''\ncon.execute(SQL).fetch_df()",
"_____no_output_____"
]
],
[
[
"## Pandas vs. SQL \n* Python Pandas và SQL dù là 2 ngôn ngữ khác nhau, nhưng về cách thức thao tác và chuyển đổi data thì như nhau\n* Tuỳ theo từng tình huống cụ thể mà ta sẽ sử dụng Python Pandas hay SQL\n* Nhưng các bước thao tác/khám phá/tổng hợp data căn bản nhất gồm:\n\n| Thao tác | SQL | Python | SpreadSheet |\n|------------|-------------|------|----------|\n| Filter/Selection | WHERE | df['col'] | Filter |\n| Join Data | JOIN | .join() | - |\n| Group Data | GROUP BY | .groupby(col) | Pivot |\n| Summarize | SUM, AVG, MIN, MAX | .sum(), .mean(), .min(), .max() | SUM, MIN, MAX |\n\n* Tìm hiểu cách thao tác bằng Pandas. [Pandas CheatSheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf) => Xem các phần:\n * **Subset(Rows & Columns)**\n * **Summarize Data**: Để làm các phép tổng hợp, thống kê\n * **Group Data**: Tương đương với Pivot trong Excel và `GROUP BY` trong SQL\n * **Combine Data**: Tương đương với `JOIN` trong SQL\n * **Plot**\n",
"_____no_output_____"
]
],
[
[
"## Ví dụ: Đoạn code (Pandas) sau làm\n## 1. Group by channel_id\n## 2. Count các message\n## 3. Sắp xếp theo thứ tự số message từ cao xuống thấp (ascending=False)\n## 4. Filter lấy Top 5\nmsg_df.groupby('channel_id')['msg_id'].count().sort_values(ascending=False).head(5)",
"_____no_output_____"
],
[
"## Kết quả tương ứng bằng SQL\nSQL = '''\n SELECT \n channel_id,\n COUNT(msg_id) AS msg_cnt\n FROM msg_df \n GROUP BY 1\n ORDER BY COUNT(msg_id) DESC\n LIMIT 5\n'''\ncon.execute(SQL).fetch_df()",
"_____no_output_____"
]
],
[
[
"### TODO#7 (Optional): Pandas\n* Thực hiện lại các thao tác trong `TODO#6.2` bằng Python Pandas",
"_____no_output_____"
]
],
[
[
"#2. Learner nào nộp bài trễ nhất trong Assignment 1, 2, 3?\nmembers_df = pd.merge(user_df, dtc_groups, how='inner', on='name')\nassign_df = pd.merge(msg_df, members_df, how='inner', on='user_id')\ndf = pd.merge(channel_df, assign_df, how = 'inner', on='channel_id')\ndf = df[~df['github_link'].isnull()]\ndf = df[df['DataCracy_role'].str.startswith('Learner')]\ndf = df.query('channel_name in [\"atom-assignment1\",\"atom-assignment2\",\"atom-assignment3\"]')\ndf = df[['channel_name','real_name','msg_ts']]\ndf.loc[df.groupby('channel_name').msg_ts.idxmax()]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbf446832d8a2e3915b03357e5ab533a688bbb4 | 239,287 | ipynb | Jupyter Notebook | 19 - Credit Risk Modeling in Python/5_PD model: data preparation/15_Data preparation. Preprocessing continuous variables: creating dummies. Homework/Credit%20Risk%20Modeling%20-%20Preparation%20-%20With%20Comments%20-%205-15.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 19 - Credit Risk Modeling in Python/5_PD model: data preparation/15_Data preparation. Preprocessing continuous variables: creating dummies. Homework/Credit%20Risk%20Modeling%20-%20Preparation%20-%20With%20Comments%20-%205-15.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 19 - Credit Risk Modeling in Python/5_PD model: data preparation/15_Data preparation. Preprocessing continuous variables: creating dummies. Homework/Credit%20Risk%20Modeling%20-%20Preparation%20-%20With%20Comments%20-%205-15.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 938.380392 | 136,520 | 0.790135 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecbf44f430efc99661896e07012ea96e6949e12f | 52,449 | ipynb | Jupyter Notebook | Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb | Yancheng-Li-PHYS/nrpytutorial | 73b706c7f7e80ba22dd563735c0a7452c82c5245 | [
"BSD-2-Clause"
] | 1 | 2019-12-23T05:31:25.000Z | 2019-12-23T05:31:25.000Z | Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb | Yancheng-Li-PHYS/nrpytutorial | 73b706c7f7e80ba22dd563735c0a7452c82c5245 | [
"BSD-2-Clause"
] | null | null | null | Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb | Yancheng-Li-PHYS/nrpytutorial | 73b706c7f7e80ba22dd563735c0a7452c82c5245 | [
"BSD-2-Clause"
] | null | null | null | 49.017757 | 607 | 0.53921 | [
[
[
"<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-59152712-8\"></script>\n<script>\n window.dataLayer = window.dataLayer || [];\n function gtag(){dataLayer.push(arguments);}\n gtag('js', new Date());\n\n gtag('config', 'UA-59152712-8');\n</script>\n\n# The BSSN Time-Evolution Equations\n\n## Author: Zach Etienne\n### Formatting improvements courtesy Brandon Clark\n\n[comment]: <> (Abstract: TODO)\n\n**Module Status:** <font color='green'><b> Validated </b></font>\n\n**Validation Notes:** All expressions generated in this module have been validated against a trusted code (the original NRPy+/SENR code, which itself was validated against [Baumgarte's code](https://arxiv.org/abs/1211.6632)).\n\n### NRPy+ Source Code for this module: [BSSN/BSSN_RHS.py](../edit/BSSN/BSSN_RHSs.py)\n\n## Introduction:\nThis module documents and constructs the time evolution equations of the BSSN formulation of Einstein's equations, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). \n\n**This module is part of the following set of NRPy+ tutorial modules on the BSSN formulation of general relativity:**\n\n* An overview of the BSSN formulation of Einstein's equations, as well as links for background reading/lectures, are provided in [the NRPy+ tutorial module on the BSSN formulation](Tutorial-BSSN_formulation.ipynb).\n* Basic BSSN quantities are defined in the [BSSN quantities NRPy+ tutorial module](Tutorial-BSSN_quantities.ipynb).\n* Other BSSN equation tutorial modules:\n * [Time-evolution equations the BSSN gauge quantities $\\alpha$, $\\beta^i$, and $B^i$](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb).\n * [BSSN Hamiltonian and momentum constraints](Tutorial-BSSN_constraints.ipynb)\n * [Enforcing the $\\bar{\\gamma} = \\hat{\\gamma}$ constraint](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)\n\n### A Note on Notation\n\nAs is standard in NRPy+, \n\n* Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.\n* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.\n\nAs a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial module).",
"_____no_output_____"
],
[
"<a id='toc'></a>\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis module is organized as follows\n\n0. [Preliminaries](#bssntimeevolequations): BSSN time-evolution equations, as described in the [BSSN formulation NRPy+ tutorial module](Tutorial-BSSN_formulation.ipynb)\n1. [Step 1](#initializenrpy): Initialize core Python/NRPy+ modules\n1. [Step 2](#gammabar): Right-hand side of $\\partial_t \\bar{\\gamma}_{ij}$\n 1. [Step 2.a](#term1_partial_gamma): Term 1 of $\\partial_t \\bar{\\gamma}_{i j}$\n 1. [Step 2.b](#term2_partial_gamma): Term 2 of $\\partial_t \\bar{\\gamma}_{i j}$\n 1. [Step 2.c](#term3_partial_gamma): Term 3 of $\\partial_t \\bar{\\gamma}_{i j}$\n1. [Step 3](#abar): Right-hand side of $\\partial_t \\bar{A}_{ij}$\n 1. [Step 3.a](#term1_partial_upper_a): Term 1 of $\\partial_t \\bar{A}_{i j}$\n 1. [Step 3.c](#term2_partial_upper_a): Term 2 of $\\partial_t \\bar{A}_{i j}$\n 1. [Step 3.c](#term3_partial_upper_a): Term 3 of $\\partial_t \\bar{A}_{i j}$\n1. [Step 4](#cf): Right-hand side of $\\partial_t \\phi \\to \\partial_t (\\text{cf})$\n1. [Step 5](#trk): Right-hand side of $\\partial_t \\text{tr} K$\n1. [Step 6](#lambdabar): Right-hand side of $\\partial_t \\bar{\\Lambda}^i$\n 1. [Step 6.a](#term1_partial_lambda): Term 1 of $\\partial_t \\bar{\\Lambda}^i$\n 1. [Step 6.b](#term2_partial_lambda): Term 2 of $\\partial_t \\bar{\\Lambda}^i$\n 1. [Step 6.c](#term3_partial_lambda): Term 3 of $\\partial_t \\bar{\\Lambda}^i$\n 1. [Step 6.d](#term4_partial_lambda): Term 4 of $\\partial_t \\bar{\\Lambda}^i$\n 1. [Step 6.e](#term5_partial_lambda): Term 5 of $\\partial_t \\bar{\\Lambda}^i$\n 1. [Step 6.f](#term6_partial_lambda): Term 6 of $\\partial_t \\bar{\\Lambda}^i$\n 1. [Step 6.g](#term7_partial_lambda): Term 7 of $\\partial_t \\bar{\\Lambda}^i$\n1. [Step 7](#rescalingrhss): Rescaling the BSSN right-hand sides; rewriting them in terms of the rescaled quantities $\\left\\{h_{i j},a_{i j},\\text{cf}, K, \\lambda^{i}, \\alpha, \\mathcal{V}^i, \\mathcal{B}^i\\right\\}$\n1. [Step 8](#code_validation): Code Validation against `BSSN.BSSN_RHSs` NRPy+ module\n1. [Step 9](#latex_pdf_output): Output this module to $\\LaTeX$-formatted PDF",
"_____no_output_____"
],
[
"<a id='bssntimeevolequations'></a>\n\n# Preliminaries: BSSN time-evolution equations \\[Back to [top](#toc)\\]\n$$\\label{bssntimeevolequations}$$\n\nAs described in the [BSSN formulation NRPy+ tutorial module](Tutorial-BSSN_formulation.ipynb), the BSSN time-evolution equations are given by\n\n\\begin{align}\n \\partial_t \\bar{\\gamma}_{i j} {} = {} & \\left[\\beta^k \\partial_k \\bar{\\gamma}_{ij} + \\partial_i \\beta^k \\bar{\\gamma}_{kj} + \\partial_j \\beta^k \\bar{\\gamma}_{ik} \\right] + \\frac{2}{3} \\bar{\\gamma}_{i j} \\left (\\alpha \\bar{A}_{k}^{k} - \\bar{D}_{k} \\beta^{k}\\right ) - 2 \\alpha \\bar{A}_{i j} \\; , \\\\\n \\partial_t \\bar{A}_{i j} {} = {} & \\left[\\beta^k \\partial_k \\bar{A}_{ij} + \\partial_i \\beta^k \\bar{A}_{kj} + \\partial_j \\beta^k \\bar{A}_{ik} \\right] - \\frac{2}{3} \\bar{A}_{i j} \\bar{D}_{k} \\beta^{k} - 2 \\alpha \\bar{A}_{i k} {\\bar{A}^{k}}_{j} + \\alpha \\bar{A}_{i j} K \\nonumber \\\\\n & + e^{-4 \\phi} \\left \\{-2 \\alpha \\bar{D}_{i} \\bar{D}_{j} \\phi + 4 \\alpha \\bar{D}_{i} \\phi \\bar{D}_{j} \\phi + 4 \\bar{D}_{(i} \\alpha \\bar{D}_{j)} \\phi - \\bar{D}_{i} \\bar{D}_{j} \\alpha + \\alpha \\bar{R}_{i j} \\right \\}^{\\text{TF}} \\; , \\\\\n \\partial_t \\phi {} = {} & \\left[\\beta^k \\partial_k \\phi \\right] + \\frac{1}{6} \\left (\\bar{D}_{k} \\beta^{k} - \\alpha K \\right ) \\; , \\\\\n \\partial_{t} K {} = {} & \\left[\\beta^k \\partial_k K \\right] + \\frac{1}{3} \\alpha K^{2} + \\alpha \\bar{A}_{i j} \\bar{A}^{i j} - e^{-4 \\phi} \\left (\\bar{D}_{i} \\bar{D}^{i} \\alpha + 2 \\bar{D}^{i} \\alpha \\bar{D}_{i} \\phi \\right ) \\; , \\\\\n \\partial_t \\bar{\\Lambda}^{i} {} = {} & \\left[\\beta^k \\partial_k \\bar{\\Lambda}^i - \\partial_k \\beta^i \\bar{\\Lambda}^k \\right] + \\bar{\\gamma}^{j k} \\hat{D}_{j} \\hat{D}_{k} \\beta^{i} + \\frac{2}{3} \\Delta^{i} \\bar{D}_{j} \\beta^{j} + \\frac{1}{3} \\bar{D}^{i} \\bar{D}_{j} \\beta^{j} \\nonumber \\\\\n & - 2 \\bar{A}^{i j} \\left (\\partial_{j} \\alpha - 6 \\partial_{j} \\phi \\right ) + 2 \\alpha \\bar{A}^{j k} \\Delta_{j k}^{i} -\\frac{4}{3} \\alpha \\bar{\\gamma}^{i j} \\partial_{j} K\n\\end{align}\n\nwhere the Lie derivative terms (often seen on the left-hand side of these equations) are enclosed in square braces.\n\nNotice that the shift advection operator $\\beta^k \\partial_k \\left\\{\\bar{\\gamma}_{i j},\\bar{A}_{i j},\\phi, K, \\bar{\\Lambda}^{i}\\right\\}$ appears on the right-hand side of *every* expression. As the shift determines how the spatial coordinates $x^i$ move on the next 3D slice of our 4D manifold, we find that representing $\\partial_k$ in these shift advection terms via an *upwinded* finite difference stencil results in far lower numerical errors. This trick is implemented below in all shift advection terms. Upwinded derivatives are indicated in NRPy+ by the `_dupD` variable suffix.\n\n\nAs discussed in the [NRPy+ tutorial module on BSSN quantities](Tutorial-BSSN_quantities.ipynb), tensorial expressions can diverge at coordinate singularities, so each tensor in the set of BSSN variables\n\n$$\\left\\{\\bar{\\gamma}_{i j},\\bar{A}_{i j},\\phi, K, \\bar{\\Lambda}^{i}, \\alpha, \\beta^i, B^i\\right\\},$$\n\nis written in terms of the corresponding rescaled quantity in the set\n\n$$\\left\\{h_{i j},a_{i j},\\text{cf}, K, \\lambda^{i}, \\alpha, \\mathcal{V}^i, \\mathcal{B}^i\\right\\},$$ \n\nrespectively, as defined in the [BSSN quantities tutorial](Tutorial-BSSN_quantities.ipynb). ",
"_____no_output_____"
],
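To make the upwinding remark above concrete, here is a minimal schematic sketch (an illustration only — NRPy+ itself generates higher-order upwinded stencils for the `_dupD` derivatives; the grid, test function, and shift values below are made up): a first-order upwind approximation of a shift-advection term $\beta \partial_x u$ simply biases the one-sided finite difference toward the direction in which the shift points.

```python
# Schematic first-order upwind stencil for a shift-advection term beta * du/dx
# on a uniform, periodic 1D grid (np.roll wraps around at the boundaries).
import numpy as np

def upwind_advection_term(u, beta, dx):
    """Return beta * du/dx, biasing the one-sided difference toward the shift."""
    dudx_forward  = (np.roll(u, -1) - u) / dx  # uses the neighbor in the +x direction
    dudx_backward = (u - np.roll(u, +1)) / dx  # uses the neighbor in the -x direction
    # Where beta > 0, take the stencil shifted in the +x direction; else the -x one.
    return beta * np.where(beta > 0, dudx_forward, dudx_backward)

x    = np.linspace(0.0, 1.0, 101)
u    = np.sin(2.0 * np.pi * x)      # made-up test function
beta = 0.3 * np.ones_like(x)        # made-up, constant positive shift component
print(upwind_advection_term(u, beta, x[1] - x[0])[:3])
```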
[
"<a id='initializenrpy'></a>\n\n# Step 1: Initialize core Python/NRPy+ modules \\[Back to [top](#toc)\\]\n$$\\label{initializenrpy}$$\n\nLet's start by importing all the needed modules from NRPy+:",
"_____no_output_____"
]
],
[
[
"# Step 1.a: import all needed modules from Python/NRPy+:\nimport sympy as sp\nimport NRPy_param_funcs as par\nimport indexedexp as ixp\nimport grid as gri\nimport finite_difference as fin\nimport reference_metric as rfm\n\n# Step 1.b: Set the coordinate system for the numerical grid\npar.set_parval_from_str(\"reference_metric::CoordSystem\",\"Spherical\")\n\n# Step 1.c: Given the chosen coordinate system, set up \n# corresponding reference metric and needed\n# reference metric quantities\n# The following function call sets up the reference metric\n# and related quantities, including rescaling matrices ReDD,\n# ReU, and hatted quantities.\nrfm.reference_metric()\n\n# Step 1.d: Set spatial dimension (must be 3 for BSSN, as BSSN is \n# a 3+1-dimensional decomposition of the general \n# relativistic field equations)\nDIM = 3\n\n# Step 1.e: Import all basic (unrescaled) BSSN scalars & tensors\nimport BSSN.BSSN_quantities as Bq\nBq.BSSN_basic_tensors()\ngammabarDD = Bq.gammabarDD\nAbarDD = Bq.AbarDD\nLambdabarU = Bq.LambdabarU\ntrK = Bq.trK\nalpha = Bq.alpha\nbetaU = Bq.betaU\n\n# Step 1.f: Import all neeeded rescaled BSSN tensors:\naDD = Bq.aDD\ncf = Bq.cf\nlambdaU = Bq.lambdaU",
"_____no_output_____"
]
],
[
[
"<a id='gammabar'></a>\n\n# Step 2: Right-hand side of $\\partial_t \\bar{\\gamma}_{ij}$ \\[Back to [top](#toc)\\]\n$$\\label{gammabar}$$\n\nLet's start with\n\n$$\n\\partial_t \\bar{\\gamma}_{i j} = \n{\\underbrace {\\textstyle \\left[\\beta^k \\partial_k \\bar{\\gamma}_{ij} + \\partial_i \\beta^k \\bar{\\gamma}_{kj} + \\partial_j \\beta^k \\bar{\\gamma}_{ik} \\right]}_{\\text{Term 1}}} + \n{\\underbrace {\\textstyle \\frac{2}{3} \\bar{\\gamma}_{i j} \\left (\\alpha \\bar{A}_{k}^{k} - \\bar{D}_{k} \\beta^{k}\\right )}_{\\text{Term 2}}}\n{\\underbrace {\\textstyle -2 \\alpha \\bar{A}_{i j}}_{\\text{Term 3}}}.\n$$",
"_____no_output_____"
],
[
"<a id='term1_partial_gamma'></a>\n\n## Step 2.a: Term 1 of $\\partial_t \\bar{\\gamma}_{i j}$ \\[Back to [top](#toc)\\] \n$$\\label{term1_partial_gamma}$$\n\nTerm 1 of $\\partial_t \\bar{\\gamma}_{i j} =$ `gammabar_rhsDD[i][j]`: $\\beta^k \\bar{\\gamma}_{ij,k} + \\beta^k_{,i} \\bar{\\gamma}_{kj} + \\beta^k_{,j} \\bar{\\gamma}_{ik}$\n\n\nFirst we import derivative expressions for betaU defined in the [NRPy+ BSSN quantities tutorial module](Tutorial-BSSN_quantities.ipynb)",
"_____no_output_____"
]
],
[
[
"# Step 2.a.i: Import derivative expressions for betaU defined in the BSSN.BSSN_quantities module:\nBq.betaU_derivs()\nbetaU_dD = Bq.betaU_dD\nbetaU_dupD = Bq.betaU_dupD\nbetaU_dDD = Bq.betaU_dDD\n# Step 2.a.ii: Import derivative expression for gammabarDD\nBq.gammabar__inverse_and_derivs()\ngammabarDD_dupD = Bq.gammabarDD_dupD\n\n# Step 2.a.iii: First term of \\partial_t \\bar{\\gamma}_{i j} right-hand side:\n# \\beta^k \\bar{\\gamma}_{ij,k} + \\beta^k_{,i} \\bar{\\gamma}_{kj} + \\beta^k_{,j} \\bar{\\gamma}_{ik}\ngammabar_rhsDD = ixp.zerorank2()\nfor i in range(DIM):\n for j in range(DIM):\n for k in range(DIM):\n gammabar_rhsDD[i][j] += betaU[k]*gammabarDD_dupD[i][j][k] + betaU_dD[k][i]*gammabarDD[k][j] \\\n + betaU_dD[k][j]*gammabarDD[i][k]",
"_____no_output_____"
]
],
[
[
"<a id='term2_partial_gamma'></a>\n\n## Step 2.b: Term 2 of $\\partial_t \\bar{\\gamma}_{i j}$ \\[Back to [top](#toc)\\]\n$$\\label{term2_partial_gamma}$$\n\nTerm 2 of $\\partial_t \\bar{\\gamma}_{i j} =$ `gammabar_rhsDD[i][j]`: $\\frac{2}{3} \\bar{\\gamma}_{i j} \\left (\\alpha \\bar{A}_{k}^{k} - \\bar{D}_{k} \\beta^{k}\\right )$\n\nLet's first convert this expression to be in terms of the evolved variables $a_{ij}$ and $\\mathcal{B}^i$, starting with $\\bar{A}_{ij} = a_{ij} \\text{ReDD[i][j]}$. Then $\\bar{A}^k_{k} = \\bar{\\gamma}^{ij} \\bar{A}_{ij}$, and we have already defined $\\bar{\\gamma}^{ij}$ in terms of the evolved quantity $h_{ij}$.\n\nNext, we wish to compute \n\n$$\\bar{D}_{k} \\beta^{k} = \\beta^k_{,k} + \\frac{\\beta^k \\bar{\\gamma}_{,k}}{2 \\bar{\\gamma}},$$\n\nwhere $\\bar{\\gamma}$ is the determinant of the conformal metric $\\bar{\\gamma}_{ij}$. ***Exercise to student: Prove the above relation.***\n[Solution.](https://physics.stackexchange.com/questions/81453/general-relativity-christoffel-symbol-identity)\n\nUsually (i.e., so long as we make the parameter choice `detgbarOverdetghat_equals_one = False` ) we will choose $\\bar{\\gamma}=\\hat{\\gamma}$, so $\\bar{\\gamma}$ will in general possess coordinate singularities. Thus we would prefer to rewrite derivatives of $\\bar{\\gamma}$ in terms of derivatives of $\\bar{\\gamma}/\\hat{\\gamma} = 1$.",
"_____no_output_____"
]
],
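A sketch of the proof requested in the exercise above (an aside, not part of the original module): writing $\bar{D}_k \beta^k = \partial_k \beta^k + \bar{\Gamma}^k_{km}\beta^m$ and using Jacobi's formula $\partial_m \bar{\gamma} = \bar{\gamma}\,\bar{\gamma}^{kl}\partial_m \bar{\gamma}_{kl}$ for the determinant $\bar{\gamma}$,

$$\bar{\Gamma}^k_{km} = \frac{1}{2}\bar{\gamma}^{kl}\left(\partial_m \bar{\gamma}_{kl} + \partial_k \bar{\gamma}_{ml} - \partial_l \bar{\gamma}_{km}\right) = \frac{1}{2}\bar{\gamma}^{kl}\partial_m \bar{\gamma}_{kl} = \frac{\partial_m \bar{\gamma}}{2\bar{\gamma}},$$

where the last two terms in parentheses cancel after relabeling the dummy indices $k \leftrightarrow l$. Substituting back gives $\bar{D}_k \beta^k = \beta^k_{,k} + \beta^k\bar{\gamma}_{,k}/(2\bar{\gamma})$, as claimed.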
[
[
"# Step 2.b.i: First import \\bar{A}_{ij} = AbarDD[i][j], and its contraction trAbar = \\bar{A}^k_k\n# from BSSN.BSSN_quantities\nBq.AbarUU_AbarUD_trAbar_AbarDD_dD()\ntrAbar = Bq.trAbar\n\n# Step 2.b.ii: Import detgammabar quantities from BSSN.BSSN_quantities:\nBq.detgammabar_and_derivs()\ndetgammabar = Bq.detgammabar\ndetgammabar_dD = Bq.detgammabar_dD\n\n# Step 2.b.ii: Compute the contraction \\bar{D}_k \\beta^k = \\beta^k_{,k} + \\frac{\\beta^k \\bar{\\gamma}_{,k}}{2 \\bar{\\gamma}}\nDbarbetacontraction = sp.sympify(0)\nfor k in range(DIM):\n Dbarbetacontraction += betaU_dD[k][k] + betaU[k]*detgammabar_dD[k]/(2*detgammabar)\n\n# Step 2.b.iii: Second term of \\partial_t \\bar{\\gamma}_{i j} right-hand side:\n# \\frac{2}{3} \\bar{\\gamma}_{i j} \\left (\\alpha \\bar{A}_{k}^{k} - \\bar{D}_{k} \\beta^{k}\\right )\nfor i in range(DIM):\n for j in range(DIM):\n gammabar_rhsDD[i][j] += sp.Rational(2,3)*gammabarDD[i][j]*(alpha*trAbar - Dbarbetacontraction)",
"_____no_output_____"
]
],
[
[
"<a id='term3_partial_gamma'></a>\n\n## Step 2.c: Term 3 of $\\partial_t \\bar{\\gamma}_{i j}$ \\[Back to [top](#toc)\\] \n$$\\label{term3_partial_gamma}$$\n\nTerm 3 of $\\partial_t \\bar{\\gamma}_{i j}$ = `gammabar_rhsDD[i][j]`: $-2 \\alpha \\bar{A}_{ij}$\n",
"_____no_output_____"
]
],
[
[
"# Step 2.c: Third term of \\partial_t \\bar{\\gamma}_{i j} right-hand side:\n# -2 \\alpha \\bar{A}_{ij}\nfor i in range(DIM):\n for j in range(DIM):\n gammabar_rhsDD[i][j] += -2*alpha*AbarDD[i][j]",
"_____no_output_____"
]
],
[
[
"<a id='abar'></a>\n\n# Step 3: Right-hand side of $\\partial_t \\bar{A}_{ij}$ \\[Back to [top](#toc)\\]\n$$\\label{abar}$$\n\n$$\\partial_t \\bar{A}_{i j} = \n{\\underbrace {\\textstyle \\left[\\beta^k \\partial_k \\bar{A}_{ij} + \\partial_i \\beta^k \\bar{A}_{kj} + \\partial_j \\beta^k \\bar{A}_{ik} \\right]}_{\\text{Term 1}}}\n{\\underbrace {\\textstyle - \\frac{2}{3} \\bar{A}_{i j} \\bar{D}_{k} \\beta^{k} - 2 \\alpha \\bar{A}_{i k} {\\bar{A}^{k}}_{j} + \\alpha \\bar{A}_{i j} K}_{\\text{Term 2}}} + \n{\\underbrace {\\textstyle e^{-4 \\phi} \\left \\{-2 \\alpha \\bar{D}_{i} \\bar{D}_{j} \\phi + 4 \\alpha \\bar{D}_{i} \\phi \\bar{D}_{j} \\phi + 4 \\bar{D}_{(i} \\alpha \\bar{D}_{j)} \\phi - \\bar{D}_{i} \\bar{D}_{j} \\alpha + \\alpha \\bar{R}_{i j} \\right \\}^{\\text{TF}}}_{\\text{Term 3}}}$$",
"_____no_output_____"
],
[
"<a id='term1_partial_upper_a'></a>\n\n## Step 3.a: Term 1 of $\\partial_t \\bar{A}_{i j}$ \\[Back to [top](#toc)\\]\n$$\\label{term1_partial_upper_a}$$\n\nTerm 1 of $\\partial_t \\bar{A}_{i j}$ = `Abar_rhsDD[i][j]`: $\\left[\\beta^k \\partial_k \\bar{A}_{ij} + \\partial_i \\beta^k \\bar{A}_{kj} + \\partial_j \\beta^k \\bar{A}_{ik} \\right]$\n\n\nNotice the first subexpression has a $\\beta^k \\partial_k A_{ij}$ advection term, which will be upwinded.",
"_____no_output_____"
]
],
[
[
"# Step 3.a: First term of \\partial_t \\bar{A}_{i j}:\n# \\beta^k \\partial_k \\bar{A}_{ij} + \\partial_i \\beta^k \\bar{A}_{kj} + \\partial_j \\beta^k \\bar{A}_{ik}\n\nAbarDD_dupD = Bq.AbarDD_dupD # From Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()\n\nAbar_rhsDD = ixp.zerorank2()\nfor i in range(DIM):\n for j in range(DIM):\n for k in range(DIM):\n Abar_rhsDD[i][j] += betaU[k]*AbarDD_dupD[i][j][k] + betaU_dD[k][i]*AbarDD[k][j] \\\n + betaU_dD[k][j]*AbarDD[i][k]",
"_____no_output_____"
]
],
[
[
"<a id='term2_partial_upper_a'></a>\n\n## Step 3.b: Term 2 of $\\partial_t \\bar{A}_{i j}$ \\[Back to [top](#toc)\\]\n$$\\label{term2_partial_upper_a}$$\n\nTerm 2 of $\\partial_t \\bar{A}_{i j}$ = `Abar_rhsDD[i][j]`: $- \\frac{2}{3} \\bar{A}_{i j} \\bar{D}_{k} \\beta^{k} - 2 \\alpha \\bar{A}_{i k} \\bar{A}^{k}_{j} + \\alpha \\bar{A}_{i j} K$\n\n\nNote that $\\bar{D}_{k} \\beta^{k}$ was already defined as `Dbarbetacontraction`.",
"_____no_output_____"
]
],
[
[
"# Step 3.b: Second term of \\partial_t \\bar{A}_{i j}:\n# - (2/3) \\bar{A}_{i j} \\bar{D}_{k} \\beta^{k} - 2 \\alpha \\bar{A}_{i k} {\\bar{A}^{k}}_{j} + \\alpha \\bar{A}_{i j} K\ngammabarUU = Bq.gammabarUU # From Bq.gammabar__inverse_and_derivs()\nAbarUD = Bq.AbarUD # From Bq.AbarUU_AbarUD_trAbar()\nfor i in range(DIM):\n for j in range(DIM):\n Abar_rhsDD[i][j] += -sp.Rational(2,3)*AbarDD[i][j]*Dbarbetacontraction + alpha*AbarDD[i][j]*trK\n for k in range(DIM):\n Abar_rhsDD[i][j] += -2*alpha * AbarDD[i][k]*AbarUD[k][j]",
"_____no_output_____"
]
],
[
[
"<a id='term3_partial_upper_a'></a>\n\n## Step 3.c: Term 3 of $\\partial_t \\bar{A}_{i j}$ \\[Back to [top](#toc)\\]\n$$\\label{term3_partial_upper_a}$$\n\n\nTerm 3 of $\\partial_t \\bar{A}_{i j}$ = `Abar_rhsDD[i][j]`: $e^{-4 \\phi} \\left \\{-2 \\alpha \\bar{D}_{i} \\bar{D}_{j} \\phi + 4 \\alpha \\bar{D}_{i} \\phi \\bar{D}_{j} \\phi + 4 \\bar{D}_{(i} \\alpha \\bar{D}_{j)} \\phi - \\bar{D}_{i} \\bar{D}_{j} \\alpha + \\alpha \\bar{R}_{i j} \\right \\}^{\\text{TF}}$ ",
"_____no_output_____"
],
[
"The first covariant derivatives of $\\phi$ and $\\alpha$ are simply partial derivatives. However, $\\phi$ is not a gridfunction; `cf` is. cf = $W$ (default value) denotes that the evolved variable is $W=e^{-2 \\phi}$, which results in smoother spacetime fields around puncture black holes (desirable).",
"_____no_output_____"
]
],
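As a brief aside (not in the original text), the substitutions performed by `Bq.phi_and_derivs()` amount to a simple chain rule once the evolved conformal variable is fixed; e.g. for the default choice $\text{cf} = W = e^{-2\phi}$,

$$\phi = -\frac{1}{2}\ln(\text{cf}), \qquad \partial_i \phi = -\frac{\partial_i \text{cf}}{2\,\text{cf}}, \qquad e^{-4\phi} = \text{cf}^2,$$

with analogous expressions for the second derivatives and for the other supported choices of cf.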
[
[
"# Step 3.c.i: Define partial derivatives of \\phi in terms of evolved quantity \"cf\":\nBq.phi_and_derivs()\nphi_dD = Bq.phi_dD\nphi_dupD = Bq.phi_dupD\nphi_dDD = Bq.phi_dDD\nexp_m4phi = Bq.exp_m4phi\nphi_dBarD = Bq.phi_dBarD # phi_dBarD = Dbar_i phi = phi_dD (since phi is a scalar) \nphi_dBarDD = Bq.phi_dBarDD # phi_dBarDD = Dbar_i Dbar_j phi (covariant derivative)\n\n# Step 3.c.ii: Define RbarDD\nBq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()\nRbarDD = Bq.RbarDD\n\n# Step 3.c.iii: Define first and second derivatives of \\alpha, as well as \n# \\bar{D}_i \\bar{D}_j \\alpha, which is defined just like phi\nalpha_dD = ixp.declarerank1(\"alpha_dD\")\nalpha_dDD = ixp.declarerank2(\"alpha_dDD\",\"sym01\")\nalpha_dBarD = alpha_dD\nalpha_dBarDD = ixp.zerorank2()\nGammabarUDD = Bq.GammabarUDD # Defined in Bq.gammabar__inverse_and_derivs()\nfor i in range(DIM):\n for j in range(DIM):\n alpha_dBarDD[i][j] = alpha_dDD[i][j]\n for k in range(DIM):\n alpha_dBarDD[i][j] += - GammabarUDD[k][i][j]*alpha_dD[k]\n\n# Step 3.c.iv: Define the terms in curly braces:\ncurlybrackettermsDD = ixp.zerorank2()\nfor i in range(DIM):\n for j in range(DIM):\n curlybrackettermsDD[i][j] = -2*alpha*phi_dBarDD[i][j] + 4*alpha*phi_dBarD[i]*phi_dBarD[j] \\\n +2*alpha_dBarD[i]*phi_dBarD[j] \\\n +2*alpha_dBarD[j]*phi_dBarD[i] \\\n -alpha_dBarDD[i][j] + alpha*RbarDD[i][j]\n\n# Step 3.c.v: Compute the trace:\ncurlybracketterms_trace = sp.sympify(0)\nfor i in range(DIM):\n for j in range(DIM):\n curlybracketterms_trace += gammabarUU[i][j]*curlybrackettermsDD[i][j]\n\n# Step 3.c.vi: Third and final term of Abar_rhsDD[i][j]:\nfor i in range(DIM):\n for j in range(DIM):\n Abar_rhsDD[i][j] += exp_m4phi*(curlybrackettermsDD[i][j] - \\\n sp.Rational(1,3)*gammabarDD[i][j]*curlybracketterms_trace) ",
"_____no_output_____"
]
],
[
[
"<a id='cf'></a>\n\n# Step 4: Right-hand side of $\\partial_t \\phi \\to \\partial_t (\\text{cf})$ \\[Back to [top](#toc)\\]\n$$\\label{cf}$$\n\n$$\\partial_t \\phi = \n{\\underbrace {\\textstyle \\left[\\beta^k \\partial_k \\phi \\right]}_{\\text{Term 1}}} + \n{\\underbrace {\\textstyle \\frac{1}{6} \\left (\\bar{D}_{k} \\beta^{k} - \\alpha K \\right)}_{\\text{Term 2}}}$$\n\nThe right-hand side of $\\partial_t \\phi$ is trivial except for the fact that the actual evolved variable is `cf` (short for conformal factor), which could represent\n* cf = $\\phi$\n* cf = $W = e^{-2 \\phi}$ (default)\n* cf = $\\chi = e^{-4 \\phi}$\n\nThus we are actually computing the right-hand side of the equation $\\partial_t $cf, which is related to $\\partial_t \\phi$ via simple relations:\n* cf = $\\phi$: $\\partial_t $cf$ = \\partial_t \\phi$ (unchanged)\n* cf = $W$: $\\partial_t $cf$ = \\partial_t (e^{-2 \\phi}) = -2 e^{-2\\phi}\\partial_t \\phi = -2 W \\partial_t \\phi$. Thus we need to multiply the right-hand side by $-2 W = -2$cf when cf = $W$.\n* cf = $\\chi$: Same argument as for $W$, except the right-hand side must be multiplied by $-4 \\chi=-4$cf.",
"_____no_output_____"
]
],
[
[
"# Step 4: Right-hand side of conformal factor variable \"cf\". Supported\n# options include: cf=phi, cf=W=e^(-2*phi) (default), and cf=chi=e^(-4*phi)\n# \\partial_t phi = \\left[\\beta^k \\partial_k \\phi \\right] <- TERM 1\n# + \\frac{1}{6} \\left (\\bar{D}_{k} \\beta^{k} - \\alpha K \\right ) <- TERM 2\n\ncf_rhs = sp.Rational(1,6) * (Dbarbetacontraction - alpha*trK) # Term 2\nfor k in range(DIM):\n cf_rhs += betaU[k]*phi_dupD[k] # Term 1\n\n# Next multiply to convert phi_rhs to cf_rhs.\nif par.parval_from_str(\"BSSN.BSSN_quantities::EvolvedConformalFactor_cf\") == \"phi\":\n pass # do nothing; cf_rhs = phi_rhs\nelif par.parval_from_str(\"BSSN.BSSN_quantities::EvolvedConformalFactor_cf\") == \"W\":\n cf_rhs *= -2*cf # cf_rhs = -2*cf*phi_rhs\nelif par.parval_from_str(\"BSSN.BSSN_quantities::EvolvedConformalFactor_cf\") == \"chi\":\n cf_rhs *= -4*cf # cf_rhs = -4*cf*phi_rhs\nelse:\n print(\"Error: EvolvedConformalFactor_cf == \"+\n par.parval_from_str(\"BSSN.BSSN_quantities::EvolvedConformalFactor_cf\")+\" unsupported!\")\n exit(1) ",
"_____no_output_____"
]
],
[
[
"<a id='trk'></a>\n\n# Step 5: Right-hand side of $\\partial_t K$ \\[Back to [top](#toc)\\]\n$$\\label{trk}$$\n\n$$\n\\partial_{t} K = \n{\\underbrace {\\textstyle \\left[\\beta^i \\partial_i K \\right]}_{\\text{Term 1}}} + \n{\\underbrace {\\textstyle \\frac{1}{3} \\alpha K^{2}}_{\\text{Term 2}}} +\n{\\underbrace {\\textstyle \\alpha \\bar{A}_{i j} \\bar{A}^{i j}}_{\\text{Term 3}}}\n{\\underbrace {\\textstyle - e^{-4 \\phi} \\left (\\bar{D}_{i} \\bar{D}^{i} \\alpha + 2 \\bar{D}^{i} \\alpha \\bar{D}_{i} \\phi \\right )}_{\\text{Term 4}}}\n$$",
"_____no_output_____"
]
],
[
[
"# Step 5: right-hand side of trK (trace of extrinsic curvature):\n# \\partial_t K = \\beta^k \\partial_k K <- TERM 1\n# + \\frac{1}{3} \\alpha K^{2} <- TERM 2\n# + \\alpha \\bar{A}_{i j} \\bar{A}^{i j} <- TERM 3\n# - - e^{-4 \\phi} (\\bar{D}_{i} \\bar{D}^{i} \\alpha + 2 \\bar{D}^{i} \\alpha \\bar{D}_{i} \\phi ) <- TERM 4\n# TERM 2:\ntrK_rhs = sp.Rational(1,3)*alpha*trK*trK\ntrK_dupD = ixp.declarerank1(\"trK_dupD\")\nfor i in range(DIM):\n # TERM 1:\n trK_rhs += betaU[i]*trK_dupD[i]\nfor i in range(DIM):\n for j in range(DIM):\n # TERM 4:\n trK_rhs += -exp_m4phi*gammabarUU[i][j]*(alpha_dBarDD[i][j] + 2*alpha_dBarD[j]*phi_dBarD[i]) \nAbarUU = Bq.AbarUU # From Bq.AbarUU_AbarUD_trAbar()\nfor i in range(DIM):\n for j in range(DIM):\n # TERM 3:\n trK_rhs += alpha*AbarDD[i][j]*AbarUU[i][j]",
"_____no_output_____"
]
],
[
[
"<a id='lambdabar'></a>\n\n# Step 6: Right-hand side of $\\partial_t \\bar{\\Lambda}^{i}$ \\[Back to [top](#toc)\\]\n$$\\label{lambdabar}$$\n\n\\begin{align}\n\\partial_t \\bar{\\Lambda}^{i} &= \n{\\underbrace {\\textstyle \\left[\\beta^k \\partial_k \\bar{\\Lambda}^i - \\partial_k \\beta^i \\bar{\\Lambda}^k \\right]}_{\\text{Term 1}}} + \n{\\underbrace {\\textstyle \\bar{\\gamma}^{j k} \\hat{D}_{j} \\hat{D}_{k} \\beta^{i}}_{\\text{Term 2}}} + \n{\\underbrace {\\textstyle \\frac{2}{3} \\Delta^{i} \\bar{D}_{j} \\beta^{j}}_{\\text{Term 3}}} + \n{\\underbrace {\\textstyle \\frac{1}{3} \\bar{D}^{i} \\bar{D}_{j} \\beta^{j}}_{\\text{Term 4}}} \\nonumber \\\\\n & \n{\\underbrace {\\textstyle - 2 \\bar{A}^{i j} \\left (\\partial_{j} \\alpha - 6 \\alpha \\partial_{j} \\phi \\right )}_{\\text{Term 5}}} + \n{\\underbrace {\\textstyle 2 \\alpha \\bar{A}^{j k} \\Delta_{j k}^{i}}_{\\text{Term 6}}} \n{\\underbrace {\\textstyle -\\frac{4}{3} \\alpha \\bar{\\gamma}^{i j} \\partial_{j} K}_{\\text{Term 7}}}\n\\end{align}",
"_____no_output_____"
],
[
"<a id='term1_partial_lambda'></a>\n\n## Step 6.a: Term 1 of $\\partial_t \\bar{\\Lambda}^{i}$ \\[Back to [top](#toc)\\]\n$$\\label{term1_partial_lambda}$$\n\nTerm 1 of $\\partial_t \\bar{\\Lambda}^{i}$: $\\beta^k \\partial_k \\bar{\\Lambda}^i - \\partial_k \\beta^i \\bar{\\Lambda}^k$\n\nComputing this term requires that we define $\\bar{\\Lambda}^i$ and $\\bar{\\Lambda}^i_{,j}$ in terms of the rescaled (i.e., actual evolved) variable $\\lambda^i$ and derivatives: \n\\begin{align}\n\\bar{\\Lambda}^i &= \\lambda^i \\text{ReU[i]} \\\\\n\\bar{\\Lambda}^i_{,\\ j} &= \\lambda^i_{,\\ j} \\text{ReU[i]} + \\lambda^i \\text{ReUdD[i][j]} \n\\end{align}",
"_____no_output_____"
]
],
[
[
"# Step 6: right-hand side of \\partial_t \\bar{\\Lambda}^i:\n# \\partial_t \\bar{\\Lambda}^i = \\beta^k \\partial_k \\bar{\\Lambda}^i - \\partial_k \\beta^i \\bar{\\Lambda}^k <- TERM 1\n# + \\bar{\\gamma}^{j k} \\hat{D}_{j} \\hat{D}_{k} \\beta^{i} <- TERM 2\n# + \\frac{2}{3} \\Delta^{i} \\bar{D}_{j} \\beta^{j} <- TERM 3\n# + \\frac{1}{3} \\bar{D}^{i} \\bar{D}_{j} \\beta^{j} <- TERM 4\n# - 2 \\bar{A}^{i j} (\\partial_{j} \\alpha - 6 \\partial_{j} \\phi) <- TERM 5\n# + 2 \\alpha \\bar{A}^{j k} \\Delta_{j k}^{i} <- TERM 6\n# - \\frac{4}{3} \\alpha \\bar{\\gamma}^{i j} \\partial_{j} K <- TERM 7\n\n# Step 6.a: Term 1 of \\partial_t \\bar{\\Lambda}^i: \\beta^k \\partial_k \\bar{\\Lambda}^i - \\partial_k \\beta^i \\bar{\\Lambda}^k\n# First we declare \\bar{\\Lambda}^i and \\bar{\\Lambda}^i_{,j} in terms of \\lambda^i and \\lambda^i_{,j}\nLambdabarU_dupD = ixp.zerorank2()\nlambdaU_dupD = ixp.declarerank2(\"lambdaU_dupD\",\"nosym\")\nfor i in range(DIM):\n for j in range(DIM):\n LambdabarU_dupD[i][j] = lambdaU_dupD[i][j]*rfm.ReU[i] + lambdaU[i]*rfm.ReUdD[i][j]\n\nLambdabar_rhsU = ixp.zerorank1()\nfor i in range(DIM):\n for k in range(DIM):\n Lambdabar_rhsU[i] += betaU[k]*LambdabarU_dupD[i][k] - betaU_dD[i][k]*LambdabarU[k] # Term 1",
"_____no_output_____"
]
],
[
[
"<a id='term2_partial_lambda'></a>\n\n## Step 6.b: Term 2 of $\\partial_t \\bar{\\Lambda}^{i}$ \\[Back to [top](#toc)\\]\n$$\\label{term2_partial_lambda}$$\n\nTerm 2 of $\\partial_t \\bar{\\Lambda}^{i}$: $\\bar{\\gamma}^{j k} \\hat{D}_{j} \\hat{D}_{k} \\beta^{i}$\n\nThis is a relatively difficult term to compute, as it requires we evaluate the second covariant derivative of the shift vector, with respect to the hatted (i.e., reference) metric.\n\nBased on the definition of covariant derivative, we have\n$$\n\\hat{D}_{k} \\beta^{i} = \\beta^i_{,k} + \\hat{\\Gamma}^i_{mk} \\beta^m\n$$\n\nSince $\\hat{D}_{k} \\beta^{i}$ is a tensor, the covariant derivative of this will have the same indexing as a tensor $T_k^i$:\n\n$$\n\\hat{D}_{j} T^i_k = T^i_{k,j} + \\hat{\\Gamma}^i_{dj} T^d_k - \\hat{\\Gamma}^d_{kj} T^i_d.\n$$\n\nTherefore,\n\\begin{align}\n\\hat{D}_{j} \\left(\\hat{D}_{k} \\beta^{i}\\right) &= \\left(\\beta^i_{,k} + \\hat{\\Gamma}^i_{mk} \\beta^m\\right)_{,j} + \\hat{\\Gamma}^i_{dj} \\left(\\beta^d_{,k} + \\hat{\\Gamma}^d_{mk} \\beta^m\\right) - \\hat{\\Gamma}^d_{kj} \\left(\\beta^i_{,d} + \\hat{\\Gamma}^i_{md} \\beta^m\\right) \\\\\n&= \\beta^i_{,kj} + \\hat{\\Gamma}^i_{mk,j} \\beta^m + \\hat{\\Gamma}^i_{mk} \\beta^m_{,j} + \\hat{\\Gamma}^i_{dj}\\beta^d_{,k} + \\hat{\\Gamma}^i_{dj}\\hat{\\Gamma}^d_{mk} \\beta^m - \\hat{\\Gamma}^d_{kj} \\beta^i_{,d} - \\hat{\\Gamma}^d_{kj} \\hat{\\Gamma}^i_{md} \\beta^m \\\\\n&= {\\underbrace {\\textstyle \\beta^i_{,kj}}_{\\text{Term 2a}}}\n{\\underbrace {\\textstyle \\hat{\\Gamma}^i_{mk,j} \\beta^m + \\hat{\\Gamma}^i_{mk} \\beta^m_{,j} + \\hat{\\Gamma}^i_{dj}\\beta^d_{,k} - \\hat{\\Gamma}^d_{kj} \\beta^i_{,d}}_{\\text{Term 2b}}} +\n{\\underbrace {\\textstyle \\hat{\\Gamma}^i_{dj}\\hat{\\Gamma}^d_{mk} \\beta^m - \\hat{\\Gamma}^d_{kj} \\hat{\\Gamma}^i_{md} \\beta^m}_{\\text{Term 2c}}},\n\\end{align}\n\nwhere \n$$\n\\text{Term 2} = \\bar{\\gamma}^{jk} \\left(\\text{Term 2a} + \\text{Term 2b} + \\text{Term 2c}\\right)\n$$",
"_____no_output_____"
]
],
[
[
"# Step 6.b: Term 2 of \\partial_t \\bar{\\Lambda}^i = \\bar{\\gamma}^{jk} (Term 2a + Term 2b + Term 2c)\n# Term 2a: \\bar{\\gamma}^{jk} \\beta^i_{,kj}\nTerm2aUDD = ixp.zerorank3()\nfor i in range(DIM):\n for j in range(DIM):\n for k in range(DIM):\n Term2aUDD[i][j][k] += betaU_dDD[i][k][j]\n# Term 2b: \\hat{\\Gamma}^i_{mk,j} \\beta^m + \\hat{\\Gamma}^i_{mk} \\beta^m_{,j}\n# + \\hat{\\Gamma}^i_{dj}\\beta^d_{,k} - \\hat{\\Gamma}^d_{kj} \\beta^i_{,d}\nTerm2bUDD = ixp.zerorank3()\nfor i in range(DIM):\n for j in range(DIM):\n for k in range(DIM):\n for m in range(DIM):\n Term2bUDD[i][j][k] += rfm.GammahatUDDdD[i][m][k][j]*betaU[m] \\\n + rfm.GammahatUDD[i][m][k]*betaU_dD[m][j] \\\n + rfm.GammahatUDD[i][m][j]*betaU_dD[m][k] \\\n - rfm.GammahatUDD[m][k][j]*betaU_dD[i][m]\n# Term 2c: \\hat{\\Gamma}^i_{dj}\\hat{\\Gamma}^d_{mk} \\beta^m - \\hat{\\Gamma}^d_{kj} \\hat{\\Gamma}^i_{md} \\beta^m\nTerm2cUDD = ixp.zerorank3()\nfor i in range(DIM):\n for j in range(DIM):\n for k in range(DIM):\n for m in range(DIM):\n for d in range(DIM):\n Term2cUDD[i][j][k] += ( rfm.GammahatUDD[i][d][j]*rfm.GammahatUDD[d][m][k] \\\n -rfm.GammahatUDD[d][k][j]*rfm.GammahatUDD[i][m][d])*betaU[m]\n\nLambdabar_rhsUpieceU = ixp.zerorank1()\n\n# Put it all together to get Term 2:\nfor i in range(DIM):\n for j in range(DIM):\n for k in range(DIM):\n Lambdabar_rhsU[i] += gammabarUU[j][k] * (Term2aUDD[i][j][k] + Term2bUDD[i][j][k] + Term2cUDD[i][j][k])\n Lambdabar_rhsUpieceU[i] += gammabarUU[j][k] * (Term2aUDD[i][j][k] + Term2bUDD[i][j][k] + Term2cUDD[i][j][k])",
"_____no_output_____"
]
],
[
[
"<a id='term3_partial_lambda'></a>\n\n## Step 6.c: Term 3 of $\\partial_t \\bar{\\Lambda}^{i}$: $\\frac{2}{3} \\Delta^{i} \\bar{D}_{j} \\beta^{j}$ \\[Back to [top](#toc)\\]\n$$\\label{term3_partial_lambda}$$\n\nTerm 3 of $\\partial_t \\bar{\\Lambda}^{i}$: $\\frac{2}{3} \\Delta^{i} \\bar{D}_{j} \\beta^{j}$\n \nThis term is the simplest to implement, as $\\bar{D}_{j} \\beta^{j}$ and $\\Delta^i$ have already been defined, as `Dbarbetacontraction` and `DGammaU[i]`, respectively:",
"_____no_output_____"
]
],
[
[
"# Step 6.c: Term 3 of \\partial_t \\bar{\\Lambda}^i:\n# \\frac{2}{3} \\Delta^{i} \\bar{D}_{j} \\beta^{j}\nDGammaU = Bq.DGammaU # From Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()\nfor i in range(DIM):\n Lambdabar_rhsU[i] += sp.Rational(2,3)*DGammaU[i]*Dbarbetacontraction # Term 3",
"_____no_output_____"
]
],
[
[
"<a id='term4_partial_lambda'></a>\n\n## Step 6.d: Term 4 of $\\partial_t \\bar{\\Lambda}^{i}$ \\[Back to [top](#toc)\\]\n$$\\label{term4_partial_lambda}$$\n\n$\\partial_t \\bar{\\Lambda}^{i}$: $\\frac{1}{3} \\bar{D}^{i} \\bar{D}_{j} \\beta^{j}$\n\nRecall first that\n\n$$\\bar{D}_{k} \\beta^{k} = \\beta^k_{,\\ k} + \\frac{\\beta^k \\bar{\\gamma}_{,k}}{2 \\bar{\\gamma}},$$\nwhich is a scalar, so\n\n\\begin{align}\n\\bar{D}_m \\bar{D}_{j} \\beta^{j} &= \\left(\\beta^k_{,\\ k} + \\frac{\\beta^k \\bar{\\gamma}_{,k}}{2 \\bar{\\gamma}}\\right)_{,m} \\\\\n&= \\beta^k_{\\ ,km} + \\frac{\\beta^k_{\\ ,m} \\bar{\\gamma}_{,k} + \\beta^k \\bar{\\gamma}_{\\ ,km}}{2 \\bar{\\gamma}} - \\frac{\\beta^k \\bar{\\gamma}_{,k} \\bar{\\gamma}_{,m}}{2 \\bar{\\gamma}^2}\n\\end{align}\n\nThus, \n\\begin{align}\n\\bar{D}^i \\bar{D}_{j} \\beta^{j} \n&= \\bar{\\gamma}^{im} \\bar{D}_m \\bar{D}_{j} \\beta^{j} \\\\\n&= \\bar{\\gamma}^{im} \\left(\\beta^k_{\\ ,km} + \\frac{\\beta^k_{\\ ,m} \\bar{\\gamma}_{,k} + \\beta^k \\bar{\\gamma}_{\\ ,km}}{2 \\bar{\\gamma}} - \\frac{\\beta^k \\bar{\\gamma}_{,k} \\bar{\\gamma}_{,m}}{2 \\bar{\\gamma}^2} \\right)\n\\end{align}",
"_____no_output_____"
]
],
[
[
"# Step 6.d: Term 4 of \\partial_t \\bar{\\Lambda}^i:\n# \\frac{1}{3} \\bar{D}^{i} \\bar{D}_{j} \\beta^{j}\ndetgammabar_dDD = Bq.detgammabar_dDD # From Bq.detgammabar_and_derivs()\nDbarbetacontraction_dBarD = ixp.zerorank1()\nfor k in range(DIM):\n for m in range(DIM):\n Dbarbetacontraction_dBarD[m] += betaU_dDD[k][k][m] + \\\n (betaU_dD[k][m]*detgammabar_dD[k] + \n betaU[k]*detgammabar_dDD[k][m])/(2*detgammabar) \\\n -betaU[k]*detgammabar_dD[k]*detgammabar_dD[m]/(2*detgammabar*detgammabar)\nfor i in range(DIM):\n for m in range(DIM):\n Lambdabar_rhsU[i] += sp.Rational(1,3)*gammabarUU[i][m]*Dbarbetacontraction_dBarD[m]",
"_____no_output_____"
]
],
[
[
"<a id='term5_partial_lambda'></a>\n\n## Step 6.e: Term 5 of $\\partial_t \\bar{\\Lambda}^{i}$ \\[Back to [top](#toc)\\]\n$$\\label{term5_partial_lambda}$$\n\nTerm 5 of $\\partial_t \\bar{\\Lambda}^{i}$: $- 2 \\bar{A}^{i j} \\left (\\partial_{j} \\alpha - 6\\alpha \\partial_{j} \\phi\\right)$",
"_____no_output_____"
]
],
[
[
"# Step 6.e: Term 5 of \\partial_t \\bar{\\Lambda}^i:\n# - 2 \\bar{A}^{i j} (\\partial_{j} \\alpha - 6 \\alpha \\partial_{j} \\phi)\nfor i in range(DIM):\n for j in range(DIM):\n Lambdabar_rhsU[i] += -2*AbarUU[i][j]*(alpha_dD[j] - 6*alpha*phi_dD[j])",
"_____no_output_____"
]
],
[
[
"<a id='term6_partial_lambda'></a>\n\n## Step 6.f: Term 6 of $\\partial_t \\bar{\\Lambda}^{i}$ \\[Back to [top](#toc)\\]\n$$\\label{term6_partial_lambda}$$\n\nTerm 6 of $\\partial_t \\bar{\\Lambda}^{i}$: $2\\alpha \\bar{A}^{j k} \\Delta_{j k}^{i}$",
"_____no_output_____"
]
],
[
[
"# Step 6.f: Term 6 of \\partial_t \\bar{\\Lambda}^i:\n# 2 \\alpha \\bar{A}^{j k} \\Delta^{i}_{j k}\nDGammaUDD = Bq.DGammaUDD # From RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()\nfor i in range(DIM):\n for j in range(DIM):\n for k in range(DIM):\n Lambdabar_rhsU[i] += 2*alpha*AbarUU[j][k]*DGammaUDD[i][j][k]",
"_____no_output_____"
]
],
[
[
"<a id='term7_partial_lambda'></a>\n\n## Step 6.g: Term 7 of $\\partial_t \\bar{\\Lambda}^{i}$ \\[Back to [top](#toc)\\]\n$$\\label{term7_partial_lambda}$$\n\n$\\partial_t \\bar{\\Lambda}^{i}$: $-\\frac{4}{3} \\alpha \\bar{\\gamma}^{i j} \\partial_{j} K$",
"_____no_output_____"
]
],
[
[
"# Step 6.g: Term 7 of \\partial_t \\bar{\\Lambda}^i:\n# -\\frac{4}{3} \\alpha \\bar{\\gamma}^{i j} \\partial_{j} K\ntrK_dD = ixp.declarerank1(\"trK_dD\")\nfor i in range(DIM):\n for j in range(DIM):\n Lambdabar_rhsU[i] += -sp.Rational(4,3)*alpha*gammabarUU[i][j]*trK_dD[j]",
"_____no_output_____"
]
],
[
[
"<a id='rescalingrhss'></a>\n\n# Step 7: Rescaling the BSSN right-hand sides; rewriting them in terms of the rescaled quantities $\\left\\{h_{i j},a_{i j},\\text{cf}, K, \\lambda^{i}, \\alpha, \\mathcal{V}^i, \\mathcal{B}^i\\right\\}$ \\[Back to [top](#toc)\\]\n$$\\label{rescalingrhss}$$\n\nNext we rescale the right-hand sides of the BSSN equations so that the evolved variables are $\\left\\{h_{i j},a_{i j},\\text{cf}, K, \\lambda^{i}\\right\\}$",
"_____no_output_____"
]
],
[
[
"# Step 7: Rescale the RHS quantities so that the evolved \n# variables are smooth across coord singularities\nh_rhsDD = ixp.zerorank2()\na_rhsDD = ixp.zerorank2()\nlambda_rhsU = ixp.zerorank1()\nfor i in range(DIM):\n lambda_rhsU[i] = Lambdabar_rhsU[i] / rfm.ReU[i]\n for j in range(DIM):\n h_rhsDD[i][j] = gammabar_rhsDD[i][j] / rfm.ReDD[i][j]\n a_rhsDD[i][j] = Abar_rhsDD[i][j] / rfm.ReDD[i][j]\n#print(str(Abar_rhsDD[2][2]).replace(\"**\",\"^\").replace(\"_\",\"\").replace(\"xx\",\"x\").replace(\"sin(x2)\",\"Sin[x2]\").replace(\"sin(2*x2)\",\"Sin[2*x2]\").replace(\"cos(x2)\",\"Cos[x2]\").replace(\"detgbaroverdetghat\",\"detg\"))\n#print(str(Dbarbetacontraction).replace(\"**\",\"^\").replace(\"_\",\"\").replace(\"xx\",\"x\").replace(\"sin(x2)\",\"Sin[x2]\").replace(\"detgbaroverdetghat\",\"detg\"))\n#print(betaU_dD)\n#print(str(trK_rhs).replace(\"xx2\",\"xx3\").replace(\"xx1\",\"xx2\").replace(\"xx0\",\"xx1\").replace(\"**\",\"^\").replace(\"_\",\"\").replace(\"sin(xx2)\",\"Sinx2\").replace(\"xx\",\"x\").replace(\"sin(2*x2)\",\"Sin2x2\").replace(\"cos(x2)\",\"Cosx2\").replace(\"detgbaroverdetghat\",\"detg\"))\n#print(str(bet_rhsU[0]).replace(\"xx2\",\"xx3\").replace(\"xx1\",\"xx2\").replace(\"xx0\",\"xx1\").replace(\"**\",\"^\").replace(\"_\",\"\").replace(\"sin(xx2)\",\"Sinx2\").replace(\"xx\",\"x\").replace(\"sin(2*x2)\",\"Sin2x2\").replace(\"cos(x2)\",\"Cosx2\").replace(\"detgbaroverdetghat\",\"detg\"))",
"_____no_output_____"
]
],
[
[
"<a id='code_validation'></a>\n\n# Step 8: Code Validation against `BSSN.BSSN_RHSs` NRPy+ module \\[Back to [top](#toc)\\]\n$$\\label{code_validation}$$\n\nHere, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between\n1. this tutorial and \n2. the NRPy+ [BSSN.BSSN_RHSs](../edit/BSSN/BSSN_RHSs.py) module.\n\nBy default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.",
"_____no_output_____"
]
],
[
[
"# Step 8: We already have SymPy expressions for BSSN RHS expressions\n# in terms of other SymPy variables. Even if we reset the \n# list of NRPy+ gridfunctions, these *SymPy* expressions for\n# BSSN RHS variables *will remain unaffected*. \n# \n# Here, we will use the above-defined BSSN RHS expressions\n# to validate against the same expressions in the \n# BSSN/BSSN_RHSs.py file, to ensure consistency between \n# this tutorial and the module itself.\n#\n# Reset the list of gridfunctions, as registering a gridfunction\n# twice will spawn an error.\ngri.glb_gridfcs_list = []\n\n\n# Step 9.a: Call the BSSN_RHSs() function from within the\n# BSSN/BSSN_RHSs.py module,\n# which should do exactly the same as in Steps 1-16 above.\nimport BSSN.BSSN_RHSs as bssnrhs\nbssnrhs.BSSN_RHSs()\n\nprint(\"Consistency check between BSSN_RHSs tutorial and NRPy+ module: ALL SHOULD BE ZERO.\")\n\nprint(\"trK_rhs - bssnrhs.trK_rhs = \" + str(trK_rhs - bssnrhs.trK_rhs))\nprint(\"cf_rhs - bssnrhs.cf_rhs = \" + str(cf_rhs - bssnrhs.cf_rhs))\n\nfor i in range(DIM):\n print(\"lambda_rhsU[\"+str(i)+\"] - bssnrhs.lambda_rhsU[\"+str(i)+\"] = \" + \n str(lambda_rhsU[i] - bssnrhs.lambda_rhsU[i]))\n for j in range(DIM):\n print(\"h_rhsDD[\"+str(i)+\"][\"+str(j)+\"] - bssnrhs.h_rhsDD[\"+str(i)+\"][\"+str(j)+\"] = \" \n + str(h_rhsDD[i][j] - bssnrhs.h_rhsDD[i][j]))\n print(\"a_rhsDD[\"+str(i)+\"][\"+str(j)+\"] - bssnrhs.a_rhsDD[\"+str(i)+\"][\"+str(j)+\"] = \" \n + str(a_rhsDD[i][j] - bssnrhs.a_rhsDD[i][j]))",
"Consistency check between BSSN_RHSs tutorial and NRPy+ module: ALL SHOULD BE ZERO.\ntrK_rhs - bssnrhs.trK_rhs = 0\ncf_rhs - bssnrhs.cf_rhs = 0\nlambda_rhsU[0] - bssnrhs.lambda_rhsU[0] = 0\nh_rhsDD[0][0] - bssnrhs.h_rhsDD[0][0] = 0\na_rhsDD[0][0] - bssnrhs.a_rhsDD[0][0] = 0\nh_rhsDD[0][1] - bssnrhs.h_rhsDD[0][1] = 0\na_rhsDD[0][1] - bssnrhs.a_rhsDD[0][1] = 0\nh_rhsDD[0][2] - bssnrhs.h_rhsDD[0][2] = 0\na_rhsDD[0][2] - bssnrhs.a_rhsDD[0][2] = 0\nlambda_rhsU[1] - bssnrhs.lambda_rhsU[1] = 0\nh_rhsDD[1][0] - bssnrhs.h_rhsDD[1][0] = 0\na_rhsDD[1][0] - bssnrhs.a_rhsDD[1][0] = 0\nh_rhsDD[1][1] - bssnrhs.h_rhsDD[1][1] = 0\na_rhsDD[1][1] - bssnrhs.a_rhsDD[1][1] = 0\nh_rhsDD[1][2] - bssnrhs.h_rhsDD[1][2] = 0\na_rhsDD[1][2] - bssnrhs.a_rhsDD[1][2] = 0\nlambda_rhsU[2] - bssnrhs.lambda_rhsU[2] = 0\nh_rhsDD[2][0] - bssnrhs.h_rhsDD[2][0] = 0\na_rhsDD[2][0] - bssnrhs.a_rhsDD[2][0] = 0\nh_rhsDD[2][1] - bssnrhs.h_rhsDD[2][1] = 0\na_rhsDD[2][1] - bssnrhs.a_rhsDD[2][1] = 0\nh_rhsDD[2][2] - bssnrhs.h_rhsDD[2][2] = 0\na_rhsDD[2][2] - bssnrhs.a_rhsDD[2][2] = 0\n"
]
],
[
[
"<a id='latex_pdf_output'></a>\n\n# Step 9: Output this module to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-BSSN_time_evolution-BSSN_RHSs.pdf](Tutorial-BSSN_time_evolution-BSSN_RHSs.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)",
"_____no_output_____"
]
],
[
[
"!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb\n!pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_RHSs.tex\n!pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_RHSs.tex\n!pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_RHSs.tex\n!rm -f Tut*.out Tut*.aux Tut*.log",
"[NbConvertApp] Converting notebook Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb to latex\n[NbConvertApp] Writing 100995 bytes to Tutorial-BSSN_time_evolution-BSSN_RHSs.tex\nThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\n restricted \\write18 enabled.\nentering extended mode\nThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\n restricted \\write18 enabled.\nentering extended mode\nThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\n restricted \\write18 enabled.\nentering extended mode\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbf4ab7e69b4e6191932d1c186b3c652034df8d | 4,478 | ipynb | Jupyter Notebook | Practical-Machine-Learning-with-Python/Classification/Support Vector Machine/Machine_Learning_ P_20_Support Vector Machine Intro and Application.ipynb | glauco-endrigo/Practical-Machine-Learning-with-Python | 4d75949dfe7a3c7d8c888dc22fba58ff4d2359ee | [
"MIT"
] | null | null | null | Practical-Machine-Learning-with-Python/Classification/Support Vector Machine/Machine_Learning_ P_20_Support Vector Machine Intro and Application.ipynb | glauco-endrigo/Practical-Machine-Learning-with-Python | 4d75949dfe7a3c7d8c888dc22fba58ff4d2359ee | [
"MIT"
] | null | null | null | Practical-Machine-Learning-with-Python/Classification/Support Vector Machine/Machine_Learning_ P_20_Support Vector Machine Intro and Application.ipynb | glauco-endrigo/Practical-Machine-Learning-with-Python | 4d75949dfe7a3c7d8c888dc22fba58ff4d2359ee | [
"MIT"
] | null | null | null | 32.215827 | 388 | 0.606967 | [
[
[
"# Practical Machine Learning with Python from the youtube channel sentdex <br />\n\n\"The goal is to break it down so much that it is panfully simple \"\n\n\n#### Episode:\n\n\n* [Support Vector Machine Intro and Application - Practical Machine Learning Tutorial with Python p.20:](https://www.youtube.com/watch?v=mA5nwGoRAOo&list=PLQVvvaa0QuDfKTOs3Keq_kaG2P55YRn5v&index=20)\n",
"_____no_output_____"
],
[
"#### Objective: \n\nLearn about Practical Machine Learning with Python from Scratch with Python .\n\n",
"_____no_output_____"
],
[
"#### Introduction: \nThe purpuse is to know how Machine Learning works at a very deep level, because trying to solve more complex problems is gonna require a deep understand of how things actually work.\nWe are gonna be doing this buy covering a variety of algorithms, so first we are gonna be covering regression when we are going to be movingitno classification with knearst-neighbors and support vector machines and then we are going to get into clustering with flat clustering, hierarchical clustering and then finally we will be getting into deep learning with neural networks \n\nIn each of the major algorithms, we are gonna cover theory, aplicattion and then we are gonna dive in deep into the inner workings of each of them",
"_____no_output_____"
],
[
"#### Support Vector Machine\n\n\nThe support vector Machine is what is called a binary classifier, so it separetes only into \nto groups at a time. These two groups are denoted as positive or negative.The objective of \nthe svm is to find the best separeting hyperplane which is also refered to as your decision boundary(that is why ideally we want it to be agaist linear data). And then we can classify new data points.\n",
"_____no_output_____"
],
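Before applying scikit-learn's `SVC` to real data below, a minimal sketch of the "best separating hyperplane" idea (illustrative only — the weight vector and bias here are made-up numbers, not learned from any data): once a hyperplane $w \cdot x + b = 0$ is found, a new point is classified by the sign of $w \cdot x + b$.

```python
# Minimal illustration: classify points by which side of a hyperplane they fall on.
# w and b are assumed/made-up values for illustration, not the result of training.
import numpy as np

w = np.array([0.5, -1.0])   # hypothetical normal vector of the hyperplane
b = 0.25                    # hypothetical bias (intercept)

def classify(x):
    # the sign of the decision function picks one of the two groups (+1 or -1)
    return 1 if np.dot(w, x) + b >= 0 else -1

print(classify(np.array([2.0, 0.5])))   # -> 1
print(classify(np.array([0.0, 2.0])))   # -> -1
```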
[
" ",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import neighbors, svm\n \ndf = pd.read_csv('breast-cancer-wisconsin.data')\ndf.replace(\"?\",-99999, inplace = True) ## Now we will see how different svm handles outlies as opposed toK Nearest Neighbors\n\ndf.drop(['id'],1, inplace = True )\n\nX = np.array(df.drop(['clas'],1 )) # features \ny = np.array(df['clas']) # label \n\nX_train, X_test, y_train, y_test = train_test_split(X,y, test_size= 0.2) \nclf = svm.SVC()\nclf.fit(X_train,y_train)\naccuracy = clf.score(X_test,y_test) \nprint(accuracy)\nex_measures = np.array([[4,2,1,1,1,2,3,2,1],[4,2,2,3,1,2,3,2,1]])\nval = len(ex_measures)\nprint(val)\nex_measures = ex_measures.reshape(val,-1)\nprediction = clf.predict(ex_measures)\nprint(prediction)\n",
"0.9357142857142857\n2\n[2 2]\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
ecbf76b52b3275a30da608b0896dc92dc96bb9de | 77,069 | ipynb | Jupyter Notebook | notebooks/papers/3did/compare features detected by 3DID and Biosaur.ipynb | WEHI-Proteomics/tfde | 10f6d9e89cb14a12655ce2378089abce28de9db6 | [
"MIT"
] | null | null | null | notebooks/papers/3did/compare features detected by 3DID and Biosaur.ipynb | WEHI-Proteomics/tfde | 10f6d9e89cb14a12655ce2378089abce28de9db6 | [
"MIT"
] | null | null | null | notebooks/papers/3did/compare features detected by 3DID and Biosaur.ipynb | WEHI-Proteomics/tfde | 10f6d9e89cb14a12655ce2378089abce28de9db6 | [
"MIT"
] | null | null | null | 85.44235 | 33,272 | 0.778095 | [
[
[
"import pandas as pd\nimport numpy as np\nimport os\nimport matplotlib.pyplot as plt\nfrom matplotlib_venn import venn2\nfrom os.path import expanduser\nimport seaborn as sns\nfrom cmcrameri import cm\nfrom matplotlib import colors\nimport matplotlib.patches as patches",
"_____no_output_____"
],
[
"DETECTS_3DID_NAME = 'minvi-600-2021-12-19-02-35-34'\nDETECTS_3DID_DIR = '/media/big-ssd/results-P3856_YHE211-3did/{}/P3856_YHE211/features-3did'.format(DETECTS_3DID_NAME)\nDETECTS_3DID_FILE = '{}/exp-P3856_YHE211-run-P3856_YHE211_1_Slot1-1_1_5104-features-3did-dedup.feather'.format(DETECTS_3DID_DIR)",
"_____no_output_____"
],
[
"# load the 3DID features classified as identifiable\ndetects_3did_df = pd.read_feather(DETECTS_3DID_FILE)",
"_____no_output_____"
],
[
"detects_3did_df.sample(n=4)",
"_____no_output_____"
],
[
"detects_3did_df.loc[detects_3did_df.scan_apex.idxmax()][['scan_apex','inverse_k0_apex']]",
"_____no_output_____"
],
[
"detects_3did_df.loc[detects_3did_df.scan_apex.idxmin()][['scan_apex','inverse_k0_apex']]",
"_____no_output_____"
],
[
"(1.292446-0.851201)/(916-49)*20",
"_____no_output_____"
],
[
"detects_3did_df.columns",
"_____no_output_____"
],
[
"biosaur_name = 'P3856_YHE211_1_Slot1-1_1_5104-denoised-biosaur-mini-200-ac-10-minl-3.features.tsv'\n\nDETECTS_BIOSAUR_FILE = '{}/{}'.format(expanduser('~'), biosaur_name)",
"_____no_output_____"
],
[
"detects_biosaur_df = pd.read_csv(DETECTS_BIOSAUR_FILE, sep='\\\\t', engine='python')",
"_____no_output_____"
],
[
"detects_biosaur_df = detects_biosaur_df[(detects_biosaur_df.nIsotopes >= 3)]",
"_____no_output_____"
],
[
"detects_biosaur_df.loc[detects_biosaur_df.ion_mobility.idxmax()].ion_mobility",
"_____no_output_____"
],
[
"detects_biosaur_df.columns",
"_____no_output_____"
],
[
"# corresponds to about the same scan tolerance as 20 scans over 910\nDUP_INVERSE_K0 = (detects_3did_df.inverse_k0_apex.max()-detects_3did_df.inverse_k0_apex.min())*(20/910) * 4\nDUP_INVERSE_K0 = 0.05",
"_____no_output_____"
],
[
"# definition of a duplicate feature\nDUP_MZ_TOLERANCE_PPM = 25\nDUP_RT_TOLERANCE = 5",
"_____no_output_____"
],
[
"# set up dup definitions\nMZ_TOLERANCE_PERCENT = DUP_MZ_TOLERANCE_PPM * 10**-4\ndetects_biosaur_df['dup_mz_ppm_tolerance'] = detects_biosaur_df.mz * MZ_TOLERANCE_PERCENT / 100\ndetects_biosaur_df['dup_mz_lower'] = detects_biosaur_df.mz - detects_biosaur_df.dup_mz_ppm_tolerance\ndetects_biosaur_df['dup_mz_upper'] = detects_biosaur_df.mz + detects_biosaur_df.dup_mz_ppm_tolerance\ndetects_biosaur_df['dup_inverse_k0_lower'] = detects_biosaur_df.ion_mobility - DUP_INVERSE_K0\ndetects_biosaur_df['dup_inverse_k0_upper'] = detects_biosaur_df.ion_mobility + DUP_INVERSE_K0\ndetects_biosaur_df['dup_rt_lower'] = detects_biosaur_df.rtApex - DUP_RT_TOLERANCE\ndetects_biosaur_df['dup_rt_upper'] = detects_biosaur_df.rtApex + DUP_RT_TOLERANCE",
"_____no_output_____"
],
[
"# find the features detected by 3DID and Biosaur\nmatched_features_3did = set()\nmatched_features_biosaur = set()\n\nfor row in detects_3did_df.itertuples():\n df = detects_biosaur_df[(row.charge == detects_biosaur_df.charge) & (row.monoisotopic_mz >= detects_biosaur_df.dup_mz_lower) & (row.monoisotopic_mz <= detects_biosaur_df.dup_mz_upper) & (row.inverse_k0_apex >= detects_biosaur_df.dup_inverse_k0_lower) & (row.inverse_k0_apex <= detects_biosaur_df.dup_inverse_k0_upper) & (row.rt_apex >= detects_biosaur_df.dup_rt_lower) & (row.rt_apex <= detects_biosaur_df.dup_rt_upper)]\n if len(df) > 0:\n matched_features_3did.update(set([row.feature_id]))\n matched_features_biosaur.update(set(df.id.tolist()))",
"_____no_output_____"
],
[
"contained_in_3did_not_biosaur = len(detects_3did_df) - len(matched_features_3did)\ncontained_in_biosaur_not_3did = len(detects_biosaur_df) - len(matched_features_biosaur)\ncontained_in_both = len(matched_features_biosaur)",
"_____no_output_____"
],
[
"f, ax = plt.subplots()\nf.set_figheight(15)\nf.set_figwidth(20)\n\nplt.margins(0.06)\n\nv = venn2(subsets = (contained_in_3did_not_biosaur, contained_in_biosaur_not_3did, contained_in_both), set_labels = ('3DID', 'Biosaur'))\nfor text in v.subset_labels:\n text.set_fontsize(15)\nfor text in v.set_labels:\n text.set_fontsize(15)\n\nplt.show()",
"_____no_output_____"
],
[
"matched_3did_df = detects_3did_df[detects_3did_df.feature_id.isin(matched_features_3did)]",
"_____no_output_____"
],
[
"f, ax = plt.subplots()\nf.set_figheight(10)\nf.set_figwidth(15)\n\nplt.margins(0.06)\n\nbins = 500\nvalues = np.log2(detects_3did_df.feature_intensity)\ny, x, _ = ax.hist(values, bins=bins, label='detected')\n\nvalues = np.log2(matched_3did_df.feature_intensity)\ny, x, _ = ax.hist(values, bins=bins, label='matched')\n\nax.set_xlabel('log2 feature intensity')\nax.set_ylabel('count')\n\nplt.show()",
"_____no_output_____"
],
[
"biosaur_only_df = detects_biosaur_df[~detects_biosaur_df.id.isin(matched_features_biosaur)]",
"_____no_output_____"
],
[
"biosaur_only_df.sample(n=5)",
"_____no_output_____"
],
[
"biosaur_only_df.mz.max()",
"_____no_output_____"
],
[
"detects_3did_df.charge.min()",
"_____no_output_____"
],
[
"matched_biosaur_df = detects_biosaur_df[detects_biosaur_df.id.isin(matched_features_biosaur)]",
"_____no_output_____"
],
[
"matched_biosaur_df.intensityApex.mean()",
"_____no_output_____"
],
[
"biosaur_only_df.intensityApex.max()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbf7758b0b134abc915d486a8d9047162e4318c | 8,982 | ipynb | Jupyter Notebook | notebooks/03-au-column-types.ipynb | energeeks/ashrae-energy-prediction | 2b15a3749dd5810df228f99d7c23d62234117a1f | [
"MIT"
] | 14 | 2019-12-10T14:30:14.000Z | 2021-08-23T05:44:23.000Z | notebooks/03-au-column-types.ipynb | Adrodoc/ashrae-energy-prediction | 2b15a3749dd5810df228f99d7c23d62234117a1f | [
"MIT"
] | 114 | 2019-11-05T14:48:01.000Z | 2020-11-04T15:35:53.000Z | notebooks/03-au-column-types.ipynb | Adrodoc/ashrae-energy-prediction | 2b15a3749dd5810df228f99d7c23d62234117a1f | [
"MIT"
] | 5 | 2019-12-25T21:38:03.000Z | 2021-04-17T18:24:37.000Z | 31.405594 | 87 | 0.51503 | [
[
[
"# Notebook to help figure out optimal column types\nChoose column types from:\n* https://numpy.org/devdocs/user/basics.types.html\n* https://numpy.org/devdocs/reference/arrays.datetime.html",
"_____no_output_____"
]
],
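As a small illustration of the idea behind the downcasting done below (a sketch with a made-up column, not part of the original notebook): a numeric column can safely be stored in a smaller dtype whenever its min/max fall inside the range reported by `np.iinfo` / `np.finfo`.

```python
# Sketch: check whether a column's value range fits a candidate integer dtype.
import numpy as np
import pandas as pd

s = pd.Series([0, 3, 1448])  # made-up values, similar in range to building_id
for dtype in (np.int8, np.int16, np.int32):
    info = np.iinfo(dtype)
    fits = info.min <= s.min() and s.max() <= info.max
    print(dtype.__name__, (info.min, info.max), "fits:", fits)
# int8 cannot hold 1448 (its max is 127); int16 and int32 can.
```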
[
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"def reduce_mem_usage(df, verbose=True):\n \"\"\"\n Takes an dataframe as argument and adjusts the datatypes of the respective\n columns to reduce memory allocation\n \"\"\"\n numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']\n start_mem = df.memory_usage().sum() / 1024 ** 2\n for col in df.columns:\n col_type = df[col].dtypes\n if col_type in numerics:\n c_min = df[col].min()\n c_max = df[col].max()\n if str(col_type)[:3] == 'int':\n if (c_min > np.iinfo(np.int8).min and\n c_max < np.iinfo(np.int8).max):\n df[col] = df[col].astype(np.int8)\n elif (c_min > np.iinfo(np.int16).min and\n c_max < np.iinfo(np.int16).max):\n df[col] = df[col].astype(np.int16)\n elif (c_min > np.iinfo(np.int32).min and\n c_max < np.iinfo(np.int32).max):\n df[col] = df[col].astype(np.int32)\n elif (c_min > np.iinfo(np.int64).min and\n c_max < np.iinfo(np.int64).max):\n df[col] = df[col].astype(np.int64)\n else:\n if (c_min > np.finfo(np.float16).min and\n c_max < np.finfo(np.float16).max):\n df[col] = df[col].astype(np.float16)\n elif (c_min > np.finfo(np.float32).min and\n c_max < np.finfo(np.float32).max):\n df[col] = df[col].astype(np.float32)\n else:\n df[col] = df[col].astype(np.float64)\n end_mem = df.memory_usage().sum() / 1024 ** 2\n reduced_mem = 100 * (start_mem - end_mem) / start_mem\n if verbose:\n print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'\n .format(end_mem, reduced_mem))\n return df",
"_____no_output_____"
],
[
"def print_min_max_column(df, column):\n print(column, \"min:\", df[column].min(), \"max:\", df[column].max())\n \ndef print_min_max(df):\n for column in df.columns:\n print_min_max_column(df, column)",
"_____no_output_____"
],
[
"data_dir = \"../data\"",
"_____no_output_____"
],
[
"train_df = pd.read_csv(data_dir + \"/raw/train.csv\")\nprint_min_max(train_df)\ntrain_df = reduce_mem_usage(train_df)\nprint(train_df.dtypes)",
"building_id min: 0 max: 1448\nmeter min: 0 max: 3\ntimestamp min: 2016-01-01 00:00:00 max: 2016-12-31 23:00:00\nmeter_reading min: 0.0 max: 21904700.0\nMem. usage decreased to 289.19 Mb (53.1% reduction)\nbuilding_id int16\nmeter int8\ntimestamp object\nmeter_reading float32\ndtype: object\n"
],
[
"test_df = pd.read_csv(data_dir + \"/raw/test.csv\")\nprint_min_max(test_df)\ntrain_df = reduce_mem_usage(test_df)\nprint(test_df.dtypes)",
"row_id min: 0 max: 41697599\nbuilding_id min: 0 max: 1448\nmeter min: 0 max: 3\ntimestamp min: 2017-01-01 00:00:00 max: 2018-12-31 23:00:00\nMem. usage decreased to 596.49 Mb (53.1% reduction)\nrow_id int32\nbuilding_id int16\nmeter int8\ntimestamp object\ndtype: object\n"
],
[
"weather_train_df = pd.read_csv(data_dir + \"/raw/weather_train.csv\")\nprint_min_max(weather_train_df)\nweather_train_df = reduce_mem_usage(weather_train_df)\nprint(weather_train_df.dtypes)",
"site_id min: 0 max: 15\ntimestamp min: 2016-01-01 00:00:00 max: 2016-12-31 23:00:00\nair_temperature min: -28.9 max: 47.2\ncloud_coverage min: 0.0 max: 9.0\ndew_temperature min: -35.0 max: 26.1\nprecip_depth_1_hr min: -1.0 max: 343.0\nsea_level_pressure min: 968.2 max: 1045.5\nwind_direction min: 0.0 max: 360.0\nwind_speed min: 0.0 max: 19.0\nMem. usage decreased to 3.07 Mb (68.1% reduction)\nsite_id int8\ntimestamp object\nair_temperature float16\ncloud_coverage float16\ndew_temperature float16\nprecip_depth_1_hr float16\nsea_level_pressure float16\nwind_direction float16\nwind_speed float16\ndtype: object\n"
],
[
"weather_test_df = pd.read_csv(data_dir + \"/raw/weather_test.csv\")\nprint_min_max(weather_test_df)\nweather_test_df = reduce_mem_usage(weather_test_df)\nprint(weather_test_df.dtypes)",
"site_id min: 0 max: 15\ntimestamp min: 2017-01-01 00:00:00 max: 2018-12-31 23:00:00\nair_temperature min: -28.1 max: 48.3\ncloud_coverage min: 0.0 max: 9.0\ndew_temperature min: -31.6 max: 26.7\nprecip_depth_1_hr min: -1.0 max: 597.0\nsea_level_pressure min: 972.0 max: 1050.1\nwind_direction min: 0.0 max: 360.0\nwind_speed min: 0.0 max: 24.2\nMem. usage decreased to 6.08 Mb (68.1% reduction)\nsite_id int8\ntimestamp object\nair_temperature float16\ncloud_coverage float16\ndew_temperature float16\nprecip_depth_1_hr float16\nsea_level_pressure float16\nwind_direction float16\nwind_speed float16\ndtype: object\n"
],
[
"building_metadata_df = pd.read_csv(data_dir + \"/raw/building_metadata.csv\")\nprint_min_max(building_metadata_df)\nbuilding_metadata_df = reduce_mem_usage(building_metadata_df)\nprint(building_metadata_df.dtypes)",
"site_id min: 0 max: 15\nbuilding_id min: 0 max: 1448\nprimary_use min: Education max: Warehouse/storage\nsquare_feet min: 283 max: 875000\nyear_built min: 1900.0 max: 2017.0\nfloor_count min: 1.0 max: 26.0\nMem. usage decreased to 0.03 Mb (60.3% reduction)\nsite_id int8\nbuilding_id int16\nprimary_use object\nsquare_feet int32\nyear_built float16\nfloor_count float16\ndtype: object\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecbf7bdea702cc3e2f88e092548537a3f2b899d0 | 446,414 | ipynb | Jupyter Notebook | notebooks/vikas/TELECORM_CHURN_EDA.ipynb | Akash20x/Customer-Churn-Prediction | 00c345af0f319c80912780e3d6d293e384a2940e | [
"MIT"
] | 2 | 2021-08-30T06:13:36.000Z | 2021-09-01T03:27:16.000Z | notebooks/vikas/TELECORM_CHURN_EDA.ipynb | Akash20x/Customer-Churn-Prediction | 00c345af0f319c80912780e3d6d293e384a2940e | [
"MIT"
] | 7 | 2021-08-16T16:42:30.000Z | 2021-08-24T06:21:34.000Z | notebooks/vikas/TELECORM_CHURN_EDA.ipynb | Akash20x/Customer-Churn-Prediction | 00c345af0f319c80912780e3d6d293e384a2940e | [
"MIT"
] | 16 | 2021-08-11T14:46:26.000Z | 2021-11-09T07:23:11.000Z | 197.26646 | 45,032 | 0.876626 | [
[
[
"### Table Of content",
"_____no_output_____"
],
[
"[Data Analysis](#Data_Analysis)\n",
"_____no_output_____"
],
[
"[Reason For churn](#Frequence_of_reason_for_churn)",
"_____no_output_____"
],
[
"[City wise churn](#City_wise_customer)",
"_____no_output_____"
],
[
"[Filling missing values](#Filling_missing_values)",
"_____no_output_____"
],
[
"[Churn VS Services](#Churn_VS_Services)",
"_____no_output_____"
],
[
"### Importing important Libaries",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as mtick\nimport seaborn as sns ",
"_____no_output_____"
]
],
[
[
"## Importing_data",
"_____no_output_____"
]
],
[
[
"Tele_comm_Df=pd.read_csv(r\"F:\\MLPacktPro\\Customer-Churn-Prediction-main\\Customer-Churn-Prediction-main\\data\\1\\TelcoCustomerChurn.csv\")",
"_____no_output_____"
]
],
[
[
"### Data_Analysis",
"_____no_output_____"
]
],
[
[
"Tele_comm_Df.head()",
"_____no_output_____"
],
[
"# Drop ID & count as its not valuable to the model\n\nTele_comm_Df.drop(columns=[\"CustomerID\"])\nTele_comm_Df.drop(columns=[\"Count\"])",
"_____no_output_____"
],
[
"# View the data types\n\nTele_comm_Df.info()\n",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 4718 entries, 0 to 4717\nData columns (total 31 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 CustomerID 4687 non-null object \n 1 Count 4691 non-null float64\n 2 Country 4708 non-null object \n 3 State 4715 non-null object \n 4 City 4688 non-null object \n 5 Zip Code 4698 non-null float64\n 6 Lat Long 4713 non-null object \n 7 Latitude 4695 non-null float64\n 8 Longitude 4685 non-null float64\n 9 Gender 4698 non-null object \n 10 Senior Citizen 4711 non-null object \n 11 Partner 4678 non-null object \n 12 Dependents 4672 non-null object \n 13 Tenure Months 4704 non-null float64\n 14 Phone Service 4692 non-null object \n 15 Multiple Lines 4696 non-null object \n 16 Internet Service 4692 non-null object \n 17 Online Security 4674 non-null object \n 18 Online Backup 4714 non-null object \n 19 Device Protection 4711 non-null object \n 20 Tech Support 4695 non-null object \n 21 Streaming TV 4686 non-null object \n 22 Streaming Movies 4688 non-null object \n 23 Contract 4672 non-null object \n 24 Paperless Billing 4694 non-null object \n 25 Payment Method 4717 non-null object \n 26 Monthly Charges 4714 non-null float64\n 27 Total Charges 4704 non-null object \n 28 CLTV 4692 non-null float64\n 29 Churn Reason 1261 non-null object \n 30 Churn Value 4693 non-null float64\ndtypes: float64(8), object(23)\nmemory usage: 1.1+ MB\n"
],
[
"Tele_comm_Df[\"Total Charges\"] = pd.to_numeric(Tele_comm_Df[\"Total Charges\"], errors='coerce')",
"_____no_output_____"
],
[
"# Gather a list of the column names\n\nTele_comm_Df.columns.tolist()",
"_____no_output_____"
],
[
"# Print out all unique values for each variable\nfor col in Tele_comm_Df.columns:\n print(col, \":\",Tele_comm_Df[col].unique())",
"CustomerID : ['5196-WPYOW' '8189-HBVRW' '4091-TVOCN' ... '7384-GHBPI' '9254-RBFON'\n '0022-TCJCI']\nCount : [ 1. nan]\nCountry : ['United States' nan]\nState : ['California' nan]\nCity : ['Paso Robles' 'Los Angeles' 'Potrero' ... 'Long Barn' 'Mount Hermon'\n 'Gerber']\nZip Code : [93446. 90005. 91963. ... 95041. 96035. 94403.]\nLat Long : ['35.634222, -120.728341' '34.059281, -118.30742' '32.619465, -116.593605'\n ... '37.051166, -122.056194' '40.03194, -122.176023'\n '37.538309, -122.305109']\nLatitude : [35.634222 34.059281 32.619465 ... 37.051166 40.03194 37.538309]\nLongitude : [-120.728341 -118.30742 -116.593605 ... -122.056194 -122.176023\n -122.305109]\nGender : ['Male' 'Female' nan]\nSenior Citizen : ['No' 'Yes' nan]\nPartner : ['Yes' 'No' nan]\nDependents : ['Yes' 'No' nan]\nTenure Months : [67. 53. 48. 1. 57. 10. 33. 59. 28. 54. 29. 43. 62. 51. 9. 71. 65. 44.\n 19. 39. 72. 40. 63. 3. 61. 25. 23. 11. 12. 5. 4. 35. 2. 7. 22. 26.\n 37. 58. 36. 34. 15. 13. 18. 55. 46. 6. 21. 70. 8. 16. 52. 24. 49. nan\n 32. 38. 20. 41. 56. 60. 17. 42. 50. 68. 27. 69. 30. 66. 64. 45. 31. 14.\n 47. 0.]\nPhone Service : ['Yes' 'No' nan]\nMultiple Lines : ['No' 'Yes' 'No phone service' nan]\nInternet Service : ['DSL' 'Fiber optic' 'No' nan]\nOnline Security : ['Yes' 'No' 'No internet service' nan]\nOnline Backup : ['Yes' 'No' 'No internet service' nan]\nDevice Protection : ['No' 'Yes' 'No internet service' nan]\nTech Support : ['Yes' nan 'No' 'No internet service']\nStreaming TV : ['No' 'Yes' 'No internet service' nan]\nStreaming Movies : ['No' 'Yes' 'No internet service' nan]\nContract : ['One year' 'Month-to-month' 'Two year' nan]\nPaperless Billing : ['No' 'Yes' nan]\nPayment Method : ['Mailed check' 'Electronic check' 'Credit card (automatic)'\n 'Bank transfer (automatic)' nan]\nMonthly Charges : [60.05 90.8 78.75 ... 48.65 86.95 62.7 ]\nTotal Charges : [3994.05 4921.2 3682.45 ... 7061.65 1704.95 2791.5 ]\nCLTV : [6148. 5249. 2257. ... 5322. 3758. 4305.]\nChurn Reason : [nan 'Attitude of support person' 'Competitor made better offer' 'Moved'\n 'Attitude of service provider' 'Limited range of services'\n 'Product dissatisfaction' 'Competitor had better devices'\n 'Extra data charges' \"Don't know\" 'Competitor offered more data'\n 'Competitor offered higher download speeds' 'Price too high'\n 'Service dissatisfaction' 'Lack of affordable download/upload speed'\n 'Long distance charges' 'Network reliability'\n 'Lack of self-service on Website' 'Poor expertise of phone support'\n 'Poor expertise of online support' 'Deceased']\nChurn Value : [ 0. 1. nan]\n"
],
[
"Tele_comm_Df.drop(columns=[\"CustomerID\",\"Count\",\"Latitude\",\"Longitude\"],axis=1,inplace=True)",
"_____no_output_____"
]
],
[
[
"#### Frequence_of_reason_for_churn",
"_____no_output_____"
]
],
[
[
"Tele_comm_Df[\"Churn Reason\"].value_counts()",
"_____no_output_____"
]
],
[
[
"##### Based of above analysis we can say that bad customer support, getting better offer from competitor comparatively high price are measure reason for churn/",
"_____no_output_____"
]
],
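    [
        [
            "# Added sketch (not in the original notebook): the same churn reasons expressed as shares of the\n# churned customers, to back up the statement above. Uses only Tele_comm_Df from the cells above;\n# normalize=True is standard pandas.\nTele_comm_Df['Churn Reason'].value_counts(normalize=True).head(10)",
            "_____no_output_____"
        ]
    ],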
[
[
"# Bar Plot for reason of churn\n\n\nax =Tele_comm_Df['Churn Reason'].value_counts().plot(kind = 'bar',rot = 0, width = 0.4, figsize=(15,10))\nax.set_ylabel('# of Customers')\nax.set_title('# of Customers by Contract Type')\n",
"_____no_output_____"
]
],
[
[
"#### City_wise_customer",
"_____no_output_____"
]
],
[
[
"Tele_comm_Df[\"City\"].value_counts()",
"_____no_output_____"
]
],
[
[
"##### Service is present im 1117 different city",
"_____no_output_____"
]
],
[
[
"Tele_comm_Df[\"City\"].value_counts().head(65)",
"_____no_output_____"
],
[
"Tele_comm_Df[\"City\"].value_counts().head(770).sum()",
"_____no_output_____"
],
[
"Tele_comm_Df[\"City\"].value_counts().head(550).sum()",
"_____no_output_____"
]
],
[
[
"#### 50 % city covers 70 % of customers\n#### 70% of cities cover 82 % of customers.\n#### That means most of the revenue is coming from 70 % of the cities only.",
"_____no_output_____"
]
],
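    [
        [
            "# Added sketch (not in the original notebook): check the city-concentration claim above directly.\n# Uses only Tele_comm_Df from the cells above; the 50%/70% cut points are illustrative.\ncity_counts = Tele_comm_Df['City'].value_counts()\ncum_share = city_counts.cumsum() / city_counts.sum()\nprint('Top 50% of cities cover {:.0%} of customers'.format(cum_share.iloc[len(city_counts)//2 - 1]))\nprint('Top 70% of cities cover {:.0%} of customers'.format(cum_share.iloc[int(len(city_counts)*0.7) - 1]))",
            "_____no_output_____"
        ]
    ],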
[
[
"Tele_comm_Df[\"City\"].value_counts().unique()",
"_____no_output_____"
],
[
"Tele_comm_Df.drop(columns=[\"Country\",\"State\"],axis=1,inplace=True)",
"_____no_output_____"
],
[
"Tele_comm_Df.head()",
"_____no_output_____"
],
[
"Tele_comm_Df.isna().sum()",
"_____no_output_____"
]
],
[
[
"### Filling_missing_values",
"_____no_output_____"
]
],
[
[
"# Replacing with mean\n\nreplcae_mean=[\"Monthly Charges\",\"Total Charges\",\"CLTV\",\"Tenure Months\"]\nfor i in replcae_mean:\n Tele_comm_Df[i].fillna(value=Tele_comm_Df[i].mean(),inplace=True)",
"_____no_output_____"
],
[
"# Replacing missing values with mode\n\nfind_mode=['Lat Long','Gender','City','Zip Code',\n 'Churn Value',\n 'Senior Citizen',\n 'Partner',\n 'Dependents',\n 'Phone Service',\n 'Multiple Lines',\n 'Internet Service',\n 'Online Security',\n 'Online Backup',\n 'Device Protection',\n 'Tech Support',\n 'Streaming TV',\n 'Streaming Movies',\n 'Contract',\n 'Paperless Billing',\n 'Payment Method']\nfor j in find_mode:\n a=Tele_comm_Df[j].mode()\n for i in a:\n Tele_comm_Df[j].fillna(value=i,inplace=True)\n \n \n \n",
"_____no_output_____"
],
[
"Tele_comm_Df.isna().sum() ",
"_____no_output_____"
],
[
"target = Tele_comm_Df[\"Churn Value\"] ",
"_____no_output_____"
],
[
"corrdata = pd.concat([Tele_comm_Df.drop(columns=[\"Churn Value\"],axis=1),target],axis=1)\ncorr = corrdata.corr()\nsns.heatmap(corr, annot=True)",
"_____no_output_____"
]
],
[
[
"#### Churn_VS_Services",
"_____no_output_____"
]
],
[
[
"services = ['Phone Service','Multiple Lines','Internet Service','Online Security',\n 'Online Backup','Device Protection','Tech Support','Streaming TV','Streaming Movies']\n\nfig, axes = plt.subplots(nrows = 3,ncols = 3,figsize = (15,12))\nfor i, item in enumerate(services):\n if i < 3:\n ax = Tele_comm_Df[item].value_counts().plot(kind = 'bar',ax=axes[i,0],rot = 0)\n \n elif i >=3 and i < 6:\n ax = Tele_comm_Df[item].value_counts().plot(kind = 'bar',ax=axes[i-3,1],rot = 0)\n \n elif i < 9:\n ax = Tele_comm_Df[item].value_counts().plot(kind = 'bar',ax=axes[i-6,2],rot = 0)\n ax.set_title(item)",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
]
],
[
[
"colors = ['#4D3425','#E4512B']\ncity_churn =Tele_comm_Df.groupby(['Phone Service','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',width = 0.5,stacked = True,rot = 0, figsize = (10,5),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Phone Service',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
],
[
"colors = ['#4D3425','#E4512B']\ncity_churn =Tele_comm_Df.groupby(['Online Security','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',width = 0.5,stacked = True,rot = 0, figsize = (10,5),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Online Security',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
]
],
[
[
"### Churn rate is highest for customer without online security",
"_____no_output_____"
]
],
[
[
"colors = ['#4D3425','#E4512B']\ncity_churn =Tele_comm_Df.groupby(['Tech Support','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',width = 0.5,stacked = True,rot = 0, figsize = (10,5),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Tech Support',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
]
],
[
[
"### Churn rate is highest for customer without tech support",
"_____no_output_____"
],
[
"#### Total Value for online security and tech support and churn ratio is same for type of service in equal in both the column that means those who have taken online security also got tech support as well so we can delete any of the column out of two. Since online security has more missing value so will delete that column.¶",
"_____no_output_____"
]
],
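    [
        [
            "# Added sketch (not in the original notebook): cross-tabulate the two columns before dropping one,\n# to make the redundancy visible. The same check applies to Online Backup vs Device Protection and\n# Streaming TV vs Streaming Movies discussed below. Uses only pandas and Tele_comm_Df from above.\npd.crosstab(Tele_comm_Df['Online Security'], Tele_comm_Df['Tech Support'])",
            "_____no_output_____"
        ]
    ],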
[
[
"colors = ['#4D3425','#E4512B']\ncity_churn =Tele_comm_Df.groupby(['Online Backup','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',width = 0.5,stacked = True,rot = 0, figsize = (10,5),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Online backup',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
],
[
"colors = ['#4D3425','#E4512B']\ncity_churn =Tele_comm_Df.groupby(['Device Protection','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',width = 0.5,stacked = True,rot = 0, figsize = (10,5),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn by Device Protection',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
]
],
[
[
"#### Total Value for online backup and device protection and churn ratio is for type of service in equal in both the column that means those who have taken online backup also got device protection as well so we can delete any of the column out of two. Since online backup has more missing value so will delete that column.¶",
"_____no_output_____"
]
],
[
[
"colors = ['#4D3425','#E4512B'] \ncity_churn =Tele_comm_Df.groupby(['Streaming Movies','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',width = 0.5,stacked = True,rot = 0, figsize = (10,5),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Streaming Movies',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
],
[
"colors = ['#4D3425','#E4512B']\ncity_churn =Tele_comm_Df.groupby(['Streaming TV','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',width = 0.5,stacked = True,rot = 0, figsize = (10,5),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Streaming TV',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14) ",
"_____no_output_____"
]
],
[
[
"#### Total Value for streaming movies and streaming and churn value is equal that means those who have taken streaming movies also got streaming Tv as well so we can delete any of the column out of two. Since streaming TV have more missing value so will delete that column.",
"_____no_output_____"
]
],
[
[
"colors = ['#4D3425','#E4512B']\ncity_churn =Tele_comm_Df.groupby(['Internet Service','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',\n width = 0.5,\n stacked = True,\n rot = 0, \n figsize = (10,5),\n color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Internet Service',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
]
],
[
[
"#### Churn rate is highest for fiber optic internal service",
"_____no_output_____"
]
],
[
[
"colors = ['#4D3425','#E4512B']\ncity_churn =Tele_comm_Df.groupby(['Multiple Lines','Churn Value']).size().unstack()\n\nax = (city_churn.T*100.0 / city_churn.T.sum()).T.plot(kind='bar',width = 0.5,stacked = True,rot = 0, figsize = (10,5),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Multiple Lines',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
]
],
[
[
"#### Churn rate is almost same for all type of lines ",
"_____no_output_____"
]
],
[
[
"ax =Tele_comm_Df['Contract'].value_counts().plot(kind = 'bar',rot = 0, width = 0.3)\nax.set_ylabel('# of Customers')\nax.set_title('# of Customers VS Contract Type')",
"_____no_output_____"
],
[
"colors = ['#4D3425','#E4512B']\ncontract_churn =Tele_comm_Df.groupby(['Contract','Churn Value']).size().unstack()\n\nax = (contract_churn.T*100.0 / contract_churn.T.sum()).T.plot(kind='bar',width = 0.3,stacked = True,rot = 0, figsize = (10,6),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='best',prop={'size':14},title = 'Churn')\nax.set_ylabel('% Customers',size = 14)\nax.set_title('Churn VS Contract Type',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',\n size = 14)",
"_____no_output_____"
]
],
[
[
"#### Most of the customer prefer monthly plan and churn rate is highest for monthly plan",
"_____no_output_____"
]
],
[
[
"ax =Tele_comm_Df['Senior Citizen'].value_counts().plot(kind = 'bar',rot = 0, width = 0.3)\nax.set_ylabel('# of Customers')\nax.set_title('# of Customers VS Seniority Level') ",
"_____no_output_____"
],
[
"colors = ['#4D3425','#E4512B']\nseniority_churn = Tele_comm_Df.groupby(['Senior Citizen','Churn Value']).size().unstack()\nax = (seniority_churn.T*100.0 / seniority_churn.T.sum()).T.plot(kind='bar',width = 0.2,stacked = True,rot = 0, figsize = (8,6),color = colors)\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nax.legend(loc='center',prop={'size':14},title = 'Churn Value')\nax.set_ylabel('% Customers')\nax.set_title('Churn VS Seniority Level',size = 14)\n\n# Code to add the data labels on the stacked bar chart\nfor p in ax.patches:\n width, height = p.get_width(), p.get_height()\n x, y = p.get_xy() \n ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),\n color = 'white',\n weight = 'bold',size =14)",
"_____no_output_____"
]
],
[
[
"#### Churn rate for senior citizen is comparatively high",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecbf81a2c7e9e1e467e62cc6d29543c7c42d079f | 109,785 | ipynb | Jupyter Notebook | examples/sudire_example.ipynb | emmanueljordy/RobDimRed | db0a2cd924a7581de01f8adaebde5d5acea6b12e | [
"MIT"
] | null | null | null | examples/sudire_example.ipynb | emmanueljordy/RobDimRed | db0a2cd924a7581de01f8adaebde5d5acea6b12e | [
"MIT"
] | null | null | null | examples/sudire_example.ipynb | emmanueljordy/RobDimRed | db0a2cd924a7581de01f8adaebde5d5acea6b12e | [
"MIT"
] | null | null | null | 175.375399 | 55,568 | 0.889985 | [
[
[
"# sudire.py example notebook",
"_____no_output_____"
],
[
"The aim of this notebook is to show how to perform Sufficient Dimension Reduction using the direpack package. The data we will use is the [auto-mpg dataset](http://archive.ics.uci.edu/ml/datasets/Auto+MPG). We wil show how the dimension of the central subspace and a basis for the central subspace can be estimated using Sufficient Dimension Reduction via Ball covariance and by using a user defined function. ",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\nfrom scipy import stats\nfrom direpack.sudire.sudire import sudire,estimate_structural_dim\nfrom direpack.plot.sudire_plot import sudire_plot\nimport warnings\nfrom sklearn.model_selection import train_test_split\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"plt.rcParams[\"figure.figsize\"] = [16,13]\nplt.rcParams['figure.constrained_layout.use'] = True\nfrom matplotlib import rcParams\nrcParams.update({'figure.autolayout': True})",
"_____no_output_____"
]
],
[
[
"## Data preprocessing",
"_____no_output_____"
]
],
[
[
"auto_data = pd.read_csv('../data/auto-mpg.csv', index_col='car_name')\ndisplay(auto_data.head())\nprint('dataset shape is',auto_data.shape)\nprint(auto_data.dtypes)",
"_____no_output_____"
]
],
[
[
" Looking at the data, we see that the horsepower variable should be a numeric variable but is displayed as type object. This is because missing values are coded as '?'. We thus remove those missing values. After this step, there are no more missing values into the data. ",
"_____no_output_____"
]
],
[
[
"auto_data = auto_data[auto_data.horsepower != '?']\nauto_data.horsepower = auto_data.horsepower.astype('float')\nprint('data types \\n', auto_data.dtypes)\nprint('any missing values \\n',auto_data.isnull().any())",
"data types \n mpg float64\ncylinders int64\ndisplacement float64\nhorsepower float64\nweight int64\nacceleration float64\nmodel year int64\norigin int64\ndtype: object\nany missing values \n mpg False\ncylinders False\ndisplacement False\nhorsepower False\nweight False\nacceleration False\nmodel year False\norigin False\ndtype: bool\n"
],
[
"X = auto_data.copy()\ny = X['mpg']\nX.drop('mpg', axis=1, inplace=True)\nX.drop('origin', axis = 1, inplace = True)",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(\nX, y, test_size=0.3, random_state=42)",
"_____no_output_____"
]
],
[
[
"# Estimating a basis of the central subspace",
"_____no_output_____"
],
[
"First let us suppose that we know the dimension of the central subspace to be 2. We will then see how to estimate a basis for the central subspaces using the various options.",
"_____no_output_____"
]
],
[
[
"struct_dim = 2",
"_____no_output_____"
]
],
[
[
"# via distance covariance",
"_____no_output_____"
]
],
[
[
"dcov_auto = sudire('dcov-sdr', center_data= True, scale_data=True,n_components=struct_dim)\ndcov_auto.fit(X_train.values, y_train.values)\ndcov_auto.x_loadings_",
"_____no_output_____"
]
],
[
[
"## via Martingale Difference Divergence",
"_____no_output_____"
]
],
[
[
"mdd_auto = sudire('mdd-sdr', center_data= True, scale_data=True,n_components=struct_dim)\nmdd_auto.fit(X_train.values, y_train.values)\nmdd_auto.x_loadings_",
"_____no_output_____"
]
],
[
[
"## User defined functions",
"_____no_output_____"
],
[
"Here we show how user can optimize their own functions as is done for Distance Covariance and Martingale Difference Divergence.\nFor this example we will use Ball covariance. There is a python package : [Ball](https://pypi.org/project/Ball/) available on PyPi which computes the Ball covariance between random variables. We follow the development of the article [Robust sufficient Dimension Reduction Via Ball covariance](https://www.sciencedirect.com/science/article/pii/S0167947319301380). The process is similar to using scipy.optimize.minimize function. ",
"_____no_output_____"
]
],
[
[
"import Ball",
"_____no_output_____"
]
],
[
[
"First we define the objective function to be optimized. Here, beta is the flattened array representing the basis of the central subpace. A series of arguments can be passed to this function, including the X and y data as well as the dimension of the central subspace. ",
"_____no_output_____"
]
],
[
[
"def ballcov_func(beta, *args):\n X= args[0]\n Y= args[1]\n h=args[2]\n beta = np.reshape(beta,(-1,h),order = 'F')\n X_dat = np.matmul(X, beta)\n res = Ball.bcov_test(X_dat,Y,num_permutations=0)[0] \n return(-10*res)\n",
"_____no_output_____"
]
],
[
[
"Next we define the contraints and additional optimization arguments. both the constraints and arguments are assumed to be dicts or tuples. ",
"_____no_output_____"
]
],
[
[
"def optim_const(beta, *args):\n X= args[0]\n h= args[1]\n i = args[2]\n j = args[3]\n beta = np.reshape(beta,(-1,h),order = 'F')\n covx = np.cov(X, rowvar=False)\n ans = np.matmul(np.matmul(beta.T,covx), beta) - np.identity(h)\n return(ans[i,j])\n\nball_const= []\nfor i in range(0, struct_dim): \n for j in range(0,struct_dim): \n ball_const.append({'type': 'eq', 'fun' : optim_const,\n 'args':(X_train,struct_dim,i,j)})\n \nball_const =tuple(ball_const)\n\noptim_args = (X_train,y_train, struct_dim)",
"_____no_output_____"
],
[
"bcov_auto = sudire(ballcov_func, center_data= True, scale_data=True,n_components=struct_dim)\nbcov_auto.fit(X_train.values, y_train.values)\nbcov_auto.x_loadings_",
"_____no_output_____"
]
],
[
[
"## Estimating the dimension of the central subspace",
"_____no_output_____"
],
[
"The dimension of the central subspace can be estimated using the bootstrap method proposed in [Sufficient Dimension Reduction via Distance Covariance](https://www.tandfonline.com/doi/abs/10.1080/10618600.2015.1026601). All the implemented sdr methods can be used. Here we present the method using Directional Regression.",
"_____no_output_____"
]
],
[
[
"central_dim, diff_vec = estimate_structural_dim('dr',X_train.values, y_train.values, B=100, n_slices=4)\n\ncentral_dim",
"possible dim 1\npossible dim 2\npossible dim 3\npossible dim 4\npossible dim 5\npossible dim 6\n"
]
],
[
[
"## Plots",
"_____no_output_____"
],
[
"Once the sufficient Dimension Reduction has been done, an OLS regression is fitted using the reduced subset of variables. we can visualise the predicted response values using the plot functions from sudire_plots.",
"_____no_output_____"
]
],
[
[
"sdr_plot=sudire_plot(dcov_auto,['w','w','g','y','m'])\nsdr_plot.plot_yyp(label='mpg',title='fitted vs true mpg')",
"_____no_output_____"
]
],
[
[
"The projections of the data can also be visualised",
"_____no_output_____"
]
],
[
[
"sdr_plot=sudire_plot(dcov_auto,['w','w','g','y','m'])\nsdr_plot.plot_projections(label='mpg', title='projected data')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbfa3275c15cb763f2de190b6ccac8a9221aa1d | 10,476 | ipynb | Jupyter Notebook | automation/.ipynb_checkpoints/aula1-semPy-checkpoint.ipynb | eliasalbuquerque/Python | e2e63f1790ee576c4ef599a3d36c13a0aa7611b8 | [
"MIT"
] | null | null | null | automation/.ipynb_checkpoints/aula1-semPy-checkpoint.ipynb | eliasalbuquerque/Python | e2e63f1790ee576c4ef599a3d36c13a0aa7611b8 | [
"MIT"
] | 1 | 2021-05-09T12:05:21.000Z | 2021-05-09T12:05:21.000Z | automation/.ipynb_checkpoints/aula1-semPy-checkpoint.ipynb | eliasalbuquerque/Python | e2e63f1790ee576c4ef599a3d36c13a0aa7611b8 | [
"MIT"
] | null | null | null | 31.554217 | 130 | 0.404926 | [
[
[
"# importar as bibliotecas\nimport pyautogui # faz a automacao do mouse e teclado;\nimport time # controla o tempo do nosso programa;\nimport pyperclip # ela permite a gente copiar e colar com o python;\nimport pandas as pd\n\n# passo 1: Entrar no sistema (link do Google Drive)\npyautogui.hotkey('ctrl', 't')\n\n# passo 2: Entrar na pasta da Aula 1\nlink = 'https://drive.google.com/drive/folders/1mhXZ3JPAnekXP_4vX7Z_sJj35VWqayaR'\npyperclip.copy(link) # para copiar algo no codigo;\npyautogui.hotkey('ctrl', 'v') # após abrir a nova aba, cola o link;\npyautogui.press('enter') # e dá 'enter' em seguida;\ntime.sleep(5)\npyautogui.click(300, 380, clicks = 2) # abrir a pasta da 'Aula 1'\n\n# passo 3: Fazer o dawnload da Base de Vendas\ntime.sleep(3)\npyautogui.click(317, 256)\ntime.sleep(2)\npyautogui.click(1089, 171)\ntime.sleep(2)\npyautogui.click(915, 494)\ntime.sleep(10)\n\n# passo 4: Calcular os indicadores (Faturamento e a quantidade de produtos)\ntabela = pd.read_excel(r'C:\\Users\\elias\\Downloads\\Vendas - Dez.xlsx') # o 'r' é para ler exatamente o link como está;\ndisplay(tabela)\nfaturamento = tabela['Valor Final'].sum()\nqProdutos = tabela['Quantidade'].sum()\n\n# passo 5: Entrar no meu email\n\n# passo 6: Criar o email\n# passo 7: Enviar o email",
"_____no_output_____"
],
[
"time.sleep(7)\npyautogui.position()",
"_____no_output_____"
],
[
"# passo 4: Calcular os indicadores (Faturamento e a quantidade de produtos)\ntabela = pd.read_excel(r'C:\\Users\\elias\\Downloads\\Vendas - Dez.xlsx') # o 'r' é para ler exatamente o link como está;\n# display(tabela)\nfaturamento = tabela['Valor Final'].sum()\nqProdutos = tabela['Quantidade'].sum()\n\n# passo 5: Entrar no meu email\npyautogui.hotkey('ctrl', 't')\nlink = 'https://mail.google.com/mail/u/0/#inbox'\npyperclip.copy(link)\npyautogui.hotkey('ctrl', 'v')\npyautogui.press('enter')\ntime.sleep(12)\n\n# passo 6: Criar o email\npyautogui.click(181, 20)\npyautogui.write('[email protected]')\npyautogui.press('tab')\npyautogui.press('tab')\npyautogui.write('Relatório de vendas')\n\n# passo 7: Enviar o email\n",
"_____no_output_____"
]
],
[
[
"##### ",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecbfabc84e5c693d6a12828c2679eec42f82bfe0 | 266,540 | ipynb | Jupyter Notebook | docs/source/Fitting_single_flow_curve.ipynb | marcocaggioni/rheofit | 5cd441044d1744e4a287372296cf8f6e65c87639 | [
"MIT"
] | 1 | 2019-11-15T19:49:56.000Z | 2019-11-15T19:49:56.000Z | docs/source/Fitting_single_flow_curve.ipynb | marcocaggioni/rheofit | 5cd441044d1744e4a287372296cf8f6e65c87639 | [
"MIT"
] | null | null | null | docs/source/Fitting_single_flow_curve.ipynb | marcocaggioni/rheofit | 5cd441044d1744e4a287372296cf8f6e65c87639 | [
"MIT"
] | 1 | 2019-10-21T00:54:35.000Z | 2019-10-21T00:54:35.000Z | 167.952111 | 83,864 | 0.868916 | [
[
[
"# Fitting single flow curve\n\nThis notebook contains a list scripts focusing on analysis and visualization of a single flow curve. A flow curve is the measurement of the viscosity as a function of shear rate. In a measurment a constant shear rate is imposed on a fluid sample and the required shear stress is measured. \n\nSince for Non newtonian fluids the viscosity is a function of the shear rate, the flow curve measurement is a way to characterize such functional dependence. \n\nRheological models can be used to fit the flow curve and extract parameters that in some case directly relates to fundamental material properties.\n\nIn this notebook we provide some example of analysis of flow curves using powerful python libraries that we simply integrate to make flow curves analysis simple, easy to share and reproduce.\n\nWe provide a sequence of exaples structured as \"recepies\" each with a specific scope, plotting data, fitting data, evaluating confidence itervals...\n\nEach recepy can be considered \"stand alone\" meaning that the only objects that cells share are the general imports in the first cell and the functions defined through the notebook. This should make it easy to copy and paste a cell in case you want to use it as a starting point for your own analysis.",
"_____no_output_____"
],
[
"If you have a google account you can run this documentation notebook [Open in colab](https://colab.research.google.com/github/rheopy/rheofit/blob/master/docs/source/Fitting_single_flow_curve.ipynb)",
"_____no_output_____"
]
],
[
[
"#executing the notebook in colab requires installation of the rheofit library\n!pip install git+https://github.com/marcocaggioni/rheofit.git",
"_____no_output_____"
],
[
"%matplotlib inline\n\nimport sys\nsys.path.append(\"./../\") #in case you are running the notebook in binder or from the cloned repository\nimport rheofit\nimport lmfit\nimport pybroom as pb\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport emcee\nimport corner\nimport seaborn",
"_____no_output_____"
]
],
[
[
"## Getting Example flow curve data\n\nTo make demonstration simple we stored an example flow curve in the rheodata module. In the future we would like to collect flow curves from published works to make them easily available for training purpouse. \n\nThe data provided refer to a 75w% mineral oil in 11%LAS surfactant solution. The emusion droplets size ranges from 1-10 um and the measurement was performed with a Couette geometry at 40C.\n\nThe example_emulsion method returns a Pandas dataframe with column 'Shear rate' in [1/s] and column 'Stress' in [Pa]. The choice of the name convention is from the excel exported file by the TA Trios software. A possibly better choice could be the RheoML convention 'ShearRate' and 'ShearStress'.\n\nExample of importing data from different file formats are provided in a different Notebook, here we focus on possible analysis and visualization on a single flow curve.",
"_____no_output_____"
]
],
[
[
"#using provided example data\nexample_data=rheofit.rheodata.example_emulsion()\nexample_data.head()",
"_____no_output_____"
]
],
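    [
        [
            "# Added sketch (not part of the original notebook): the RheoML-style column names mentioned above\n# can be obtained with a simple rename; this is illustrative only and the rest of the notebook keeps\n# the TA Trios names.\nexample_data.rename(columns={'Shear rate': 'ShearRate', 'Stress': 'ShearStress'}).head()",
            "_____no_output_____"
        ]
    ],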
[
[
"## Plotting flow curve data\nTo plot the data we use matplolib. We define a simple function that plot the data, set x and y scale to log and the label for x and y. The function return a matplolib figure object that can be stored in a variable.",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\n\ndef plot_data(data):\n plt.plot(data['Shear rate'],data['Stress'],'o')\n plt.yscale('log')\n plt.xscale('log')\n plt.ylabel('$\\sigma [Pa]$');\n plt.xlabel('$\\dot\\gamma [1/s]$');\n \n return plt.gcf()\n\nfig1=plot_data(data);",
"_____no_output_____"
]
],
[
[
"## Fitting flow curve data to a rheological model\nThis is the most important functionality mostly provided by the [LMFIT](https://lmfit.github.io/lmfit-py/) library. The rheofit module just provide the rheology model definition and documentation. \n\nFor example a popular model for yield stress fluids is the Hershel Bulckley model is a lmfit.Model object",
"_____no_output_____"
]
],
[
[
"rheofit.models.HB_model",
"_____no_output_____"
]
],
[
[
"The model object is based on the function rheofit.models.HB",
"_____no_output_____"
]
],
[
[
"help(rheofit.models.HB)",
"Help on function HB in module rheofit.models:\n\nHB(x, ystress=1.0, K=1.0, n=0.5)\n Hershel-Bulkley Model\n \n Note:\n \n .. math::\n \\sigma= \\sigma_y + K \\cdot \\dot\\gamma^n\n \n Args:\n ystress: yield stress [Pa]\n \n K : Consistency index [Pa s^n]\n \n n : Shear thinning index []\n \n Returns:\n stress : Shear Stress, [Pa]\n\n"
]
],
[
[
"To fit the data we use the fit method of the model object passing the **stress**, the **shear rate** and the weights",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\nres_fit",
"_____no_output_____"
]
],
[
[
"## Plotting Fit results\n\nThe fit method returns a fit result object that contains all the information on the data, the model used and the results",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\ndata=data[data['Shear rate']>0.1]\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']*0.1))\n\nkwarg={'xscale':'log','yscale':('log')}\nres_fit.plot_fit(ax_kws=kwarg,yerr=False);\nplt.figure()\nres_fit.plot_residuals(yerr=False,ax_kws={'xscale':'log'})",
"_____no_output_____"
],
[
"data=rheofit.rheodata.example_emulsion()\ndata=data[data['Shear rate']>0.1]\n\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\ndef plot_fit(res_fit, xlist=None):\n if xlist is None:\n xlist=res_fit.userkws['x']\n \n plt.plot(res_fit.userkws['x'], res_fit.data,'o',label='Data')\n plt.plot(xlist, res_fit.eval(x=xlist),label='Best fit')\n plt.yscale('log')\n plt.xscale('log')\n plt.ylabel('$\\sigma [Pa]$')\n plt.xlabel('$\\dot\\gamma [1/s]$')\n\nplot_fit(res_fit);",
"_____no_output_____"
]
],
[
[
"## Explore fit results with pybroom\nExploring the lmfit.fit_result object is not intuitive, using the convenient [pybroom](https://pybroom.readthedocs.io/en/stable/) library make things very easy, especially when we explore lists of results from different sample. ",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\npb.glance(res_fit)",
"_____no_output_____"
],
[
"data=rheofit.rheodata.example_emulsion()\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\npb.tidy(res_fit)",
"_____no_output_____"
],
[
"data=rheofit.rheodata.example_emulsion()\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\naugmented=pb.augment(res_fit)\naugmented.head()",
"_____no_output_____"
]
],
[
[
"## Plot residuals",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\n\ndef plot_residuals(res_fit):\n augmented=pb.augment(res_fit)\n plt.plot('x','residual',data=augmented)\n plt.xscale('log')\n plt.ylabel('residual - relative deviation');\n plt.xlabel('$\\dot\\gamma [1/s]$');\n \n return plt.gcf()\n\nplot_residuals(res_fit);",
"_____no_output_____"
]
],
[
[
"## Confidence interval",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\ndata=data[data['Shear rate']>0.1]\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\ndef plot_confidence(res_fit,expand=10):\n dely = res_fit.eval_uncertainty(x=res_fit.userkws['x'],sigma=3)*expand\n \n plt.plot(res_fit.userkws['x'], res_fit.data,'o',color='black',label='Data',markersize=5)\n plt.plot(res_fit.userkws['x'], res_fit.best_fit,label='Best fit TC model',color='red')\n plt.fill_between(res_fit.userkws['x'], res_fit.best_fit-dely,res_fit.best_fit+dely,\n color='blue',alpha=0.2,label='0.9973 Confidence interval')\n plt.yscale('log')\n plt.xscale('log')\n plt.ylabel('$\\sigma [Pa]$')\n plt.xlabel('$\\dot\\gamma [1/s]$')\n \n return plt.gcf()\n\nplot_confidence(res_fit);",
"_____no_output_____"
]
],
[
[
"## Fit specific shear rate range",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\ndata=data[data['Shear rate']>0.1]\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\nmin_shear_rate=0.1\nmax_shear_rate=1000\n\ndef fit_range(res_fit,min_shear_rate=0.1,max_shear_rate=100):\n data=pd.DataFrame.from_dict({'Shear rate':res_fit.userkws['x'],'Stress':res_fit.data})\n mask=(data['Shear rate']>=min_shear_rate) & (data['Shear rate']<=max_shear_rate)\n res_fit=model.fit(data[mask]['Stress'],x=data[mask]['Shear rate'],weights=1/(data[mask]['Stress']))\n \n return res_fit\n\nplot_fit(res_fit);\nplot_fit(fit_range(res_fit));",
"_____no_output_____"
],
[
"data=rheofit.rheodata.example_emulsion()\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\nmin_shear_rate=0.1\nmax_shear_rate=1000\n\ndef fit_range(res_fit,min_shear_rate=0.1,max_shear_rate=100):\n data=pd.DataFrame.from_dict({'Shear rate':res_fit.userkws['x'],'Stress':res_fit.data})\n mask=(data['Shear rate']>min_shear_rate) & (data['Shear rate']<max_shear_rate)\n res_fit=model.fit(data[mask]['Stress'],x=data[mask]['Shear rate'],weights=1/(data[mask]['Stress']))\n \n return res_fit\n\ndisplay(pb.tidy(fit_range(res_fit,min_shear_rate=0.1,max_shear_rate=10)))\ndisplay(pb.tidy(fit_range(res_fit,min_shear_rate=0.1,max_shear_rate=100)))\ndisplay(pb.tidy(fit_range(res_fit,min_shear_rate=0.1,max_shear_rate=1000)))",
"_____no_output_____"
]
],
[
[
"## Reduced $\\chi^2$ sensitivity on shear rate range",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\n\ndef explore_redchi(res_fit,min_shear_rate=0.01,max_shear_rate=1000):\n data=pd.DataFrame.from_dict({'Shear rate':res_fit.userkws['x'],'Stress':res_fit.data})\n res_dict={min_shear:fit_range(res_fit,min_shear_rate=min_shear,max_shear_rate=1000) \n for min_shear in data['Shear rate'][10:-1]}\n \n ax1= plt.subplot(2,1,1)\n \n plt.plot(data['Shear rate'],data['Stress'],'o')\n plt.yscale('log')\n plt.xscale('log')\n plt.xlabel('$\\dot\\gamma [1/s]$')\n plt.ylabel('$\\sigma [Pa]$')\n\n ax2= plt.subplot(2,1,2,sharex = ax1)\n \n plt.plot(list(res_dict.keys()),pb.glance(res_dict)['redchi'])\n plt.xscale('log')\n plt.ylabel('$\\chi_{red}$')\n plt.xlabel('$\\dot\\gamma_{min} [1/s]$')\n \n return plt.gcf()\n \nexplore_redchi(res_fit);",
"_____no_output_____"
]
],
[
[
"## Model parameter sensitivity on shear rate range",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\ndata.sort_values('Shear rate',ascending=False, inplace=True)\n\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\n\ndef explore_param_keephigh(res_fit,min_shear_rate=0.01,max_shear_rate=1000,param_name=None):\n '''keep the right range of data for the fit\n answer the question: how important it is how low we extend the analysis at low shear?\n '''\n data=pd.DataFrame.from_dict({'Shear rate':res_fit.userkws['x'],'Stress':res_fit.data})\n data.sort_values('Shear rate',ascending=False, inplace=True)\n res_fit=res_fit.model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n res_fit=fit_range(res_fit,min_shear_rate=min_shear_rate,max_shear_rate=max_shear_rate)\n res_dict={min_shear:fit_range(res_fit,min_shear_rate=min_shear,max_shear_rate=max(res_fit.userkws['x'])) \n for min_shear in res_fit.userkws['x'][5:-1]}\n \n ax1= plt.subplot(2,1,1)\n \n plt.plot(data['Shear rate'],data['Stress'],'o')\n plt.yscale('log')\n plt.xscale('log')\n plt.xlabel('$\\dot\\gamma [1/s]$')\n plt.ylabel('$\\sigma [Pa]$')\n \n xlim=plt.gca().get_xlim()\n plt.axvspan(xlim[0],min_shear_rate,color='gray',alpha=0.5)\n plt.axvspan(max_shear_rate,xlim[1],color='gray',alpha=0.5)\n \n ax2= plt.subplot(2,1,2,sharex = ax1)\n \n plt.plot(list(res_dict.keys()),pb.tidy(res_dict)['value'][pb.tidy(res_dict)['name']=='HB_n'])\n plt.xscale('log')\n plt.ylabel('$'+param_name+'$')\n plt.xlabel('$\\dot\\gamma_{min} [1/s]$')\n \n plt.axvspan(xlim[0],min_shear_rate,color='gray',alpha=0.5)\n plt.axvspan(max_shear_rate,xlim[1],color='gray',alpha=0.5)\n\n plt.gca().set_xlim(xlim)\n \n return plt.gcf()\n \nexplore_param_keephigh(res_fit,min_shear_rate=0.1,param_name='n');",
"_____no_output_____"
],
[
"data=rheofit.rheodata.example_emulsion()\ndata.sort_values('Shear rate',ascending=False, inplace=True)\n\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n\n\ndef explore_param_keeplow(res_fit,min_shear_rate=0.01,max_shear_rate=1001,param_name=None):\n '''keep the right range of data for the fit\n answer the question: how important it is how low we extend the analysis at low shear?\n '''\n \n data=pd.DataFrame.from_dict({'Shear rate':res_fit.userkws['x'],'Stress':res_fit.data})\n data.sort_values('Shear rate',ascending=True, inplace=True)\n res_fit=res_fit.model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']))\n res_fit=fit_range(res_fit,min_shear_rate=min_shear_rate,max_shear_rate=max_shear_rate)\n res_dict={max_shear:fit_range(res_fit,min_shear_rate=min(res_fit.userkws['x']),max_shear_rate=max_shear) \n for max_shear in res_fit.userkws['x'][5:-1]}\n \n ax1= plt.subplot(2,1,1)\n \n plt.plot(data['Shear rate'],data['Stress'],'o')\n plt.yscale('log')\n plt.xscale('log')\n plt.xlabel('$\\dot\\gamma [1/s]$')\n plt.ylabel('$\\sigma [Pa]$')\n \n xlim=plt.gca().get_xlim()\n plt.axvspan(xlim[0],min_shear_rate,color='gray',alpha=0.5)\n plt.axvspan(max_shear_rate,xlim[1],color='gray',alpha=0.5)\n \n ax2= plt.subplot(2,1,2,sharex = ax1)\n \n plt.plot(list(res_dict.keys()),pb.tidy(res_dict)['value'][pb.tidy(res_dict)['name']=='HB_n'])\n plt.xscale('log')\n plt.ylabel('$'+param_name+'$')\n plt.xlabel('$\\dot\\gamma_{max} [1/s]$')\n \n plt.axvspan(xlim[0],min_shear_rate,color='gray',alpha=0.5)\n plt.axvspan(max_shear_rate,xlim[1],color='gray',alpha=0.5)\n\n plt.gca().set_xlim(xlim)\n \n return plt.gcf()\n \nexplore_param_keeplow(res_fit,min_shear_rate=0.1,param_name='n');",
"_____no_output_____"
]
],
[
[
"## Emcee",
"_____no_output_____"
]
],
[
[
"data=rheofit.rheodata.example_emulsion()\ndata=data[data['Shear rate']>0.1]\nmodel=rheofit.models.HB_model\nres_fit=model.fit(data['Stress'],x=data['Shear rate'],weights=1/(data['Stress']*0.1))\nres_emcee=res_fit.emcee(steps=1000, nwalkers=50, burn=300)\n\ncorner.corner(res_emcee.flatchain, labels=res_emcee.var_names, truths=[res_fit.params[key].value for key in res_fit.params.keys()]);\ndisplay(res_fit)",
"/usr/local/lib/python3.6/dist-packages/emcee/ensemble.py:258: RuntimeWarning: Initial state is not linearly independent and it will not allow a full exploration of parameter space\n category=RuntimeWarning,\n100%|██████████| 1000/1000 [01:23<00:00, 12.12it/s]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecbfac4e1f976f90226c30fb0a275e2dad288884 | 32,527 | ipynb | Jupyter Notebook | solutions/game_of_ur_soln.ipynb | chwebster/ThinkBayes2 | 49af0e36c38c2656d7b91117cfa2b019ead81988 | [
"MIT"
] | 1,337 | 2015-01-06T06:23:55.000Z | 2022-03-31T21:06:21.000Z | solutions/game_of_ur_soln.ipynb | chwebster/ThinkBayes2 | 49af0e36c38c2656d7b91117cfa2b019ead81988 | [
"MIT"
] | 43 | 2015-04-23T13:14:15.000Z | 2022-01-04T12:55:59.000Z | solutions/game_of_ur_soln.ipynb | chwebster/ThinkBayes2 | 49af0e36c38c2656d7b91117cfa2b019ead81988 | [
"MIT"
] | 1,497 | 2015-01-13T22:05:32.000Z | 2022-03-30T09:19:53.000Z | 81.521303 | 12,292 | 0.83878 | [
[
[
"# Think Bayes\n\nThis notebook presents code and exercises from Think Bayes, second edition.\n\nCopyright 2018 Allen B. Downey\n\nMIT License: https://opensource.org/licenses/MIT",
"_____no_output_____"
]
],
[
[
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\nfrom thinkbayes2 import Pmf, Cdf, Suite\nimport thinkplot",
"_____no_output_____"
]
],
[
[
"### The Game of Ur problem\n\nIn the Royal Game of Ur, players advance tokens along a track with 14 spaces. To determine how many spaces to advance, a player rolls 4 dice with 4 sides. Two corners on each die are marked; the other two are not. The total number of marked corners -- which is 0, 1, 2, 3, or 4 -- is the number of spaces to advance.\n\nFor example, if the total on your first roll is 2, you could advance a token to space 2. If you roll a 3 on the next roll, you could advance the same token to space 5.\n\nSuppose you have a token on space 13. How many rolls did it take to get there?\n\nHint: you might want to start by computing the distribution of k given n, where k is the number of the space and n is the number of rolls.\n\nThen think about the prior distribution of n.",
"_____no_output_____"
],
[
"Here's a Pmf that represents one of the 4-sided dice.",
"_____no_output_____"
]
],
[
[
"die = Pmf([0, 1])",
"_____no_output_____"
]
],
[
[
"And here's the outcome of a single roll.",
"_____no_output_____"
]
],
[
[
"roll = sum([die]*4)",
"_____no_output_____"
]
],
[
[
"I'll start with a simulation, which helps in two ways: it makes modeling assumptions explicit and it provides an estimate of the answer.\n\nThe following function simulates playing the game over and over; after every roll, it yields the number of rolls and the total so far. When it gets past the 14th space, it starts over.",
"_____no_output_____"
]
],
[
[
"def roll_until(iters):\n \"\"\"Generates observations of the game.\n \n iters: number of observations\n \n yields: number of rolls, total\n \"\"\"\n for i in range(iters):\n total = 0\n for n in range(1, 1000):\n total += roll.Random()\n if total > 14:\n break\n yield(n, total)",
"_____no_output_____"
]
],
[
[
"Now I'll the simulation many times and, every time the token is observed on space 13, record the number of rolls it took to get there.",
"_____no_output_____"
]
],
[
[
"pmf_sim = Pmf()\nfor n, k in roll_until(1000000):\n if k == 13:\n pmf_sim[n] += 1",
"_____no_output_____"
]
],
[
[
"Here's the distribution of the number of rolls:",
"_____no_output_____"
]
],
[
[
"pmf_sim.Normalize()",
"_____no_output_____"
],
[
"pmf_sim.Print()",
"4 0.017022500294558752\n5 0.14797034043003582\n6 0.29740049405989827\n7 0.27880235407359777\n8 0.16182957929022326\n9 0.06743897641333282\n10 0.021953114234876156\n11 0.005827270748418868\n12 0.0014438371319762996\n13 0.00025362007712446754\n14 4.7928203551080484e-05\n15 5.9910254438850605e-06\n16 1.99700848129502e-06\n17 1.99700848129502e-06\n"
],
[
"thinkplot.Hist(pmf_sim, label='Simulation')\nthinkplot.decorate(xlabel='Number of rolls to get to space 13',\n ylabel='PMF')",
"_____no_output_____"
]
],
[
[
"### Bayes\n\nNow let's think about a Bayesian solution. It is straight forward to compute the likelihood function, which is the probability of being on space 13 after a hypothetical `n` rolls.\n\n`pmf_n` is the distribution of spaces after `n` rolls.\n\n`pmf_13` is the probability of being on space 13 after `n` rolls.",
"_____no_output_____"
]
],
[
[
"pmf_13 = Pmf()\nfor n in range(4, 15):\n pmf_n = sum([roll]*n)\n pmf_13[n] = pmf_n[13]\n \npmf_13.Print()\npmf_13.Total()",
"4 0.008544921875\n5 0.0739288330078125\n6 0.14878177642822266\n7 0.13948291540145874\n8 0.08087921887636185\n9 0.033626414369791746\n10 0.010944152454612777\n11 0.002951056014353526\n12 0.0006854188303009323\n13 0.00014100133496341982\n14 2.6227807875534026e-05\n"
]
],
[
[
"The total probability of the data is very close to 1/2, but it's not obvious (to me) why.\n\nNevertheless, `pmf_13` is the probability of the data for each hypothetical values of `n`, so it is the likelihood function.\n\n### The prior\n\nNow we need to think about a prior distribution on the number of rolls. This is not easy to reason about, so let's start by assuming that it is uniform, and see where that gets us.\n\nIf the prior is uniform, the posterior equals the likelihood function, normalized.",
"_____no_output_____"
]
],
[
[
"posterior = pmf_13.Copy()\nposterior.Normalize()\nposterior.Print()",
"4 0.017090119365747274\n5 0.1478600505840099\n6 0.2975683518003199\n7 0.2789703298127999\n8 0.16176104650522904\n9 0.0672539133567936\n10 0.021888657911956433\n11 0.005902207214774348\n12 0.0013708597687294604\n13 0.00028200721791321923\n14 5.245646172683854e-05\n"
]
],
[
[
"That sure looks similar to what we got by simulation. Let's compare them.",
"_____no_output_____"
]
],
[
[
"thinkplot.Hist(pmf_sim, label='Simulation')\nthinkplot.Pmf(posterior, color='orange', label='Normalized likelihoods')\nthinkplot.decorate(xlabel='Number of rolls (n)',\n ylabel='PMF')",
"_____no_output_____"
]
],
[
[
"Since the posterior distribution based on a uniform prior matches the simulation, it seems like the uniform prior must be correct. But it is not obvious (to me) why.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecbfdad6700f3e094149646753a4727556fcbefc | 181,249 | ipynb | Jupyter Notebook | Lecture/Week4_NumPy.ipynb | sudipkrk/WQU_DSM | 720c3e97989985be705559ce54a910f5dd2b6ae4 | [
"MIT"
] | null | null | null | Lecture/Week4_NumPy.ipynb | sudipkrk/WQU_DSM | 720c3e97989985be705559ce54a910f5dd2b6ae4 | [
"MIT"
] | null | null | null | Lecture/Week4_NumPy.ipynb | sudipkrk/WQU_DSM | 720c3e97989985be705559ce54a910f5dd2b6ae4 | [
"MIT"
] | null | null | null | 80.734521 | 65,528 | 0.7902 | [
[
[
"## Basic data tools: NumPy, Matplotlib, Pandas\n\nPython is a powerful and flexible programming language, but it doesn't have built-in tools for mathematical analysis or data visualization. For most data analysis we will rely on some helpful libraries. We'll explore three libraries that are very common for data analysis and visualization.",
"_____no_output_____"
],
[
"## NumPy\n\nFirst among these is NumPy. The main NumPy features are three-fold: its mathematical functions (e.g. `sin`, `log`, `floor`), its `random` submodule (useful for random sampling), and the NumPy `ndarray` object.\n\nA NumPy array is similar to a mathematical n-dimensional matrix. For example, \n\n$$\\begin{bmatrix}\n x_{11} & x_{12} & x_{13} & \\dots & x_{1n} \\\\\n x_{21} & x_{22} & x_{23} & \\dots & x_{2n} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n x_{d1} & x_{d2} & x_{d3} & \\dots & x_{dn}\n\\end{bmatrix}$$\n\nA NumPy array could be 1-dimensional (e.g. [1, 5, 20, 34, ...]), 2-dimensional (as above), or many dimensions. It's important to note that all the rows and columns of the 2-dimensional array are the same length. That will be true for all dimensions of arrays.\n\nLet's contrast this with lists.",
"_____no_output_____"
]
],
[
[
"# to access NumPy, we have to import it\nimport numpy as np",
"_____no_output_____"
],
[
"list_of_lists = [[1,2,3], [4,5,6],[7,8,9]]\nprint (list_of_lists)",
"[[1, 2, 3], [4, 5, 6], [7, 8, 9]]\n"
],
[
"an_array = np.array(list_of_lists)\nprint (an_array)",
"[[1 2 3]\n [4 5 6]\n [7 8 9]]\n"
],
[
"non_rectangular = [[1,2], [3,4,5], [6,7,8,9]]\nprint (non_rectangular)",
"[[1, 2], [3, 4, 5], [6, 7, 8, 9]]\n"
],
[
"non_rectangular_array = np.array(non_rectangular)\nprint (non_rectangular_array)",
"[list([1, 2]) list([3, 4, 5]) list([6, 7, 8, 9])]\n"
]
],
[
[
"Why did these print differently? Let's investigate their _shape_ and _data type_ (`dtype`).",
"_____no_output_____"
]
],
[
[
"print(an_array.shape, an_array.dtype)\nprint(non_rectangular_array.shape, non_rectangular_array.dtype)",
"(3, 3) int32\n(3,) object\n"
]
],
[
[
"The first case, `an_array`, is a 2-dimensional 3x3 array (of integers). In contrast, `non_rectangular_array` is a 1-dimensional length 3 array (of _objects_, namely `list` objects).\n\nWe can also create a variety of arrays with NumPy's convenience functions.",
"_____no_output_____"
]
],
[
[
"np.linspace(1,10,10)",
"_____no_output_____"
],
[
"np.arange(1,10,1)",
"_____no_output_____"
],
[
"np.logspace(1,10,10)",
"_____no_output_____"
],
[
"np.zeros(10)",
"_____no_output_____"
],
[
"np.diag([1,2,3,4])",
"_____no_output_____"
],
[
"np.eye(5)",
"_____no_output_____"
]
],
[
[
"We can also convert the `dtype` of an array after creation.",
"_____no_output_____"
]
],
[
[
"print(np.logspace(1, 10, 10).dtype)\nprint(np.logspace(1, 10, 10).astype(int).dtype)",
"float64\nint32\n"
]
],
[
[
"Why does any of this matter?\n\nArrays are often more efficient in terms of code as well as computational resources for certain calculations. Computationally this efficiency comes from the fact that we pre-allocate a contiguous block of memory for the results of our computation.\n\nTo explore the advantages in code, let's try to do some math on these numbers.\n\nFirst let's simply calculate the sum of all the numbers and look at the differences in the necessary code for `list_of_lists`, `an_array`, and `non_rectangular_array`.",
"_____no_output_____"
]
],
[
[
"print(sum([sum(inner_list) for inner_list in list_of_lists]))\nprint(an_array.sum())",
"45\n45\n"
]
],
[
[
"Summing the numbers in an array is much easier than for a list of lists. We don't have to dig into a hierarchy of lists, we just use the `sum` method of the `ndarray`. Does this still work for `non_rectangular_array`?",
"_____no_output_____"
]
],
[
[
"# what happens here?\nprint(non_rectangular_array.sum())",
"[1, 2, 3, 4, 5, 6, 7, 8, 9]\n"
]
],
[
[
"Remember `non_rectangular_array` is a 1-dimensional array of `list` objects. The `sum` method tries to add them together: first list + second list + third list. Addition of lists results in _concatenation_.",
"_____no_output_____"
]
],
[
[
"# concatenate three lists\nprint([1, 2] + [3, 4, 5] + [6, 7, 8, 9])",
"[1, 2, 3, 4, 5, 6, 7, 8, 9]\n"
]
],
[
[
"The contrast becomes even more clear when we try to sum rows or columns individually.",
"_____no_output_____"
]
],
[
[
"print('Array row sums: ', an_array.sum(axis=1))\nprint('Array column sums: ', an_array.sum(axis=0))",
"Array row sums: [ 6 15 24]\nArray column sums: [12 15 18]\n"
],
[
"print('List of list row sums: ', [sum(inner_list) for inner_list in list_of_lists])\n\ndef column_sum(list_of_lists):\n running_sums = [0] * len(list_of_lists[0])\n for inner_list in list_of_lists:\n for i, number in enumerate(inner_list):\n running_sums[i] += number\n \n return running_sums\n\nprint('List of list column sums: ', column_sum(list_of_lists))",
"List of list row sums: [6, 15, 24]\nList of list column sums: [12, 15, 18]\n"
],
[
"a = np.array ([1,2,3,4,5])\nprint (a + 5) # add a scalar\nprint (a * 5) # multiply by a scalar\nprint (a / 5) # divide by a scalar (note the float!)",
"[ 6 7 8 9 10]\n[ 5 10 15 20 25]\n[0.2 0.4 0.6 0.8 1. ]\n"
],
[
"b = a + 1\nprint(a + b) # add together two arrays\nprint(a * b) # multiply two arrays (element-wise)\nprint(a / b.astype(float)) # divide two arrays (element-wise)",
"[ 3 5 7 9 11]\n[ 2 6 12 20 30]\n[0.5 0.66666667 0.75 0.8 0.83333333]\n"
]
],
[
[
"Arrays can also be used for linear algebra, acting as vectors, matrices, tensors, etc.",
"_____no_output_____"
]
],
[
[
"print (np.dot(a,b)) # inner product of two arrays\nprint (np.outer(a,b)) # outer product of two arrays",
"70\n[[ 2 3 4 5 6]\n [ 4 6 8 10 12]\n [ 6 9 12 15 18]\n [ 8 12 16 20 24]\n [10 15 20 25 30]]\n"
]
],
[
[
"Arrays have a lot to offer us in terms of representing and analyzing data, since we can easily apply mathematical functions to data sets or sections of data sets. Most of the time we won't run into any trouble using arrays, but it's good to be mindful of the restrictions around shape and datatype.\n\nThese restrictions around `shape` and `dtype` allow the `ndarray` objects to be much more performant compared to a general Python `list`. There are few reasons for this, but the main two result from the typed nature of the `ndarray`, as this allows contiguous memory storage and consistent function lookup. When a Python `list` is summed, Python needs to figure out at runtime the correct way in which to add each element of the list together. When an `ndarray` is summed, `NumPy` already knows the type of the each element (and they are consistent), thus it can sum them without checking the correct add function for each element.\n\nLets see this in action by doing some basic profiling. First we will create a list of 100000 random elements and then time the sum function.",
"_____no_output_____"
]
],
[
[
"time_list = [np.random.random() for _ in range (100000)]\ntime_arr = np.array(time_list)",
"_____no_output_____"
],
[
"%%timeit\nsum(time_list)",
"378 µs ± 13.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
],
[
"%%timeit\nnp.sum(time_arr)",
"37.2 µs ± 225 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n"
]
],
[
[
"### Universal functions\n\n`NumPy` defines a `ufunc` which allows it to efficiently run functions over arrays. Many of these functions are built in, such as `np.cos`, and implemented in highly performance compiled `C` code. These functions can perform `broadcasting` which allows them to automatically handle operations between arrays of different shapes, for example two arrays with the same shape, or an array and a scalar.",
"_____no_output_____"
],
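[
"# Added illustration (not in the original notebook): ufuncs and broadcasting.\n# np.cos is applied element-wise, and arrays with different but compatible\n# shapes are automatically broadcast against each other.\nangles = np.array([0, np.pi / 2, np.pi])\nprint(np.cos(angles))             # ufunc applied element-wise\n\nrow = np.arange(3)                # shape (3,)\ncol = np.arange(3).reshape(3, 1)  # shape (3, 1)\nprint(row + col)                  # broadcast to shape (3, 3)",
"_____no_output_____"
],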
[
"### Changing Shape\n\nOften we will want to take arrays that are one shape and transform them to a different shape more amenable to a specific operation.",
"_____no_output_____"
]
],
[
[
"mat = np.random.rand(20,10)",
"_____no_output_____"
],
[
"mat.reshape(40,5).shape",
"_____no_output_____"
],
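[
"# Added aside (not in the original notebook): one dimension can be given as -1\n# and NumPy infers it from the total number of elements\nmat.reshape(4, -1).shape",
"_____no_output_____"
],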
[
"%%expect_exception ValueError\n\nmat.reshape(30, 5)",
"UsageError: Cell magic `%%expect_exception` not found.\n"
],
[
"mat.ravel().shape",
"_____no_output_____"
],
[
"mat.transpose().shape",
"_____no_output_____"
]
],
[
[
"### Combining arrays",
"_____no_output_____"
]
],
[
[
"print (a)\nprint (b)",
"[1 2 3 4 5]\n[2 3 4 5 6]\n"
],
[
"np.hstack((a,b))",
"_____no_output_____"
],
[
"np.vstack((a,b))",
"_____no_output_____"
],
[
"np.dstack((a,b))",
"_____no_output_____"
]
],
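[
[
"As an added aside (not part of the original notebook), `np.concatenate` is the general-purpose version of the stacking helpers above: it joins arrays along an existing axis that you choose explicitly. The `stacked` array below is created just for this illustration.",
"_____no_output_____"
]
],
[
[
"stacked = np.vstack((a, b))  # shape (2, 5)\nprint(np.concatenate((a, b)))                            # like hstack for 1-D arrays\nprint(np.concatenate((stacked, stacked), axis=0).shape)  # join along rows\nprint(np.concatenate((stacked, stacked), axis=1).shape)  # join along columns",
"_____no_output_____"
]
],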
[
[
"### Basic data aggregation\n\nLet's explore some more examples of using arrays, this time using NumPy's `random` submodule to create some \"fake data\". Simulating data is useful for testing and prototyping new techniques or code, and some algorithms even require random input.",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\njan_coffee_sales = np.random.randint(25, 200, size = (4,7))\nprint (jan_coffee_sales)",
"[[127 117 39 131 96 45 127]\n [146 99 112 141 124 128 176]\n [155 174 77 26 112 182 62]\n [154 45 185 82 46 113 73]]\n"
],
[
"#mean sales\nprint ('Mean coffees sold per day in January: %d' % jan_coffee_sales.mean())",
"Mean coffees sold per day in January: 110\n"
],
[
"# mean sales for Monday \nprint ('Mean coffees sold on Monday in January: %d' % jan_coffee_sales[:, 1].mean())",
"Mean coffees sold on Monday in January: 108\n"
],
[
"# day with most sales\n# remember we count dates from 1, not 0!\nprint ('Day with highest sales was January %d' % (jan_coffee_sales.argmax() + 1))",
"Day with highest sales was January 24\n"
],
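[
"# Added aside (not in the original notebook): argmax returns an index into the\n# flattened array, and np.unravel_index converts it back to (row, column) form\nweek, day_of_week = np.unravel_index(jan_coffee_sales.argmax(), jan_coffee_sales.shape)\nprint('Best day was week %d, day %d of that week (0-indexed)' % (week, day_of_week))",
"_____no_output_____"
],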
[
"# is there a weekly periodicity?\nfrom fractions import Fraction\n\nnormalized_sales = (jan_coffee_sales - jan_coffee_sales.mean()) / abs(jan_coffee_sales - jan_coffee_sales.mean()).max()\nfrequencies = [Fraction.from_float(f).limit_denominator() for f in np.fft.fftfreq(normalized_sales.size)]\npower = np.abs(np.fft.fft(normalized_sales.ravel()))**2\nlist(zip(frequencies, power))[:len(power) // 2]",
"_____no_output_____"
]
],
[
[
"Some of the functions we used above do not exist in standard Python and are provided to us by NumPy. Additionally we see that we can use the shape of an array to help us compute statistics on a subset of our data (e.g. mean number of coffees sold on Mondays). But one of the most powerful things we can do to explore data is to simply visualize it.",
"_____no_output_____"
],
[
"## Matplotlib\n\nMatplotlib is the most popular Python plotting library. It allows us to visualize data quickly by providing a variety of types of graphs (e.g. bar, scatter, line, etc.). It also provides useful tools for arranging multiple images or image components within a figure, enabling us to build up more complex visualizations as we need to.\n\nLet's visualize some data! In the next cells, we'll generate some data. For now we'll be focusing on how the graphs are produced rather than how the data is made.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"def gen_stock_price(days, initial_price):\n # stock price grows or shrinks linearly\n # not exceeding 10% per year (heuristic)\n trend = initial_price * (np.arange(days) * .1 / 365 * np.random.rand() * np.random.choice([1, -1]) + 1)\n # noise will be about 2%\n noise = .02 * np.random.randn(len(trend)) * trend\n return trend + noise\n\ndays = 365\ninitial_prices = [80, 70, 65]\nfor price in initial_prices:\n plt.plot(np.arange(-days, 0), gen_stock_price(days, price))\nplt.title('Stock price history for last %d days' % days)\nplt.xlabel('Time (days)')\nplt.ylabel('Price (USD)')\nplt.legend(['Company A', 'Company B', 'Company C']);",
"_____no_output_____"
],
[
"from scipy.stats import linregress\n\ndef gen_football_team(n_players, mean_shoe, mean_jersey):\n shoe_sizes = np.random.normal(size=n_players, loc=mean_shoe, scale=.15 * mean_shoe)\n jersey_sizes = mean_jersey / mean_shoe * shoe_sizes + np.random.normal(size=n_players, scale=.05 * mean_jersey)\n\n return shoe_sizes, jersey_sizes\n\nshoes, jerseys = gen_football_team(16, 11, 100)\n\nfig = plt.figure(figsize=(12, 6))\nfig.suptitle('Football team equipment profile')\n\nax1 = plt.subplot(221)\nax1.hist(shoes)\nax1.set_xlabel('Shoe size')\nax1.set_ylabel('Counts')\n\nax2 = plt.subplot(223)\nax2.hist(jerseys)\nax2.set_xlabel('Chest size (cm)')\nax2.set_ylabel('Counts')\n\nax3 = plt.subplot(122)\nax3.scatter(shoes, jerseys, label='Data')\nax3.set_xlabel('Shoe size')\nax3.set_ylabel('Chest size (cm)')\n\nfit_line = linregress(shoes, jerseys)\nax3.plot(shoes, fit_line[1] + fit_line[0] * shoes, 'r', label='Line of best fit')\n\nhandles, labels = ax3.get_legend_handles_labels()\nax3.legend(handles[::-1], labels[::-1]);",
"_____no_output_____"
],
[
"def gen_hourly_temps(days):\n ndays = len(days)\n seasonality = (-15 * np.cos((np.array(days) - 30) * 2.0 * np.pi / 365)).repeat(24) + 10\n solar = -3 * np.cos(np.arange(24 * ndays) * 2.0 * np.pi / 24)\n weather = np.interp(range(len(days) * 24), range(0, 24 * len(days), 24 * 2), 3 * np.random.randn(np.ceil(float(len(days)) / 2).astype(int)))\n noise = .5 * np.random.randn(24 * len(days))\n\n return seasonality + solar + weather + noise\n\ndays = np.arange(365)\nhours = np.arange(days[0] * 24, (days[-1] + 1) * 24)\nplt.plot(hours, gen_hourly_temps(days))\nplt.title('Hourly temperatures')\nplt.xlabel('Time (hours since Jan. 1)')\nplt.ylabel('Temperature (C)');",
"_____no_output_____"
]
],
[
[
"In the examples above we've made use of the ubiquitous `plot` command, `subplot` for arranging multiple plots in one image, and `hist` for creating histograms. We've also used both the \"state machine\" (i.e. using a sequence of `plt.method` commands) and \"object-oriented\" (i.e. creating figure objects and mutating them) plotting paradigms. The Matplotlib package is very flexible and the possibilities for visualizing data are mostly limited by imagination. A great way to explore Matplotlib and other data visualization packages is by consulting their [gallery pages](https://matplotlib.org/gallery.html).",
"_____no_output_____"
],
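[
"# Added sketch (not in the original notebook): the same simple plot written in\n# the \"state machine\" style and then in the \"object-oriented\" style\nxs = np.linspace(0, 2 * np.pi, 100)\n\n# state machine: plt keeps track of the \"current\" figure and axes\nplt.plot(xs, np.sin(xs))\nplt.title('State machine style');\n\n# object-oriented: we hold explicit figure and axes objects and call their methods\nfig, ax = plt.subplots()\nax.plot(xs, np.sin(xs))\nax.set_title('Object-oriented style');",
"_____no_output_____"
],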
[
"# Pandas\n\nNumPy is useful for handling data as it lets us efficiently apply functions to whole data sets or select pieces of them. However, it can be difficult to keep track of related data that might be stored in different arrays, or the meaning of data stored in different rows or columns of the same array.\n\nFor example, in the previous section we had a 1-dimensional array for shoe sizes, and another 1-dimensional array for jersey sizes. If we wanted to look up the shoe and jersey size for a particular player, we'd have to remember his position in each array.\n\nAlternatively, we could combine the two 1-dimensional arrays to make a 2-dimensional array with `n_players` rows and two columns (one for shoe size, one for jersey size). But once we combine the data, we now have to remember which column is shoe size and which column is jersey size.\n\nThe Pandas package introduces a very powerful tool for working with data in Python: the DataFrame. A DataFrame is a table. Each column represents a different type of data (sometimes called a **field**). The columns are named, so I could have a column called `'shoe_size'` and a column called `'jersey_size'`. I don't have to remember which column is which, because I can refer to them by name. Each row represents a different **record** or **entity** (e.g. player). I can also name the rows, so instead of remembering which row in my array corresponds with Ronaldinho, I can name the row 'Ronaldinho' and look up his shoe size and jersey size by name.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nplayers = ['Ronaldinho', 'Pele', 'Lionel Messi', 'Zinedine Zidane', 'Didier Drogba', 'Ronaldo', 'Yaya Toure', \n 'Frank Rijkaard', 'Diego Maradona', 'Mohamed Aboutrika', \"Samuel Eto'o\", 'George Best', 'George Weah', \n 'Roberto Donadoni']\nshoes, jerseys = gen_football_team(len(players), 10, 100)\n\ndf = pd.DataFrame({'shoe_size': shoes, 'jersey_size': jerseys}, index = players)\n\ndf",
"_____no_output_____"
],
[
"# we can also make a dataframe using zip\n\ndf = pd.DataFrame(list(zip(shoes, jerseys)), columns = ['shoe_size', 'jersey_size'], index = players)\n\ndf",
"_____no_output_____"
]
],
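[
[
"As a quick added aside (not in the original notebook), a DataFrame carries its structure with it: the column names, the row index, and the dtype of each column can all be inspected directly.",
"_____no_output_____"
]
],
[
[
"print(df.shape)    # (number of rows, number of columns)\nprint(df.columns)  # column names\nprint(df.index)    # row labels (the player names)\nprint(df.dtypes)   # the type stored in each column",
"_____no_output_____"
]
],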
[
[
"The DataFrame has similarities to both a `dict` and a NumPy `ndarray`. For example, we can retrieve a column from the DataFrame by using its name, just like we would retrieve an item from a `dict` using its key.",
"_____no_output_____"
]
],
[
[
"print(df['shoe_size'])",
"Ronaldinho 8.547275\nPele 7.927346\nLionel Messi 11.155075\nZinedine Zidane 12.259221\nDidier Drogba 9.341725\nRonaldo 8.291408\nYaya Toure 6.312746\nFrank Rijkaard 8.528853\nDiego Maradona 9.829515\nMohamed Aboutrika 12.398685\nSamuel Eto'o 11.009701\nGeorge Best 10.457628\nGeorge Weah 9.399821\nRoberto Donadoni 9.125251\nName: shoe_size, dtype: float64\n"
]
],
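[
[
"A small added example (not from the original notebook): passing a list of column names returns a new DataFrame containing just those columns, rather than a single column (a Series).",
"_____no_output_____"
]
],
[
[
"df[['jersey_size', 'shoe_size']].head()",
"_____no_output_____"
]
],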
[
[
"And we can easily apply functions to the DataFrame, just like we would with a NumPy array.",
"_____no_output_____"
]
],
[
[
"print(np.log(df))",
" shoe_size jersey_size\nRonaldinho 2.145612 4.527219\nPele 2.070318 4.385485\nLionel Messi 2.411895 4.615494\nZinedine Zidane 2.506278 4.781008\nDidier Drogba 2.234491 4.490053\nRonaldo 2.115220 4.384308\nYaya Toure 1.842571 4.188776\nFrank Rijkaard 2.143455 4.439721\nDiego Maradona 2.285390 4.584057\nMohamed Aboutrika 2.517590 4.871789\nSamuel Eto'o 2.398777 4.669221\nGeorge Best 2.347332 4.735297\nGeorge Weah 2.240691 4.506914\nRoberto Donadoni 2.211045 4.518506\n"
],
[
"df.mean()",
"_____no_output_____"
]
],
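[
[
"As another added illustration (not in the original notebook), DataFrames also have their own aggregation methods; for example, `agg` applies several summary functions at once, column by column.",
"_____no_output_____"
]
],
[
[
"df.agg(['mean', 'min', 'max'])",
"_____no_output_____"
]
],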
[
[
"We'll explore applying functions and analyzing data in a DataFrame in more depth later on. First we need to know how to retrieve, add, and remove data from a DataFrame.\n\nWe've already seen how to retrieve a column, what about retrieving a row? The most flexible syntax is to use the DataFrame's `loc` method.",
"_____no_output_____"
]
],
[
[
"print(df.loc['Ronaldo'])",
"shoe_size 8.291408\njersey_size 80.182757\nName: Ronaldo, dtype: float64\n"
],
[
"print(df.loc[['Ronaldo', 'George Best'], 'shoe_size'])",
"Ronaldo 8.291408\nGeorge Best 10.457628\nName: shoe_size, dtype: float64\n"
],
[
"# can also select position-based slices of data\nprint(df.loc['Ronaldo':'George Best', 'shoe_size'])",
"Ronaldo 8.291408\nYaya Toure 6.312746\nFrank Rijkaard 8.528853\nDiego Maradona 9.829515\nMohamed Aboutrika 12.398685\nSamuel Eto'o 11.009701\nGeorge Best 10.457628\nName: shoe_size, dtype: float64\n"
],
[
"# for position-based indexing, we will typically use iloc\nprint(df.iloc[:5])",
" shoe_size jersey_size\nRonaldinho 8.547275 92.500983\nPele 7.927346 80.277155\nLionel Messi 11.155075 101.037748\nZinedine Zidane 12.259221 119.224510\nDidier Drogba 9.341725 89.126170\n"
],
[
"print(df.iloc[2:4, 0])",
"Lionel Messi 11.155075\nZinedine Zidane 12.259221\nName: shoe_size, dtype: float64\n"
],
[
"# to see just the top of the DataFrame, use head\ndf.head()",
"_____no_output_____"
],
[
"# of for the bottom use tail\ndf.tail()",
"_____no_output_____"
]
],
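[
[
"One more added sketch (not part of the original notebook): `loc` also accepts a boolean mask, which makes it easy to pull out only the rows that satisfy a condition. The `big_feet` mask below is made up for this illustration.",
"_____no_output_____"
]
],
[
[
"big_feet = df['shoe_size'] > 10\ndf.loc[big_feet]",
"_____no_output_____"
]
],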
[
[
"Just as with a `dict`, we can add data to our DataFrame by simply using the same syntax as we would use to retrieve data, but matching it with an assignment.",
"_____no_output_____"
]
],
[
[
"# adding a new column\ndf['position'] = np.random.choice(['goaltender', 'defense', 'midfield', 'attack'], size=len(df))\ndf.head()",
"_____no_output_____"
],
[
"# adding a new row\ndf.loc['Dylan'] = {'jersey_size': 91, 'shoe_size': 9, 'position': 'midfield'}\ndf.loc['Dylan']",
"_____no_output_____"
]
],
[
[
"To delete data, we can use the DataFrame's `drop` method.",
"_____no_output_____"
]
],
[
[
"df.drop('Dylan')",
"_____no_output_____"
],
[
"df.drop('position', axis=1)",
"_____no_output_____"
]
],
[
[
"Notice when we executed `df.drop('position', axis=1)`, there was an entry for `Dylan` even though we had just executed `df.drop('Dylan')`. We have to be careful when using `drop`; many DataFrame functions return a _copy_ of the DataFrame. In order to make the change permanent, we either need to reassign `df` to the copy returned by `df.drop()` or we have to use the keyword `inplace`.",
"_____no_output_____"
]
],
[
[
"df = df.drop('Dylan')\nprint(df)",
" shoe_size jersey_size position\nRonaldinho 8.547275 92.500983 goaltender\nPele 7.927346 80.277155 defense\nLionel Messi 11.155075 101.037748 goaltender\nZinedine Zidane 12.259221 119.224510 defense\nDidier Drogba 9.341725 89.126170 attack\nRonaldo 8.291408 80.182757 defense\nYaya Toure 6.312746 65.942012 midfield\nFrank Rijkaard 8.528853 84.751303 midfield\nDiego Maradona 9.829515 97.910820 attack\nMohamed Aboutrika 12.398685 130.554214 goaltender\nSamuel Eto'o 11.009701 106.614700 defense\nGeorge Best 10.457628 113.897279 midfield\nGeorge Weah 9.399821 90.641667 defense\nRoberto Donadoni 9.125251 91.698505 midfield\n"
],
[
"df.drop('position', axis=1, inplace=True)\nprint(df)",
" shoe_size jersey_size\nRonaldinho 8.547275 92.500983\nPele 7.927346 80.277155\nLionel Messi 11.155075 101.037748\nZinedine Zidane 12.259221 119.224510\nDidier Drogba 9.341725 89.126170\nRonaldo 8.291408 80.182757\nYaya Toure 6.312746 65.942012\nFrank Rijkaard 8.528853 84.751303\nDiego Maradona 9.829515 97.910820\nMohamed Aboutrika 12.398685 130.554214\nSamuel Eto'o 11.009701 106.614700\nGeorge Best 10.457628 113.897279\nGeorge Weah 9.399821 90.641667\nRoberto Donadoni 9.125251 91.698505\n"
]
],
[
[
"We'll explore Pandas in much more detail later in the course, since it has many powerful tools for data analysis. However, even with these tools you can already start to discover patterns in data and draw interesting conclusions",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |